Jul 6 23:11:21.896845 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Jul 6 23:11:21.896872 kernel: Linux version 6.6.95-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.43 p3) 2.43.1) #1 SMP PREEMPT Sun Jul 6 21:51:54 -00 2025
Jul 6 23:11:21.896883 kernel: KASLR enabled
Jul 6 23:11:21.896888 kernel: efi: EFI v2.7 by Ubuntu distribution of EDK II
Jul 6 23:11:21.896894 kernel: efi: SMBIOS 3.0=0x139ed0000 MEMATTR=0x1390b8118 ACPI 2.0=0x136760018 RNG=0x13676e918 MEMRESERVE=0x136b41218
Jul 6 23:11:21.896900 kernel: random: crng init done
Jul 6 23:11:21.896907 kernel: secureboot: Secure boot disabled
Jul 6 23:11:21.896912 kernel: ACPI: Early table checksum verification disabled
Jul 6 23:11:21.896918 kernel: ACPI: RSDP 0x0000000136760018 000024 (v02 BOCHS )
Jul 6 23:11:21.896926 kernel: ACPI: XSDT 0x000000013676FE98 00006C (v01 BOCHS BXPC 00000001 01000013)
Jul 6 23:11:21.896932 kernel: ACPI: FACP 0x000000013676FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Jul 6 23:11:21.896938 kernel: ACPI: DSDT 0x0000000136767518 001468 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jul 6 23:11:21.896944 kernel: ACPI: APIC 0x000000013676FC18 000108 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Jul 6 23:11:21.896950 kernel: ACPI: PPTT 0x000000013676FD98 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jul 6 23:11:21.896957 kernel: ACPI: GTDT 0x000000013676D898 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jul 6 23:11:21.896965 kernel: ACPI: MCFG 0x000000013676FF98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 6 23:11:21.896971 kernel: ACPI: SPCR 0x000000013676E818 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jul 6 23:11:21.896978 kernel: ACPI: DBG2 0x000000013676E898 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Jul 6 23:11:21.896984 kernel: ACPI: IORT 0x000000013676E418 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jul 6 23:11:21.896990 kernel: ACPI: BGRT 0x000000013676E798 000038 (v01 INTEL EDK2 00000002 01000013)
Jul 6 23:11:21.896996 kernel: ACPI: SPCR: console: pl011,mmio32,0x9000000,9600
Jul 6 23:11:21.897002 kernel: NUMA: Failed to initialise from firmware
Jul 6 23:11:21.897009 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x0000000139ffffff]
Jul 6 23:11:21.897015 kernel: NUMA: NODE_DATA [mem 0x13966d800-0x139672fff]
Jul 6 23:11:21.897021 kernel: Zone ranges:
Jul 6 23:11:21.897028 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff]
Jul 6 23:11:21.897034 kernel: DMA32 empty
Jul 6 23:11:21.897040 kernel: Normal [mem 0x0000000100000000-0x0000000139ffffff]
Jul 6 23:11:21.897046 kernel: Movable zone start for each node
Jul 6 23:11:21.897053 kernel: Early memory node ranges
Jul 6 23:11:21.897059 kernel: node 0: [mem 0x0000000040000000-0x000000013666ffff]
Jul 6 23:11:21.897065 kernel: node 0: [mem 0x0000000136670000-0x000000013667ffff]
Jul 6 23:11:21.897071 kernel: node 0: [mem 0x0000000136680000-0x000000013676ffff]
Jul 6 23:11:21.897077 kernel: node 0: [mem 0x0000000136770000-0x0000000136b3ffff]
Jul 6 23:11:21.897083 kernel: node 0: [mem 0x0000000136b40000-0x0000000139e1ffff]
Jul 6 23:11:21.897089 kernel: node 0: [mem 0x0000000139e20000-0x0000000139eaffff]
Jul 6 23:11:21.897095 kernel: node 0: [mem 0x0000000139eb0000-0x0000000139ebffff]
Jul 6 23:11:21.897103 kernel: node 0: [mem 0x0000000139ec0000-0x0000000139fdffff]
Jul 6 23:11:21.897109 kernel: node 0: [mem 0x0000000139fe0000-0x0000000139ffffff]
Jul 6 23:11:21.897115 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x0000000139ffffff]
Jul 6 23:11:21.897124 kernel: On node 0, zone Normal: 24576 pages in unavailable ranges
Jul 6 23:11:21.897131 kernel: psci: probing for conduit method from ACPI.
Jul 6 23:11:21.897137 kernel: psci: PSCIv1.1 detected in firmware.
Jul 6 23:11:21.897145 kernel: psci: Using standard PSCI v0.2 function IDs
Jul 6 23:11:21.897152 kernel: psci: Trusted OS migration not required
Jul 6 23:11:21.897158 kernel: psci: SMC Calling Convention v1.1
Jul 6 23:11:21.897180 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Jul 6 23:11:21.897187 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Jul 6 23:11:21.897193 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Jul 6 23:11:21.897200 kernel: pcpu-alloc: [0] 0 [0] 1
Jul 6 23:11:21.897206 kernel: Detected PIPT I-cache on CPU0
Jul 6 23:11:21.897213 kernel: CPU features: detected: GIC system register CPU interface
Jul 6 23:11:21.897219 kernel: CPU features: detected: Hardware dirty bit management
Jul 6 23:11:21.897229 kernel: CPU features: detected: Spectre-v4
Jul 6 23:11:21.897236 kernel: CPU features: detected: Spectre-BHB
Jul 6 23:11:21.897242 kernel: CPU features: kernel page table isolation forced ON by KASLR
Jul 6 23:11:21.897249 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Jul 6 23:11:21.897255 kernel: CPU features: detected: ARM erratum 1418040
Jul 6 23:11:21.897262 kernel: CPU features: detected: SSBS not fully self-synchronizing
Jul 6 23:11:21.897268 kernel: alternatives: applying boot alternatives
Jul 6 23:11:21.897276 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=ca8feb1f79a67c117068f051b5f829d3e40170c022cd5834bd6789cba9641479
Jul 6 23:11:21.897283 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jul 6 23:11:21.897290 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jul 6 23:11:21.897296 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jul 6 23:11:21.897304 kernel: Fallback order for Node 0: 0
Jul 6 23:11:21.897311 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1008000
Jul 6 23:11:21.897317 kernel: Policy zone: Normal
Jul 6 23:11:21.897324 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jul 6 23:11:21.897330 kernel: software IO TLB: area num 2.
Jul 6 23:11:21.897337 kernel: software IO TLB: mapped [mem 0x00000000fbfff000-0x00000000fffff000] (64MB)
Jul 6 23:11:21.897344 kernel: Memory: 3883824K/4096000K available (10368K kernel code, 2186K rwdata, 8104K rodata, 38336K init, 897K bss, 212176K reserved, 0K cma-reserved)
Jul 6 23:11:21.897351 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jul 6 23:11:21.897358 kernel: rcu: Preemptible hierarchical RCU implementation.
Jul 6 23:11:21.897365 kernel: rcu: RCU event tracing is enabled.
Jul 6 23:11:21.897371 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jul 6 23:11:21.897378 kernel: Trampoline variant of Tasks RCU enabled.
Jul 6 23:11:21.897386 kernel: Tracing variant of Tasks RCU enabled.
Jul 6 23:11:21.897393 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jul 6 23:11:21.897399 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jul 6 23:11:21.897406 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Jul 6 23:11:21.897412 kernel: GICv3: 256 SPIs implemented
Jul 6 23:11:21.897419 kernel: GICv3: 0 Extended SPIs implemented
Jul 6 23:11:21.897425 kernel: Root IRQ handler: gic_handle_irq
Jul 6 23:11:21.897431 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Jul 6 23:11:21.897438 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Jul 6 23:11:21.897444 kernel: ITS [mem 0x08080000-0x0809ffff]
Jul 6 23:11:21.897451 kernel: ITS@0x0000000008080000: allocated 8192 Devices @1000c0000 (indirect, esz 8, psz 64K, shr 1)
Jul 6 23:11:21.897459 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @1000d0000 (flat, esz 8, psz 64K, shr 1)
Jul 6 23:11:21.897466 kernel: GICv3: using LPI property table @0x00000001000e0000
Jul 6 23:11:21.897472 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000001000f0000
Jul 6 23:11:21.897479 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jul 6 23:11:21.897486 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 6 23:11:21.897492 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Jul 6 23:11:21.897499 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Jul 6 23:11:21.897505 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Jul 6 23:11:21.897532 kernel: Console: colour dummy device 80x25
Jul 6 23:11:21.897539 kernel: ACPI: Core revision 20230628
Jul 6 23:11:21.897546 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Jul 6 23:11:21.897556 kernel: pid_max: default: 32768 minimum: 301
Jul 6 23:11:21.897563 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jul 6 23:11:21.897570 kernel: landlock: Up and running.
Jul 6 23:11:21.897577 kernel: SELinux: Initializing.
Jul 6 23:11:21.897584 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 6 23:11:21.897591 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 6 23:11:21.897597 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jul 6 23:11:21.897604 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jul 6 23:11:21.897611 kernel: rcu: Hierarchical SRCU implementation.
Jul 6 23:11:21.897620 kernel: rcu: Max phase no-delay instances is 400.
Jul 6 23:11:21.897626 kernel: Platform MSI: ITS@0x8080000 domain created
Jul 6 23:11:21.897633 kernel: PCI/MSI: ITS@0x8080000 domain created
Jul 6 23:11:21.897640 kernel: Remapping and enabling EFI services.
Jul 6 23:11:21.897646 kernel: smp: Bringing up secondary CPUs ...
Jul 6 23:11:21.897653 kernel: Detected PIPT I-cache on CPU1
Jul 6 23:11:21.897660 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Jul 6 23:11:21.897667 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000100100000
Jul 6 23:11:21.897674 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 6 23:11:21.897682 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Jul 6 23:11:21.897690 kernel: smp: Brought up 1 node, 2 CPUs
Jul 6 23:11:21.897701 kernel: SMP: Total of 2 processors activated.
Jul 6 23:11:21.897709 kernel: CPU features: detected: 32-bit EL0 Support
Jul 6 23:11:21.897716 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Jul 6 23:11:21.897724 kernel: CPU features: detected: Common not Private translations
Jul 6 23:11:21.897730 kernel: CPU features: detected: CRC32 instructions
Jul 6 23:11:21.897737 kernel: CPU features: detected: Enhanced Virtualization Traps
Jul 6 23:11:21.897745 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Jul 6 23:11:21.897753 kernel: CPU features: detected: LSE atomic instructions
Jul 6 23:11:21.897760 kernel: CPU features: detected: Privileged Access Never
Jul 6 23:11:21.897767 kernel: CPU features: detected: RAS Extension Support
Jul 6 23:11:21.897774 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Jul 6 23:11:21.897781 kernel: CPU: All CPU(s) started at EL1
Jul 6 23:11:21.897788 kernel: alternatives: applying system-wide alternatives
Jul 6 23:11:21.897795 kernel: devtmpfs: initialized
Jul 6 23:11:21.897802 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jul 6 23:11:21.897811 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jul 6 23:11:21.897818 kernel: pinctrl core: initialized pinctrl subsystem
Jul 6 23:11:21.897825 kernel: SMBIOS 3.0.0 present.
Jul 6 23:11:21.897832 kernel: DMI: Hetzner vServer/KVM Virtual Machine, BIOS 20171111 11/11/2017
Jul 6 23:11:21.897839 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jul 6 23:11:21.897846 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Jul 6 23:11:21.897854 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jul 6 23:11:21.897861 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jul 6 23:11:21.897868 kernel: audit: initializing netlink subsys (disabled)
Jul 6 23:11:21.897877 kernel: audit: type=2000 audit(0.013:1): state=initialized audit_enabled=0 res=1
Jul 6 23:11:21.897884 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jul 6 23:11:21.897891 kernel: cpuidle: using governor menu
Jul 6 23:11:21.897898 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Jul 6 23:11:21.897905 kernel: ASID allocator initialised with 32768 entries
Jul 6 23:11:21.897912 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jul 6 23:11:21.897919 kernel: Serial: AMBA PL011 UART driver
Jul 6 23:11:21.897926 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Jul 6 23:11:21.897933 kernel: Modules: 0 pages in range for non-PLT usage
Jul 6 23:11:21.897942 kernel: Modules: 509264 pages in range for PLT usage
Jul 6 23:11:21.897949 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jul 6 23:11:21.897956 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Jul 6 23:11:21.897963 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Jul 6 23:11:21.897970 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Jul 6 23:11:21.897977 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jul 6 23:11:21.897985 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Jul 6 23:11:21.897992 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Jul 6 23:11:21.897998 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Jul 6 23:11:21.898007 kernel: ACPI: Added _OSI(Module Device)
Jul 6 23:11:21.898014 kernel: ACPI: Added _OSI(Processor Device)
Jul 6 23:11:21.898021 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jul 6 23:11:21.898028 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jul 6 23:11:21.898035 kernel: ACPI: Interpreter enabled
Jul 6 23:11:21.898042 kernel: ACPI: Using GIC for interrupt routing
Jul 6 23:11:21.898049 kernel: ACPI: MCFG table detected, 1 entries
Jul 6 23:11:21.898057 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Jul 6 23:11:21.898064 kernel: printk: console [ttyAMA0] enabled
Jul 6 23:11:21.898073 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jul 6 23:11:21.898289 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jul 6 23:11:21.898372 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Jul 6 23:11:21.898441 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Jul 6 23:11:21.898507 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Jul 6 23:11:21.898615 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Jul 6 23:11:21.898625 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Jul 6 23:11:21.898638 kernel: PCI host bridge to bus 0000:00
Jul 6 23:11:21.898713 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Jul 6 23:11:21.898775 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Jul 6 23:11:21.898835 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Jul 6 23:11:21.898903 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jul 6 23:11:21.899001 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Jul 6 23:11:21.899090 kernel: pci 0000:00:01.0: [1af4:1050] type 00 class 0x038000
Jul 6 23:11:21.899180 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x11289000-0x11289fff]
Jul 6 23:11:21.899253 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000600000-0x8000603fff 64bit pref]
Jul 6 23:11:21.899327 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400
Jul 6 23:11:21.899395 kernel: pci 0000:00:02.0: reg 0x10: [mem 0x11288000-0x11288fff]
Jul 6 23:11:21.899469 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400
Jul 6 23:11:21.899558 kernel: pci 0000:00:02.1: reg 0x10: [mem 0x11287000-0x11287fff]
Jul 6 23:11:21.899661 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400
Jul 6 23:11:21.899732 kernel: pci 0000:00:02.2: reg 0x10: [mem 0x11286000-0x11286fff]
Jul 6 23:11:21.899812 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400
Jul 6 23:11:21.899878 kernel: pci 0000:00:02.3: reg 0x10: [mem 0x11285000-0x11285fff]
Jul 6 23:11:21.899961 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400
Jul 6 23:11:21.900028 kernel: pci 0000:00:02.4: reg 0x10: [mem 0x11284000-0x11284fff]
Jul 6 23:11:21.900105 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400
Jul 6 23:11:21.900318 kernel: pci 0000:00:02.5: reg 0x10: [mem 0x11283000-0x11283fff]
Jul 6 23:11:21.900421 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400
Jul 6 23:11:21.900490 kernel: pci 0000:00:02.6: reg 0x10: [mem 0x11282000-0x11282fff]
Jul 6 23:11:21.900592 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400
Jul 6 23:11:21.900665 kernel: pci 0000:00:02.7: reg 0x10: [mem 0x11281000-0x11281fff]
Jul 6 23:11:21.900748 kernel: pci 0000:00:03.0: [1b36:000c] type 01 class 0x060400
Jul 6 23:11:21.900817 kernel: pci 0000:00:03.0: reg 0x10: [mem 0x11280000-0x11280fff]
Jul 6 23:11:21.900892 kernel: pci 0000:00:04.0: [1b36:0002] type 00 class 0x070002
Jul 6 23:11:21.900964 kernel: pci 0000:00:04.0: reg 0x10: [io 0x0000-0x0007]
Jul 6 23:11:21.901045 kernel: pci 0000:01:00.0: [1af4:1041] type 00 class 0x020000
Jul 6 23:11:21.901119 kernel: pci 0000:01:00.0: reg 0x14: [mem 0x11000000-0x11000fff]
Jul 6 23:11:21.901215 kernel: pci 0000:01:00.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Jul 6 23:11:21.901291 kernel: pci 0000:01:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref]
Jul 6 23:11:21.901367 kernel: pci 0000:02:00.0: [1b36:000d] type 00 class 0x0c0330
Jul 6 23:11:21.901436 kernel: pci 0000:02:00.0: reg 0x10: [mem 0x10e00000-0x10e03fff 64bit]
Jul 6 23:11:21.901897 kernel: pci 0000:03:00.0: [1af4:1043] type 00 class 0x078000
Jul 6 23:11:21.902001 kernel: pci 0000:03:00.0: reg 0x14: [mem 0x10c00000-0x10c00fff]
Jul 6 23:11:21.902073 kernel: pci 0000:03:00.0: reg 0x20: [mem 0x8000100000-0x8000103fff 64bit pref]
Jul 6 23:11:21.902205 kernel: pci 0000:04:00.0: [1af4:1045] type 00 class 0x00ff00
Jul 6 23:11:21.902298 kernel: pci 0000:04:00.0: reg 0x20: [mem 0x8000200000-0x8000203fff 64bit pref]
Jul 6 23:11:21.902379 kernel: pci 0000:05:00.0: [1af4:1044] type 00 class 0x00ff00
Jul 6 23:11:21.902450 kernel: pci 0000:05:00.0: reg 0x14: [mem 0x10800000-0x10800fff]
Jul 6 23:11:21.902548 kernel: pci 0000:05:00.0: reg 0x20: [mem 0x8000300000-0x8000303fff 64bit pref]
Jul 6 23:11:21.902632 kernel: pci 0000:06:00.0: [1af4:1048] type 00 class 0x010000
Jul 6 23:11:21.902703 kernel: pci 0000:06:00.0: reg 0x14: [mem 0x10600000-0x10600fff]
Jul 6 23:11:21.902779 kernel: pci 0000:06:00.0: reg 0x20: [mem 0x8000400000-0x8000403fff 64bit pref]
Jul 6 23:11:21.902864 kernel: pci 0000:07:00.0: [1af4:1041] type 00 class 0x020000
Jul 6 23:11:21.902933 kernel: pci 0000:07:00.0: reg 0x14: [mem 0x10400000-0x10400fff]
Jul 6 23:11:21.903003 kernel: pci 0000:07:00.0: reg 0x20: [mem 0x8000500000-0x8000503fff 64bit pref]
Jul 6 23:11:21.903071 kernel: pci 0000:07:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref]
Jul 6 23:11:21.903146 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000
Jul 6 23:11:21.903486 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 01] add_size 100000 add_align 100000
Jul 6 23:11:21.903644 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff] to [bus 01] add_size 100000 add_align 100000
Jul 6 23:11:21.903722 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000
Jul 6 23:11:21.903791 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000
Jul 6 23:11:21.903861 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x001fffff] to [bus 02] add_size 100000 add_align 100000
Jul 6 23:11:21.903933 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000
Jul 6 23:11:21.904001 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 03] add_size 100000 add_align 100000
Jul 6 23:11:21.904079 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff] to [bus 03] add_size 100000 add_align 100000
Jul 6 23:11:21.904155 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000
Jul 6 23:11:21.904247 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 04] add_size 100000 add_align 100000
Jul 6 23:11:21.904316 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000
Jul 6 23:11:21.904401 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000
Jul 6 23:11:21.904481 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 05] add_size 100000 add_align 100000
Jul 6 23:11:21.905969 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x001fffff] to [bus 05] add_size 100000 add_align 100000
Jul 6 23:11:21.906077 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000
Jul 6 23:11:21.906204 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 06] add_size 100000 add_align 100000
Jul 6 23:11:21.906277 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff] to [bus 06] add_size 100000 add_align 100000
Jul 6 23:11:21.906351 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
Jul 6 23:11:21.906419 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 07] add_size 100000 add_align 100000
Jul 6 23:11:21.906486 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff] to [bus 07] add_size 100000 add_align 100000
Jul 6 23:11:21.907632 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
Jul 6 23:11:21.907762 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 08] add_size 200000 add_align 100000
Jul 6 23:11:21.907849 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff] to [bus 08] add_size 200000 add_align 100000
Jul 6 23:11:21.907924 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
Jul 6 23:11:21.907996 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 09] add_size 200000 add_align 100000
Jul 6 23:11:21.908065 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 09] add_size 200000 add_align 100000
Jul 6 23:11:21.908135 kernel: pci 0000:00:02.0: BAR 14: assigned [mem 0x10000000-0x101fffff]
Jul 6 23:11:21.908232 kernel: pci 0000:00:02.0: BAR 15: assigned [mem 0x8000000000-0x80001fffff 64bit pref]
Jul 6 23:11:21.908309 kernel: pci 0000:00:02.1: BAR 14: assigned [mem 0x10200000-0x103fffff]
Jul 6 23:11:21.908387 kernel: pci 0000:00:02.1: BAR 15: assigned [mem 0x8000200000-0x80003fffff 64bit pref]
Jul 6 23:11:21.908461 kernel: pci 0000:00:02.2: BAR 14: assigned [mem 0x10400000-0x105fffff]
Jul 6 23:11:21.908554 kernel: pci 0000:00:02.2: BAR 15: assigned [mem 0x8000400000-0x80005fffff 64bit pref]
Jul 6 23:11:21.908645 kernel: pci 0000:00:02.3: BAR 14: assigned [mem 0x10600000-0x107fffff]
Jul 6 23:11:21.908723 kernel: pci 0000:00:02.3: BAR 15: assigned [mem 0x8000600000-0x80007fffff 64bit pref]
Jul 6 23:11:21.908801 kernel: pci 0000:00:02.4: BAR 14: assigned [mem 0x10800000-0x109fffff]
Jul 6 23:11:21.908869 kernel: pci 0000:00:02.4: BAR 15: assigned [mem 0x8000800000-0x80009fffff 64bit pref]
Jul 6 23:11:21.909034 kernel: pci 0000:00:02.5: BAR 14: assigned [mem 0x10a00000-0x10bfffff]
Jul 6 23:11:21.909126 kernel: pci 0000:00:02.5: BAR 15: assigned [mem 0x8000a00000-0x8000bfffff 64bit pref]
Jul 6 23:11:21.909259 kernel: pci 0000:00:02.6: BAR 14: assigned [mem 0x10c00000-0x10dfffff]
Jul 6 23:11:21.909431 kernel: pci 0000:00:02.6: BAR 15: assigned [mem 0x8000c00000-0x8000dfffff 64bit pref]
Jul 6 23:11:21.909645 kernel: pci 0000:00:02.7: BAR 14: assigned [mem 0x10e00000-0x10ffffff]
Jul 6 23:11:21.909761 kernel: pci 0000:00:02.7: BAR 15: assigned [mem 0x8000e00000-0x8000ffffff 64bit pref]
Jul 6 23:11:21.909853 kernel: pci 0000:00:03.0: BAR 14: assigned [mem 0x11000000-0x111fffff]
Jul 6 23:11:21.909948 kernel: pci 0000:00:03.0: BAR 15: assigned [mem 0x8001000000-0x80011fffff 64bit pref]
Jul 6 23:11:21.910050 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8001200000-0x8001203fff 64bit pref]
Jul 6 23:11:21.910125 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x11200000-0x11200fff]
Jul 6 23:11:21.910256 kernel: pci 0000:00:02.0: BAR 0: assigned [mem 0x11201000-0x11201fff]
Jul 6 23:11:21.910327 kernel: pci 0000:00:02.0: BAR 13: assigned [io 0x1000-0x1fff]
Jul 6 23:11:21.910396 kernel: pci 0000:00:02.1: BAR 0: assigned [mem 0x11202000-0x11202fff]
Jul 6 23:11:21.910482 kernel: pci 0000:00:02.1: BAR 13: assigned [io 0x2000-0x2fff]
Jul 6 23:11:21.912095 kernel: pci 0000:00:02.2: BAR 0: assigned [mem 0x11203000-0x11203fff]
Jul 6 23:11:21.912195 kernel: pci 0000:00:02.2: BAR 13: assigned [io 0x3000-0x3fff]
Jul 6 23:11:21.912269 kernel: pci 0000:00:02.3: BAR 0: assigned [mem 0x11204000-0x11204fff]
Jul 6 23:11:21.912334 kernel: pci 0000:00:02.3: BAR 13: assigned [io 0x4000-0x4fff]
Jul 6 23:11:21.912401 kernel: pci 0000:00:02.4: BAR 0: assigned [mem 0x11205000-0x11205fff]
Jul 6 23:11:21.912467 kernel: pci 0000:00:02.4: BAR 13: assigned [io 0x5000-0x5fff]
Jul 6 23:11:21.912557 kernel: pci 0000:00:02.5: BAR 0: assigned [mem 0x11206000-0x11206fff]
Jul 6 23:11:21.912630 kernel: pci 0000:00:02.5: BAR 13: assigned [io 0x6000-0x6fff]
Jul 6 23:11:21.912719 kernel: pci 0000:00:02.6: BAR 0: assigned [mem 0x11207000-0x11207fff]
Jul 6 23:11:21.912786 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x7000-0x7fff]
Jul 6 23:11:21.912853 kernel: pci 0000:00:02.7: BAR 0: assigned [mem 0x11208000-0x11208fff]
Jul 6 23:11:21.912920 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x8000-0x8fff]
Jul 6 23:11:21.912986 kernel: pci 0000:00:03.0: BAR 0: assigned [mem 0x11209000-0x11209fff]
Jul 6 23:11:21.913051 kernel: pci 0000:00:03.0: BAR 13: assigned [io 0x9000-0x9fff]
Jul 6 23:11:21.913121 kernel: pci 0000:00:04.0: BAR 0: assigned [io 0xa000-0xa007]
Jul 6 23:11:21.913254 kernel: pci 0000:01:00.0: BAR 6: assigned [mem 0x10000000-0x1007ffff pref]
Jul 6 23:11:21.913394 kernel: pci 0000:01:00.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Jul 6 23:11:21.913470 kernel: pci 0000:01:00.0: BAR 1: assigned [mem 0x10080000-0x10080fff]
Jul 6 23:11:21.913555 kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
Jul 6 23:11:21.913626 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff]
Jul 6 23:11:21.913693 kernel: pci 0000:00:02.0: bridge window [mem 0x10000000-0x101fffff]
Jul 6 23:11:21.913760 kernel: pci 0000:00:02.0: bridge window [mem 0x8000000000-0x80001fffff 64bit pref]
Jul 6 23:11:21.913853 kernel: pci 0000:02:00.0: BAR 0: assigned [mem 0x10200000-0x10203fff 64bit]
Jul 6 23:11:21.913963 kernel: pci 0000:00:02.1: PCI bridge to [bus 02]
Jul 6 23:11:21.914059 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff]
Jul 6 23:11:21.914184 kernel: pci 0000:00:02.1: bridge window [mem 0x10200000-0x103fffff]
Jul 6 23:11:21.917036 kernel: pci 0000:00:02.1: bridge window [mem 0x8000200000-0x80003fffff 64bit pref]
Jul 6 23:11:21.917194 kernel: pci 0000:03:00.0: BAR 4: assigned [mem 0x8000400000-0x8000403fff 64bit pref]
Jul 6 23:11:21.917295 kernel: pci 0000:03:00.0: BAR 1: assigned [mem 0x10400000-0x10400fff]
Jul 6 23:11:21.917373 kernel: pci 0000:00:02.2: PCI bridge to [bus 03]
Jul 6 23:11:21.917443 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff]
Jul 6 23:11:21.917535 kernel: pci 0000:00:02.2: bridge window [mem 0x10400000-0x105fffff]
Jul 6 23:11:21.917608 kernel: pci 0000:00:02.2: bridge window [mem 0x8000400000-0x80005fffff 64bit pref]
Jul 6 23:11:21.917689 kernel: pci 0000:04:00.0: BAR 4: assigned [mem 0x8000600000-0x8000603fff 64bit pref]
Jul 6 23:11:21.917775 kernel: pci 0000:00:02.3: PCI bridge to [bus 04]
Jul 6 23:11:21.917869 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff]
Jul 6 23:11:21.917966 kernel: pci 0000:00:02.3: bridge window [mem 0x10600000-0x107fffff]
Jul 6 23:11:21.918061 kernel: pci 0000:00:02.3: bridge window [mem 0x8000600000-0x80007fffff 64bit pref]
Jul 6 23:11:21.918142 kernel: pci 0000:05:00.0: BAR 4: assigned [mem 0x8000800000-0x8000803fff 64bit pref]
Jul 6 23:11:21.918265 kernel: pci 0000:05:00.0: BAR 1: assigned [mem 0x10800000-0x10800fff]
Jul 6 23:11:21.918342 kernel: pci 0000:00:02.4: PCI bridge to [bus 05]
Jul 6 23:11:21.918414 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff]
Jul 6 23:11:21.918482 kernel: pci 0000:00:02.4: bridge window [mem 0x10800000-0x109fffff]
Jul 6 23:11:21.918768 kernel: pci 0000:00:02.4: bridge window [mem 0x8000800000-0x80009fffff 64bit pref]
Jul 6 23:11:21.918935 kernel: pci 0000:06:00.0: BAR 4: assigned [mem 0x8000a00000-0x8000a03fff 64bit pref]
Jul 6 23:11:21.919014 kernel: pci 0000:06:00.0: BAR 1: assigned [mem 0x10a00000-0x10a00fff]
Jul 6 23:11:21.919087 kernel: pci 0000:00:02.5: PCI bridge to [bus 06]
Jul 6 23:11:21.919156 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff]
Jul 6 23:11:21.919248 kernel: pci 0000:00:02.5: bridge window [mem 0x10a00000-0x10bfffff]
Jul 6 23:11:21.919336 kernel: pci 0000:00:02.5: bridge window [mem 0x8000a00000-0x8000bfffff 64bit pref]
Jul 6 23:11:21.919429 kernel: pci 0000:07:00.0: BAR 6: assigned [mem 0x10c00000-0x10c7ffff pref]
Jul 6 23:11:21.919537 kernel: pci 0000:07:00.0: BAR 4: assigned [mem 0x8000c00000-0x8000c03fff 64bit pref]
Jul 6 23:11:21.919619 kernel: pci 0000:07:00.0: BAR 1: assigned [mem 0x10c80000-0x10c80fff]
Jul 6 23:11:21.919690 kernel: pci 0000:00:02.6: PCI bridge to [bus 07]
Jul 6 23:11:21.919754 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff]
Jul 6 23:11:21.919819 kernel: pci 0000:00:02.6: bridge window [mem 0x10c00000-0x10dfffff]
Jul 6 23:11:21.919887 kernel: pci 0000:00:02.6: bridge window [mem 0x8000c00000-0x8000dfffff 64bit pref]
Jul 6 23:11:21.919956 kernel: pci 0000:00:02.7: PCI bridge to [bus 08]
Jul 6 23:11:21.920024 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff]
Jul 6 23:11:21.920113 kernel: pci 0000:00:02.7: bridge window [mem 0x10e00000-0x10ffffff]
Jul 6 23:11:21.920275 kernel: pci 0000:00:02.7: bridge window [mem 0x8000e00000-0x8000ffffff 64bit pref]
Jul 6 23:11:21.920360 kernel: pci 0000:00:03.0: PCI bridge to [bus 09]
Jul 6 23:11:21.920429 kernel: pci 0000:00:03.0: bridge window [io 0x9000-0x9fff]
Jul 6 23:11:21.920498 kernel: pci 0000:00:03.0: bridge window [mem 0x11000000-0x111fffff]
Jul 6 23:11:21.922709 kernel: pci 0000:00:03.0: bridge window [mem 0x8001000000-0x80011fffff 64bit pref]
Jul 6 23:11:21.922828 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Jul 6 23:11:21.922899 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Jul 6 23:11:21.922962 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Jul 6 23:11:21.923052 kernel: pci_bus 0000:01: resource 0 [io 0x1000-0x1fff]
Jul 6 23:11:21.923123 kernel: pci_bus 0000:01: resource 1 [mem 0x10000000-0x101fffff]
Jul 6 23:11:21.923224 kernel: pci_bus 0000:01: resource 2 [mem 0x8000000000-0x80001fffff 64bit pref]
Jul 6 23:11:21.923320 kernel: pci_bus 0000:02: resource 0 [io 0x2000-0x2fff]
Jul 6 23:11:21.923406 kernel: pci_bus 0000:02: resource 1 [mem 0x10200000-0x103fffff]
Jul 6 23:11:21.923473 kernel: pci_bus 0000:02: resource 2 [mem 0x8000200000-0x80003fffff 64bit pref]
Jul 6 23:11:21.923660 kernel: pci_bus 0000:03: resource 0 [io 0x3000-0x3fff]
Jul 6 23:11:21.923749 kernel: pci_bus 0000:03: resource 1 [mem 0x10400000-0x105fffff]
Jul 6 23:11:21.923810 kernel: pci_bus 0000:03: resource 2 [mem 0x8000400000-0x80005fffff 64bit pref]
Jul 6 23:11:21.923884 kernel: pci_bus 0000:04: resource 0 [io 0x4000-0x4fff]
Jul 6 23:11:21.923950 kernel: pci_bus 0000:04: resource 1 [mem 0x10600000-0x107fffff]
Jul 6 23:11:21.924013 kernel: pci_bus 0000:04: resource 2 [mem 0x8000600000-0x80007fffff 64bit pref]
Jul 6 23:11:21.924090 kernel: pci_bus 0000:05: resource 0 [io 0x5000-0x5fff]
Jul 6 23:11:21.924170 kernel: pci_bus 0000:05: resource 1 [mem 0x10800000-0x109fffff]
Jul 6 23:11:21.924247 kernel: pci_bus 0000:05: resource 2 [mem 0x8000800000-0x80009fffff 64bit pref]
Jul 6 23:11:21.924321 kernel: pci_bus 0000:06: resource 0 [io 0x6000-0x6fff]
Jul 6 23:11:21.924385 kernel: pci_bus 0000:06: resource 1 [mem 0x10a00000-0x10bfffff]
Jul 6 23:11:21.924467 kernel: pci_bus 0000:06: resource 2 [mem 0x8000a00000-0x8000bfffff 64bit pref]
Jul 6 23:11:21.925014 kernel: pci_bus 0000:07: resource 0 [io 0x7000-0x7fff]
Jul 6 23:11:21.925102 kernel: pci_bus 0000:07: resource 1 [mem 0x10c00000-0x10dfffff]
Jul 6 23:11:21.925228 kernel: pci_bus 0000:07: resource 2 [mem 0x8000c00000-0x8000dfffff 64bit pref]
Jul 6 23:11:21.925326 kernel: pci_bus 0000:08: resource 0 [io 0x8000-0x8fff]
Jul 6 23:11:21.925421 kernel: pci_bus 0000:08: resource 1 [mem 0x10e00000-0x10ffffff]
Jul 6 23:11:21.925494 kernel: pci_bus 0000:08: resource 2 [mem 0x8000e00000-0x8000ffffff 64bit pref]
Jul 6 23:11:21.925679 kernel: pci_bus 0000:09: resource 0 [io 0x9000-0x9fff]
Jul 6 23:11:21.925759 kernel: pci_bus 0000:09: resource 1 [mem 0x11000000-0x111fffff]
Jul 6 23:11:21.925828 kernel: pci_bus 0000:09: resource 2 [mem 0x8001000000-0x80011fffff 64bit pref]
Jul 6 23:11:21.925838 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Jul 6 23:11:21.925846 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Jul 6 23:11:21.925854 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Jul 6 23:11:21.925862 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Jul 6 23:11:21.925874 kernel: iommu: Default domain type: Translated
Jul 6 23:11:21.925882 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Jul 6 23:11:21.925890 kernel: efivars: Registered efivars operations
Jul 6 23:11:21.925898 kernel: vgaarb: loaded
Jul 6 23:11:21.925906 kernel: clocksource: Switched to clocksource arch_sys_counter
Jul 6 23:11:21.925914 kernel: VFS: Disk quotas dquot_6.6.0
Jul 6 23:11:21.925923 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jul 6 23:11:21.925931 kernel: pnp: PnP ACPI init
Jul 6 23:11:21.926012 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Jul 6 23:11:21.926027 kernel: pnp: PnP ACPI: found 1 devices
Jul 6 23:11:21.926035 kernel: NET: Registered PF_INET protocol family
Jul 6 23:11:21.926045 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jul 6 23:11:21.926054 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jul 6 23:11:21.926062 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jul 6 23:11:21.926070 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jul 6 23:11:21.926077 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jul 6 23:11:21.926085 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jul 6 23:11:21.926092 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 6 23:11:21.926102 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 6 23:11:21.926110 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jul 6 23:11:21.926217 kernel: pci 0000:02:00.0: enabling device (0000 -> 0002)
Jul 6 23:11:21.926231 kernel: PCI: CLS 0 bytes, default 64
Jul 6 23:11:21.926239 kernel: kvm [1]: HYP mode not available
Jul 6 23:11:21.926247 kernel: Initialise system trusted keyrings
Jul 6 23:11:21.926255 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jul 6 23:11:21.926263 kernel: Key type asymmetric registered
Jul 6 23:11:21.926271 kernel: Asymmetric key parser 'x509' registered
Jul 6 23:11:21.926282 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jul 6 23:11:21.926290 kernel: io scheduler mq-deadline registered
Jul 6 23:11:21.926298 kernel: io scheduler kyber registered
Jul 6 23:11:21.926306 kernel: io scheduler bfq registered
Jul 6 23:11:21.926314 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37
Jul 6 23:11:21.926392 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 50
Jul 6 23:11:21.926468 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 50
Jul 6 23:11:21.926696 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Jul 6 23:11:21.926785 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 51
Jul 6 23:11:21.926857 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 51
Jul 6 23:11:21.926928 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Jul 6 23:11:21.927000 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 52
Jul 6 23:11:21.927071 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 52
Jul 6 23:11:21.927138 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Jul 6 23:11:21.927239
kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 53 Jul 6 23:11:21.927313 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 53 Jul 6 23:11:21.927382 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jul 6 23:11:21.927455 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 54 Jul 6 23:11:21.927555 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 54 Jul 6 23:11:21.927631 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jul 6 23:11:21.927725 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 55 Jul 6 23:11:21.927801 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 55 Jul 6 23:11:21.927870 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jul 6 23:11:21.927944 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 56 Jul 6 23:11:21.928017 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 56 Jul 6 23:11:21.928088 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jul 6 23:11:21.928174 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 57 Jul 6 23:11:21.928256 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 57 Jul 6 23:11:21.928325 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jul 6 23:11:21.928336 kernel: ACPI: \_SB_.PCI0.GSI3: Enabled at IRQ 38 Jul 6 23:11:21.928407 kernel: pcieport 0000:00:03.0: PME: Signaling with IRQ 58 Jul 6 23:11:21.928480 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 58 Jul 6 23:11:21.929143 kernel: pcieport 0000:00:03.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- 
IbPresDis- LLActRep+ Jul 6 23:11:21.929183 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Jul 6 23:11:21.929193 kernel: ACPI: button: Power Button [PWRB] Jul 6 23:11:21.929201 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Jul 6 23:11:21.929295 kernel: virtio-pci 0000:04:00.0: enabling device (0000 -> 0002) Jul 6 23:11:21.929378 kernel: virtio-pci 0000:07:00.0: enabling device (0000 -> 0002) Jul 6 23:11:21.929390 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jul 6 23:11:21.929398 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35 Jul 6 23:11:21.929491 kernel: serial 0000:00:04.0: enabling device (0000 -> 0001) Jul 6 23:11:21.929502 kernel: 0000:00:04.0: ttyS0 at I/O 0xa000 (irq = 45, base_baud = 115200) is a 16550A Jul 6 23:11:21.929526 kernel: thunder_xcv, ver 1.0 Jul 6 23:11:21.929538 kernel: thunder_bgx, ver 1.0 Jul 6 23:11:21.929546 kernel: nicpf, ver 1.0 Jul 6 23:11:21.929553 kernel: nicvf, ver 1.0 Jul 6 23:11:21.929648 kernel: rtc-efi rtc-efi.0: registered as rtc0 Jul 6 23:11:21.929717 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-07-06T23:11:21 UTC (1751843481) Jul 6 23:11:21.929730 kernel: hid: raw HID events driver (C) Jiri Kosina Jul 6 23:11:21.929739 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available Jul 6 23:11:21.929747 kernel: watchdog: Delayed init of the lockup detector failed: -19 Jul 6 23:11:21.929755 kernel: watchdog: Hard watchdog permanently disabled Jul 6 23:11:21.929763 kernel: NET: Registered PF_INET6 protocol family Jul 6 23:11:21.929771 kernel: Segment Routing with IPv6 Jul 6 23:11:21.929779 kernel: In-situ OAM (IOAM) with IPv6 Jul 6 23:11:21.929787 kernel: NET: Registered PF_PACKET protocol family Jul 6 23:11:21.929794 kernel: Key type dns_resolver registered Jul 6 23:11:21.929804 kernel: registered taskstats version 1 Jul 6 23:11:21.929811 kernel: Loading compiled-in X.509 certificates Jul 6 23:11:21.929819 kernel: Loaded X.509 cert 
'Kinvolk GmbH: Module signing key for 6.6.95-flatcar: b86e6d3bec2e587f2e5c37def91c4582416a83e3' Jul 6 23:11:21.929827 kernel: Key type .fscrypt registered Jul 6 23:11:21.929835 kernel: Key type fscrypt-provisioning registered Jul 6 23:11:21.929843 kernel: ima: No TPM chip found, activating TPM-bypass! Jul 6 23:11:21.929851 kernel: ima: Allocated hash algorithm: sha1 Jul 6 23:11:21.929858 kernel: ima: No architecture policies found Jul 6 23:11:21.929866 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Jul 6 23:11:21.929876 kernel: clk: Disabling unused clocks Jul 6 23:11:21.929885 kernel: Freeing unused kernel memory: 38336K Jul 6 23:11:21.929892 kernel: Run /init as init process Jul 6 23:11:21.929900 kernel: with arguments: Jul 6 23:11:21.929908 kernel: /init Jul 6 23:11:21.929915 kernel: with environment: Jul 6 23:11:21.929922 kernel: HOME=/ Jul 6 23:11:21.929929 kernel: TERM=linux Jul 6 23:11:21.929937 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jul 6 23:11:21.929948 systemd[1]: Successfully made /usr/ read-only. Jul 6 23:11:21.929960 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jul 6 23:11:21.929969 systemd[1]: Detected virtualization kvm. Jul 6 23:11:21.929977 systemd[1]: Detected architecture arm64. Jul 6 23:11:21.929985 systemd[1]: Running in initrd. Jul 6 23:11:21.929994 systemd[1]: No hostname configured, using default hostname. Jul 6 23:11:21.930003 systemd[1]: Hostname set to . Jul 6 23:11:21.930012 systemd[1]: Initializing machine ID from VM UUID. Jul 6 23:11:21.930021 systemd[1]: Queued start job for default target initrd.target. 
Jul 6 23:11:21.930030 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 6 23:11:21.930038 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 6 23:11:21.930046 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jul 6 23:11:21.930055 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jul 6 23:11:21.930064 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jul 6 23:11:21.930074 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jul 6 23:11:21.930083 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jul 6 23:11:21.930092 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jul 6 23:11:21.930101 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 6 23:11:21.930110 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jul 6 23:11:21.930118 systemd[1]: Reached target paths.target - Path Units. Jul 6 23:11:21.930126 systemd[1]: Reached target slices.target - Slice Units. Jul 6 23:11:21.930135 systemd[1]: Reached target swap.target - Swaps. Jul 6 23:11:21.930145 systemd[1]: Reached target timers.target - Timer Units. Jul 6 23:11:21.930153 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jul 6 23:11:21.930210 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jul 6 23:11:21.930220 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jul 6 23:11:21.930229 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. 
Jul 6 23:11:21.930237 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jul 6 23:11:21.930246 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jul 6 23:11:21.930255 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jul 6 23:11:21.930264 systemd[1]: Reached target sockets.target - Socket Units. Jul 6 23:11:21.930275 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jul 6 23:11:21.930283 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jul 6 23:11:21.930291 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jul 6 23:11:21.930299 systemd[1]: Starting systemd-fsck-usr.service... Jul 6 23:11:21.930309 systemd[1]: Starting systemd-journald.service - Journal Service... Jul 6 23:11:21.930318 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jul 6 23:11:21.930327 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 6 23:11:21.930335 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jul 6 23:11:21.930346 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jul 6 23:11:21.930355 systemd[1]: Finished systemd-fsck-usr.service. Jul 6 23:11:21.930364 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jul 6 23:11:21.930405 systemd-journald[236]: Collecting audit messages is disabled. Jul 6 23:11:21.930428 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 6 23:11:21.930438 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jul 6 23:11:21.930446 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... 
Jul 6 23:11:21.930455 kernel: Bridge firewalling registered Jul 6 23:11:21.930465 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jul 6 23:11:21.930474 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jul 6 23:11:21.930484 systemd-journald[236]: Journal started Jul 6 23:11:21.930503 systemd-journald[236]: Runtime Journal (/run/log/journal/025bf8325ef44b99bb80bc4e0f3c7eaa) is 8M, max 76.6M, 68.6M free. Jul 6 23:11:21.934151 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 6 23:11:21.886328 systemd-modules-load[237]: Inserted module 'overlay' Jul 6 23:11:21.914758 systemd-modules-load[237]: Inserted module 'br_netfilter' Jul 6 23:11:21.936923 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jul 6 23:11:21.940876 systemd[1]: Started systemd-journald.service - Journal Service. Jul 6 23:11:21.950265 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jul 6 23:11:21.954478 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 6 23:11:21.959571 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 6 23:11:21.966761 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jul 6 23:11:21.968725 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 6 23:11:21.970860 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 6 23:11:21.980789 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... 
Jul 6 23:11:21.988548 dracut-cmdline[271]: dracut-dracut-053 Jul 6 23:11:21.991276 dracut-cmdline[271]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=ca8feb1f79a67c117068f051b5f829d3e40170c022cd5834bd6789cba9641479 Jul 6 23:11:22.020621 systemd-resolved[278]: Positive Trust Anchors: Jul 6 23:11:22.021257 systemd-resolved[278]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 6 23:11:22.021290 systemd-resolved[278]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jul 6 23:11:22.031384 systemd-resolved[278]: Defaulting to hostname 'linux'. Jul 6 23:11:22.033151 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 6 23:11:22.033873 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 6 23:11:22.077549 kernel: SCSI subsystem initialized Jul 6 23:11:22.082569 kernel: Loading iSCSI transport class v2.0-870. Jul 6 23:11:22.094593 kernel: iscsi: registered transport (tcp) Jul 6 23:11:22.111574 kernel: iscsi: registered transport (qla4xxx) Jul 6 23:11:22.111662 kernel: QLogic iSCSI HBA Driver Jul 6 23:11:22.167061 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. 
Jul 6 23:11:22.172688 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jul 6 23:11:22.194073 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jul 6 23:11:22.194204 kernel: device-mapper: uevent: version 1.0.3 Jul 6 23:11:22.194238 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jul 6 23:11:22.249584 kernel: raid6: neonx8 gen() 15558 MB/s Jul 6 23:11:22.266581 kernel: raid6: neonx4 gen() 15602 MB/s Jul 6 23:11:22.283572 kernel: raid6: neonx2 gen() 13082 MB/s Jul 6 23:11:22.300590 kernel: raid6: neonx1 gen() 10406 MB/s Jul 6 23:11:22.317574 kernel: raid6: int64x8 gen() 6728 MB/s Jul 6 23:11:22.334569 kernel: raid6: int64x4 gen() 7211 MB/s Jul 6 23:11:22.351568 kernel: raid6: int64x2 gen() 6027 MB/s Jul 6 23:11:22.368586 kernel: raid6: int64x1 gen() 4986 MB/s Jul 6 23:11:22.368666 kernel: raid6: using algorithm neonx4 gen() 15602 MB/s Jul 6 23:11:22.385594 kernel: raid6: .... xor() 12317 MB/s, rmw enabled Jul 6 23:11:22.385684 kernel: raid6: using neon recovery algorithm Jul 6 23:11:22.390727 kernel: xor: measuring software checksum speed Jul 6 23:11:22.390789 kernel: 8regs : 21590 MB/sec Jul 6 23:11:22.390813 kernel: 32regs : 21693 MB/sec Jul 6 23:11:22.390848 kernel: arm64_neon : 27965 MB/sec Jul 6 23:11:22.391546 kernel: xor: using function: arm64_neon (27965 MB/sec) Jul 6 23:11:22.443557 kernel: Btrfs loaded, zoned=no, fsverity=no Jul 6 23:11:22.458596 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jul 6 23:11:22.464856 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 6 23:11:22.480649 systemd-udevd[458]: Using default interface naming scheme 'v255'. Jul 6 23:11:22.484764 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. 
Jul 6 23:11:22.495731 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jul 6 23:11:22.513468 dracut-pre-trigger[466]: rd.md=0: removing MD RAID activation Jul 6 23:11:22.556406 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jul 6 23:11:22.561779 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jul 6 23:11:22.614312 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 6 23:11:22.621957 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jul 6 23:11:22.643470 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jul 6 23:11:22.646674 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jul 6 23:11:22.647912 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 6 23:11:22.650397 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 6 23:11:22.655715 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jul 6 23:11:22.682567 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jul 6 23:11:22.737311 kernel: scsi host0: Virtio SCSI HBA Jul 6 23:11:22.744687 kernel: ACPI: bus type USB registered Jul 6 23:11:22.744745 kernel: usbcore: registered new interface driver usbfs Jul 6 23:11:22.744766 kernel: usbcore: registered new interface driver hub Jul 6 23:11:22.744777 kernel: usbcore: registered new device driver usb Jul 6 23:11:22.750588 kernel: scsi 0:0:0:0: CD-ROM QEMU QEMU CD-ROM 2.5+ PQ: 0 ANSI: 5 Jul 6 23:11:22.750674 kernel: scsi 0:0:0:1: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5 Jul 6 23:11:22.766437 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jul 6 23:11:22.767714 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Jul 6 23:11:22.768740 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jul 6 23:11:22.773142 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 6 23:11:22.773355 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 6 23:11:22.778097 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jul 6 23:11:22.782658 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller Jul 6 23:11:22.784549 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 1 Jul 6 23:11:22.784856 kernel: xhci_hcd 0000:02:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010 Jul 6 23:11:22.785043 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller Jul 6 23:11:22.786559 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 2 Jul 6 23:11:22.786856 kernel: xhci_hcd 0000:02:00.0: Host supports USB 3.0 SuperSpeed Jul 6 23:11:22.789607 kernel: hub 1-0:1.0: USB hub found Jul 6 23:11:22.789837 kernel: hub 1-0:1.0: 4 ports detected Jul 6 23:11:22.789936 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. Jul 6 23:11:22.790902 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 6 23:11:22.799532 kernel: hub 2-0:1.0: USB hub found Jul 6 23:11:22.803560 kernel: hub 2-0:1.0: 4 ports detected Jul 6 23:11:22.805437 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 6 23:11:22.819534 kernel: sr 0:0:0:0: Power-on or device reset occurred Jul 6 23:11:22.819870 kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 16x/50x cd/rw xa/form2 cdda tray Jul 6 23:11:22.819978 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jul 6 23:11:22.819990 kernel: sr 0:0:0:0: Attached scsi CD-ROM sr0 Jul 6 23:11:22.822770 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... 
Jul 6 23:11:22.847076 kernel: sd 0:0:0:1: Power-on or device reset occurred Jul 6 23:11:22.847337 kernel: sd 0:0:0:1: [sda] 80003072 512-byte logical blocks: (41.0 GB/38.1 GiB) Jul 6 23:11:22.845811 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 6 23:11:22.848606 kernel: sd 0:0:0:1: [sda] Write Protect is off Jul 6 23:11:22.848770 kernel: sd 0:0:0:1: [sda] Mode Sense: 63 00 00 08 Jul 6 23:11:22.848858 kernel: sd 0:0:0:1: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Jul 6 23:11:22.856992 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jul 6 23:11:22.857084 kernel: GPT:17805311 != 80003071 Jul 6 23:11:22.857104 kernel: GPT:Alternate GPT header not at the end of the disk. Jul 6 23:11:22.857123 kernel: GPT:17805311 != 80003071 Jul 6 23:11:22.857141 kernel: GPT: Use GNU Parted to correct GPT errors. Jul 6 23:11:22.857229 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jul 6 23:11:22.857977 kernel: sd 0:0:0:1: [sda] Attached SCSI disk Jul 6 23:11:22.910585 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by (udev-worker) (507) Jul 6 23:11:22.916551 kernel: BTRFS: device fsid 990dd864-0c88-4d4d-9797-49057844458a devid 1 transid 35 /dev/sda3 scanned by (udev-worker) (504) Jul 6 23:11:22.934475 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM. Jul 6 23:11:22.948413 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT. Jul 6 23:11:22.957186 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Jul 6 23:11:22.964285 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A. Jul 6 23:11:22.965036 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A. 
Jul 6 23:11:22.977856 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jul 6 23:11:22.983959 disk-uuid[577]: Primary Header is updated. Jul 6 23:11:22.983959 disk-uuid[577]: Secondary Entries is updated. Jul 6 23:11:22.983959 disk-uuid[577]: Secondary Header is updated. Jul 6 23:11:22.991549 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jul 6 23:11:23.031585 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd Jul 6 23:11:23.169084 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input1 Jul 6 23:11:23.169140 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:02:00.0-1/input0 Jul 6 23:11:23.170071 kernel: usbcore: registered new interface driver usbhid Jul 6 23:11:23.170096 kernel: usbhid: USB HID core driver Jul 6 23:11:23.275906 kernel: usb 1-2: new high-speed USB device number 3 using xhci_hcd Jul 6 23:11:23.406733 kernel: input: QEMU QEMU USB Keyboard as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-2/1-2:1.0/0003:0627:0001.0002/input/input2 Jul 6 23:11:23.460626 kernel: hid-generic 0003:0627:0001.0002: input,hidraw1: USB HID v1.11 Keyboard [QEMU QEMU USB Keyboard] on usb-0000:02:00.0-2/input0 Jul 6 23:11:24.007587 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jul 6 23:11:24.009484 disk-uuid[578]: The operation has completed successfully. Jul 6 23:11:24.070451 systemd[1]: disk-uuid.service: Deactivated successfully. Jul 6 23:11:24.070592 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jul 6 23:11:24.107930 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jul 6 23:11:24.113410 sh[592]: Success Jul 6 23:11:24.131554 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Jul 6 23:11:24.195561 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. 
Jul 6 23:11:24.204818 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jul 6 23:11:24.206559 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jul 6 23:11:24.241561 kernel: BTRFS info (device dm-0): first mount of filesystem 990dd864-0c88-4d4d-9797-49057844458a Jul 6 23:11:24.241634 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Jul 6 23:11:24.241653 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jul 6 23:11:24.241683 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jul 6 23:11:24.241703 kernel: BTRFS info (device dm-0): using free space tree Jul 6 23:11:24.250565 kernel: BTRFS info (device dm-0): enabling ssd optimizations Jul 6 23:11:24.252905 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jul 6 23:11:24.254226 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jul 6 23:11:24.259778 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jul 6 23:11:24.263701 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jul 6 23:11:24.284563 kernel: BTRFS info (device sda6): first mount of filesystem 297af9a7-3de6-47a6-b022-d94c20ff287b Jul 6 23:11:24.284632 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Jul 6 23:11:24.284644 kernel: BTRFS info (device sda6): using free space tree Jul 6 23:11:24.288544 kernel: BTRFS info (device sda6): enabling ssd optimizations Jul 6 23:11:24.288605 kernel: BTRFS info (device sda6): auto enabling async discard Jul 6 23:11:24.294561 kernel: BTRFS info (device sda6): last unmount of filesystem 297af9a7-3de6-47a6-b022-d94c20ff287b Jul 6 23:11:24.297006 systemd[1]: Finished ignition-setup.service - Ignition (setup). 
Jul 6 23:11:24.302774 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jul 6 23:11:24.405350 ignition[682]: Ignition 2.20.0 Jul 6 23:11:24.405364 ignition[682]: Stage: fetch-offline Jul 6 23:11:24.407645 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jul 6 23:11:24.405400 ignition[682]: no configs at "/usr/lib/ignition/base.d" Jul 6 23:11:24.405408 ignition[682]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jul 6 23:11:24.405613 ignition[682]: parsed url from cmdline: "" Jul 6 23:11:24.405616 ignition[682]: no config URL provided Jul 6 23:11:24.405620 ignition[682]: reading system config file "/usr/lib/ignition/user.ign" Jul 6 23:11:24.405628 ignition[682]: no config at "/usr/lib/ignition/user.ign" Jul 6 23:11:24.405633 ignition[682]: failed to fetch config: resource requires networking Jul 6 23:11:24.405833 ignition[682]: Ignition finished successfully Jul 6 23:11:24.415585 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 6 23:11:24.422856 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jul 6 23:11:24.456230 systemd-networkd[776]: lo: Link UP Jul 6 23:11:24.456241 systemd-networkd[776]: lo: Gained carrier Jul 6 23:11:24.458063 systemd-networkd[776]: Enumeration completed Jul 6 23:11:24.458485 systemd-networkd[776]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 6 23:11:24.458490 systemd-networkd[776]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 6 23:11:24.458774 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 6 23:11:24.459117 systemd-networkd[776]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Jul 6 23:11:24.459121 systemd-networkd[776]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 6 23:11:24.459816 systemd-networkd[776]: eth0: Link UP Jul 6 23:11:24.459819 systemd-networkd[776]: eth0: Gained carrier Jul 6 23:11:24.459826 systemd-networkd[776]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 6 23:11:24.460826 systemd[1]: Reached target network.target - Network. Jul 6 23:11:24.461926 systemd-networkd[776]: eth1: Link UP Jul 6 23:11:24.461930 systemd-networkd[776]: eth1: Gained carrier Jul 6 23:11:24.461937 systemd-networkd[776]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 6 23:11:24.471113 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Jul 6 23:11:24.485219 ignition[779]: Ignition 2.20.0 Jul 6 23:11:24.485232 ignition[779]: Stage: fetch Jul 6 23:11:24.485418 ignition[779]: no configs at "/usr/lib/ignition/base.d" Jul 6 23:11:24.485428 ignition[779]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jul 6 23:11:24.485535 ignition[779]: parsed url from cmdline: "" Jul 6 23:11:24.485539 ignition[779]: no config URL provided Jul 6 23:11:24.485544 ignition[779]: reading system config file "/usr/lib/ignition/user.ign" Jul 6 23:11:24.485552 ignition[779]: no config at "/usr/lib/ignition/user.ign" Jul 6 23:11:24.485638 ignition[779]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #1 Jul 6 23:11:24.486526 ignition[779]: GET error: Get "http://169.254.169.254/hetzner/v1/userdata": dial tcp 169.254.169.254:80: connect: network is unreachable Jul 6 23:11:24.494643 systemd-networkd[776]: eth1: DHCPv4 address 10.0.0.3/32, gateway 10.0.0.1 acquired from 10.0.0.1 Jul 6 23:11:24.520634 systemd-networkd[776]: eth0: DHCPv4 address 49.13.31.190/32, gateway 172.31.1.1 acquired from 172.31.1.1 Jul 6 23:11:24.687609 ignition[779]: GET 
http://169.254.169.254/hetzner/v1/userdata: attempt #2 Jul 6 23:11:24.693606 ignition[779]: GET result: OK Jul 6 23:11:24.693735 ignition[779]: parsing config with SHA512: 6c3d457b8c296c323c8635cf2f6817ab9ef83ac0f1314089a194d4f0008923fd3c63559e850aaba457694e3c29ee0eb154f3632cf6d7a55f172a5d9d0698a5a1 Jul 6 23:11:24.700084 unknown[779]: fetched base config from "system" Jul 6 23:11:24.702066 ignition[779]: fetch: fetch complete Jul 6 23:11:24.700097 unknown[779]: fetched base config from "system" Jul 6 23:11:24.702079 ignition[779]: fetch: fetch passed Jul 6 23:11:24.700103 unknown[779]: fetched user config from "hetzner" Jul 6 23:11:24.702221 ignition[779]: Ignition finished successfully Jul 6 23:11:24.706170 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jul 6 23:11:24.712805 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jul 6 23:11:24.731714 ignition[786]: Ignition 2.20.0 Jul 6 23:11:24.731730 ignition[786]: Stage: kargs Jul 6 23:11:24.732008 ignition[786]: no configs at "/usr/lib/ignition/base.d" Jul 6 23:11:24.732037 ignition[786]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jul 6 23:11:24.734177 ignition[786]: kargs: kargs passed Jul 6 23:11:24.734293 ignition[786]: Ignition finished successfully Jul 6 23:11:24.736331 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jul 6 23:11:24.744873 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jul 6 23:11:24.758430 ignition[794]: Ignition 2.20.0 Jul 6 23:11:24.758445 ignition[794]: Stage: disks Jul 6 23:11:24.758875 ignition[794]: no configs at "/usr/lib/ignition/base.d" Jul 6 23:11:24.758897 ignition[794]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jul 6 23:11:24.760434 ignition[794]: disks: disks passed Jul 6 23:11:24.760502 ignition[794]: Ignition finished successfully Jul 6 23:11:24.762485 systemd[1]: Finished ignition-disks.service - Ignition (disks). 
Jul 6 23:11:24.763346 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jul 6 23:11:24.764231 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jul 6 23:11:24.766452 systemd[1]: Reached target local-fs.target - Local File Systems. Jul 6 23:11:24.767190 systemd[1]: Reached target sysinit.target - System Initialization. Jul 6 23:11:24.768646 systemd[1]: Reached target basic.target - Basic System. Jul 6 23:11:24.773738 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jul 6 23:11:24.794019 systemd-fsck[803]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks Jul 6 23:11:24.799136 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jul 6 23:11:24.805032 systemd[1]: Mounting sysroot.mount - /sysroot... Jul 6 23:11:24.858553 kernel: EXT4-fs (sda9): mounted filesystem efd38a90-a3d5-48a9-85e4-1ea6162daba0 r/w with ordered data mode. Quota mode: none. Jul 6 23:11:24.859309 systemd[1]: Mounted sysroot.mount - /sysroot. Jul 6 23:11:24.860390 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jul 6 23:11:24.869705 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jul 6 23:11:24.873260 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jul 6 23:11:24.883541 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by mount (811) Jul 6 23:11:24.884748 kernel: BTRFS info (device sda6): first mount of filesystem 297af9a7-3de6-47a6-b022-d94c20ff287b Jul 6 23:11:24.884780 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Jul 6 23:11:24.886667 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Jul 6 23:11:24.887322 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). 
Jul 6 23:11:24.890281 kernel: BTRFS info (device sda6): using free space tree Jul 6 23:11:24.887358 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jul 6 23:11:24.892675 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jul 6 23:11:24.901323 kernel: BTRFS info (device sda6): enabling ssd optimizations Jul 6 23:11:24.901378 kernel: BTRFS info (device sda6): auto enabling async discard Jul 6 23:11:24.901576 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jul 6 23:11:24.908092 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jul 6 23:11:24.946722 coreos-metadata[813]: Jul 06 23:11:24.946 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/hostname: Attempt #1 Jul 6 23:11:24.948902 coreos-metadata[813]: Jul 06 23:11:24.948 INFO Fetch successful Jul 6 23:11:24.951232 initrd-setup-root[839]: cut: /sysroot/etc/passwd: No such file or directory Jul 6 23:11:24.952111 coreos-metadata[813]: Jul 06 23:11:24.952 INFO wrote hostname ci-4230-2-1-3-0a35d13a56 to /sysroot/etc/hostname Jul 6 23:11:24.953897 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jul 6 23:11:24.960639 initrd-setup-root[847]: cut: /sysroot/etc/group: No such file or directory Jul 6 23:11:24.964493 initrd-setup-root[854]: cut: /sysroot/etc/shadow: No such file or directory Jul 6 23:11:24.969354 initrd-setup-root[861]: cut: /sysroot/etc/gshadow: No such file or directory Jul 6 23:11:25.070496 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jul 6 23:11:25.075677 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jul 6 23:11:25.078737 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... 
Jul 6 23:11:25.087540 kernel: BTRFS info (device sda6): last unmount of filesystem 297af9a7-3de6-47a6-b022-d94c20ff287b Jul 6 23:11:25.108929 ignition[929]: INFO : Ignition 2.20.0 Jul 6 23:11:25.109805 ignition[929]: INFO : Stage: mount Jul 6 23:11:25.110892 ignition[929]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 6 23:11:25.112544 ignition[929]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jul 6 23:11:25.111986 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jul 6 23:11:25.113969 ignition[929]: INFO : mount: mount passed Jul 6 23:11:25.113969 ignition[929]: INFO : Ignition finished successfully Jul 6 23:11:25.115075 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jul 6 23:11:25.125710 systemd[1]: Starting ignition-files.service - Ignition (files)... Jul 6 23:11:25.240762 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jul 6 23:11:25.253839 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jul 6 23:11:25.265565 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by mount (941) Jul 6 23:11:25.266888 kernel: BTRFS info (device sda6): first mount of filesystem 297af9a7-3de6-47a6-b022-d94c20ff287b Jul 6 23:11:25.266933 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Jul 6 23:11:25.266956 kernel: BTRFS info (device sda6): using free space tree Jul 6 23:11:25.270569 kernel: BTRFS info (device sda6): enabling ssd optimizations Jul 6 23:11:25.270622 kernel: BTRFS info (device sda6): auto enabling async discard Jul 6 23:11:25.274139 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jul 6 23:11:25.296951 ignition[958]: INFO : Ignition 2.20.0 Jul 6 23:11:25.296951 ignition[958]: INFO : Stage: files Jul 6 23:11:25.298207 ignition[958]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 6 23:11:25.298207 ignition[958]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jul 6 23:11:25.300663 ignition[958]: DEBUG : files: compiled without relabeling support, skipping Jul 6 23:11:25.300663 ignition[958]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jul 6 23:11:25.300663 ignition[958]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jul 6 23:11:25.305989 ignition[958]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jul 6 23:11:25.305989 ignition[958]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jul 6 23:11:25.305989 ignition[958]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jul 6 23:11:25.304729 unknown[958]: wrote ssh authorized keys file for user: core Jul 6 23:11:25.310069 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz" Jul 6 23:11:25.310069 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz: attempt #1 Jul 6 23:11:25.427650 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jul 6 23:11:25.472105 systemd-networkd[776]: eth1: Gained IPv6LL Jul 6 23:11:25.623615 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz" Jul 6 23:11:25.623615 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Jul 6 23:11:25.626935 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET 
https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1 Jul 6 23:11:25.791715 systemd-networkd[776]: eth0: Gained IPv6LL Jul 6 23:11:26.209581 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Jul 6 23:11:26.283546 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Jul 6 23:11:26.285124 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Jul 6 23:11:26.285124 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Jul 6 23:11:26.285124 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Jul 6 23:11:26.285124 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Jul 6 23:11:26.285124 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 6 23:11:26.285124 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 6 23:11:26.285124 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 6 23:11:26.285124 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 6 23:11:26.285124 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Jul 6 23:11:26.285124 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jul 6 23:11:26.285124 ignition[958]: INFO : files: 
createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Jul 6 23:11:26.285124 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Jul 6 23:11:26.285124 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Jul 6 23:11:26.285124 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-arm64.raw: attempt #1 Jul 6 23:11:26.883212 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Jul 6 23:11:27.060021 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Jul 6 23:11:27.060021 ignition[958]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Jul 6 23:11:27.063736 ignition[958]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 6 23:11:27.063736 ignition[958]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 6 23:11:27.063736 ignition[958]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Jul 6 23:11:27.063736 ignition[958]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" Jul 6 23:11:27.068893 ignition[958]: INFO : files: op(e): op(f): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Jul 6 23:11:27.068893 ignition[958]: INFO : files: op(e): op(f): [finished] writing 
systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Jul 6 23:11:27.068893 ignition[958]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" Jul 6 23:11:27.068893 ignition[958]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service" Jul 6 23:11:27.068893 ignition[958]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service" Jul 6 23:11:27.068893 ignition[958]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json" Jul 6 23:11:27.068893 ignition[958]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json" Jul 6 23:11:27.068893 ignition[958]: INFO : files: files passed Jul 6 23:11:27.068893 ignition[958]: INFO : Ignition finished successfully Jul 6 23:11:27.070454 systemd[1]: Finished ignition-files.service - Ignition (files). Jul 6 23:11:27.077402 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jul 6 23:11:27.082412 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jul 6 23:11:27.085669 systemd[1]: ignition-quench.service: Deactivated successfully. Jul 6 23:11:27.085779 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jul 6 23:11:27.103097 initrd-setup-root-after-ignition[986]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 6 23:11:27.103097 initrd-setup-root-after-ignition[986]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jul 6 23:11:27.105464 initrd-setup-root-after-ignition[990]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 6 23:11:27.108089 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. 
Jul 6 23:11:27.109077 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jul 6 23:11:27.114824 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jul 6 23:11:27.154611 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jul 6 23:11:27.154751 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jul 6 23:11:27.156426 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jul 6 23:11:27.157433 systemd[1]: Reached target initrd.target - Initrd Default Target. Jul 6 23:11:27.158490 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jul 6 23:11:27.159972 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jul 6 23:11:27.191975 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jul 6 23:11:27.200056 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jul 6 23:11:27.212200 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jul 6 23:11:27.214058 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 6 23:11:27.215858 systemd[1]: Stopped target timers.target - Timer Units. Jul 6 23:11:27.216913 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jul 6 23:11:27.217116 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jul 6 23:11:27.219078 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jul 6 23:11:27.220730 systemd[1]: Stopped target basic.target - Basic System. Jul 6 23:11:27.222103 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jul 6 23:11:27.223203 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jul 6 23:11:27.224318 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. 
Jul 6 23:11:27.225310 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jul 6 23:11:27.226212 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jul 6 23:11:27.227408 systemd[1]: Stopped target sysinit.target - System Initialization. Jul 6 23:11:27.228438 systemd[1]: Stopped target local-fs.target - Local File Systems. Jul 6 23:11:27.229406 systemd[1]: Stopped target swap.target - Swaps. Jul 6 23:11:27.230208 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jul 6 23:11:27.230388 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jul 6 23:11:27.231548 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jul 6 23:11:27.232671 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 6 23:11:27.233646 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jul 6 23:11:27.233756 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 6 23:11:27.234741 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jul 6 23:11:27.234984 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jul 6 23:11:27.236295 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jul 6 23:11:27.236461 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jul 6 23:11:27.237401 systemd[1]: ignition-files.service: Deactivated successfully. Jul 6 23:11:27.237575 systemd[1]: Stopped ignition-files.service - Ignition (files). Jul 6 23:11:27.238276 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Jul 6 23:11:27.238420 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jul 6 23:11:27.243895 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jul 6 23:11:27.244438 systemd[1]: kmod-static-nodes.service: Deactivated successfully. 
Jul 6 23:11:27.244637 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jul 6 23:11:27.249700 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jul 6 23:11:27.250269 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jul 6 23:11:27.250409 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jul 6 23:11:27.253723 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jul 6 23:11:27.253855 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jul 6 23:11:27.263977 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jul 6 23:11:27.265829 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jul 6 23:11:27.267157 ignition[1010]: INFO : Ignition 2.20.0 Jul 6 23:11:27.267157 ignition[1010]: INFO : Stage: umount Jul 6 23:11:27.267157 ignition[1010]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 6 23:11:27.267157 ignition[1010]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jul 6 23:11:27.271083 ignition[1010]: INFO : umount: umount passed Jul 6 23:11:27.271083 ignition[1010]: INFO : Ignition finished successfully Jul 6 23:11:27.269055 systemd[1]: ignition-mount.service: Deactivated successfully. Jul 6 23:11:27.269186 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jul 6 23:11:27.271997 systemd[1]: ignition-disks.service: Deactivated successfully. Jul 6 23:11:27.272050 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jul 6 23:11:27.273724 systemd[1]: ignition-kargs.service: Deactivated successfully. Jul 6 23:11:27.273776 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jul 6 23:11:27.275486 systemd[1]: ignition-fetch.service: Deactivated successfully. Jul 6 23:11:27.275549 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jul 6 23:11:27.276939 systemd[1]: Stopped target network.target - Network. 
Jul 6 23:11:27.278717 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jul 6 23:11:27.278780 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jul 6 23:11:27.279776 systemd[1]: Stopped target paths.target - Path Units. Jul 6 23:11:27.281016 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jul 6 23:11:27.285541 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 6 23:11:27.286379 systemd[1]: Stopped target slices.target - Slice Units. Jul 6 23:11:27.287677 systemd[1]: Stopped target sockets.target - Socket Units. Jul 6 23:11:27.290881 systemd[1]: iscsid.socket: Deactivated successfully. Jul 6 23:11:27.290927 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jul 6 23:11:27.292175 systemd[1]: iscsiuio.socket: Deactivated successfully. Jul 6 23:11:27.292210 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jul 6 23:11:27.293121 systemd[1]: ignition-setup.service: Deactivated successfully. Jul 6 23:11:27.293212 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jul 6 23:11:27.294081 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jul 6 23:11:27.294120 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jul 6 23:11:27.296912 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jul 6 23:11:27.297825 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jul 6 23:11:27.306813 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jul 6 23:11:27.307688 systemd[1]: systemd-resolved.service: Deactivated successfully. Jul 6 23:11:27.307836 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jul 6 23:11:27.312476 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Jul 6 23:11:27.313113 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. 
Jul 6 23:11:27.313600 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 6 23:11:27.322367 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Jul 6 23:11:27.324835 systemd[1]: systemd-networkd.service: Deactivated successfully. Jul 6 23:11:27.325097 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jul 6 23:11:27.329773 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Jul 6 23:11:27.330490 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jul 6 23:11:27.330578 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jul 6 23:11:27.341739 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jul 6 23:11:27.342260 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jul 6 23:11:27.342333 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 6 23:11:27.344400 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 6 23:11:27.344468 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jul 6 23:11:27.347749 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jul 6 23:11:27.347810 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jul 6 23:11:27.349035 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 6 23:11:27.351598 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Jul 6 23:11:27.351997 systemd[1]: sysroot-boot.service: Deactivated successfully. Jul 6 23:11:27.353005 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jul 6 23:11:27.361868 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jul 6 23:11:27.362115 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. 
Jul 6 23:11:27.366047 systemd[1]: network-cleanup.service: Deactivated successfully. Jul 6 23:11:27.366762 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jul 6 23:11:27.369483 systemd[1]: systemd-udevd.service: Deactivated successfully. Jul 6 23:11:27.369676 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 6 23:11:27.371626 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jul 6 23:11:27.371669 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jul 6 23:11:27.374619 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jul 6 23:11:27.374690 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jul 6 23:11:27.376676 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jul 6 23:11:27.376781 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jul 6 23:11:27.379037 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jul 6 23:11:27.379089 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jul 6 23:11:27.380719 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jul 6 23:11:27.380771 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 6 23:11:27.390686 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jul 6 23:11:27.391295 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jul 6 23:11:27.391454 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 6 23:11:27.395462 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 6 23:11:27.395531 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 6 23:11:27.399097 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jul 6 23:11:27.399251 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. 
Jul 6 23:11:27.400736 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jul 6 23:11:27.405783 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jul 6 23:11:27.418166 systemd[1]: Switching root. Jul 6 23:11:27.456642 systemd-journald[236]: Journal stopped Jul 6 23:11:28.513877 systemd-journald[236]: Received SIGTERM from PID 1 (systemd). Jul 6 23:11:28.513941 kernel: SELinux: policy capability network_peer_controls=1 Jul 6 23:11:28.513957 kernel: SELinux: policy capability open_perms=1 Jul 6 23:11:28.513967 kernel: SELinux: policy capability extended_socket_class=1 Jul 6 23:11:28.513976 kernel: SELinux: policy capability always_check_network=0 Jul 6 23:11:28.513988 kernel: SELinux: policy capability cgroup_seclabel=1 Jul 6 23:11:28.514001 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jul 6 23:11:28.514014 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jul 6 23:11:28.514024 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jul 6 23:11:28.514033 kernel: audit: type=1403 audit(1751843487.582:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jul 6 23:11:28.514047 systemd[1]: Successfully loaded SELinux policy in 42.027ms. Jul 6 23:11:28.514063 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 12.006ms. Jul 6 23:11:28.514074 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jul 6 23:11:28.514085 systemd[1]: Detected virtualization kvm. Jul 6 23:11:28.514095 systemd[1]: Detected architecture arm64. Jul 6 23:11:28.514106 systemd[1]: Detected first boot. Jul 6 23:11:28.514117 systemd[1]: Hostname set to . Jul 6 23:11:28.514170 systemd[1]: Initializing machine ID from VM UUID. 
Jul 6 23:11:28.514185 zram_generator::config[1055]: No configuration found.
Jul 6 23:11:28.514199 kernel: NET: Registered PF_VSOCK protocol family
Jul 6 23:11:28.514209 systemd[1]: Populated /etc with preset unit settings.
Jul 6 23:11:28.514220 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Jul 6 23:11:28.514231 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jul 6 23:11:28.514241 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jul 6 23:11:28.514255 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jul 6 23:11:28.514265 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jul 6 23:11:28.514275 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jul 6 23:11:28.514285 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jul 6 23:11:28.514296 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jul 6 23:11:28.514307 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jul 6 23:11:28.514318 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jul 6 23:11:28.514330 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jul 6 23:11:28.514342 systemd[1]: Created slice user.slice - User and Session Slice.
Jul 6 23:11:28.514355 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 6 23:11:28.514366 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 6 23:11:28.514379 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jul 6 23:11:28.514389 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jul 6 23:11:28.514400 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jul 6 23:11:28.514411 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 6 23:11:28.514421 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Jul 6 23:11:28.514433 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 6 23:11:28.514443 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jul 6 23:11:28.514453 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jul 6 23:11:28.514463 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jul 6 23:11:28.514474 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jul 6 23:11:28.514484 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 6 23:11:28.514495 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 6 23:11:28.514506 systemd[1]: Reached target slices.target - Slice Units.
Jul 6 23:11:28.516245 systemd[1]: Reached target swap.target - Swaps.
Jul 6 23:11:28.516265 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jul 6 23:11:28.516276 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jul 6 23:11:28.516287 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Jul 6 23:11:28.516297 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 6 23:11:28.516307 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 6 23:11:28.516318 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 6 23:11:28.516330 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jul 6 23:11:28.516344 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jul 6 23:11:28.516356 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jul 6 23:11:28.516366 systemd[1]: Mounting media.mount - External Media Directory...
Jul 6 23:11:28.516376 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jul 6 23:11:28.516386 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jul 6 23:11:28.516397 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jul 6 23:11:28.516410 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jul 6 23:11:28.516423 systemd[1]: Reached target machines.target - Containers.
Jul 6 23:11:28.516435 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jul 6 23:11:28.516446 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 6 23:11:28.516457 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 6 23:11:28.516467 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jul 6 23:11:28.516477 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 6 23:11:28.516488 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jul 6 23:11:28.516498 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 6 23:11:28.516530 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jul 6 23:11:28.516541 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 6 23:11:28.516552 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jul 6 23:11:28.516562 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jul 6 23:11:28.516572 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jul 6 23:11:28.516582 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jul 6 23:11:28.516592 systemd[1]: Stopped systemd-fsck-usr.service.
Jul 6 23:11:28.516603 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jul 6 23:11:28.516615 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 6 23:11:28.516625 kernel: fuse: init (API version 7.39)
Jul 6 23:11:28.516636 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 6 23:11:28.516647 kernel: ACPI: bus type drm_connector registered
Jul 6 23:11:28.516656 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jul 6 23:11:28.516668 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jul 6 23:11:28.516678 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Jul 6 23:11:28.516689 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 6 23:11:28.516700 systemd[1]: verity-setup.service: Deactivated successfully.
Jul 6 23:11:28.516710 systemd[1]: Stopped verity-setup.service.
Jul 6 23:11:28.516720 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jul 6 23:11:28.516730 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jul 6 23:11:28.516740 systemd[1]: Mounted media.mount - External Media Directory.
Jul 6 23:11:28.516752 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jul 6 23:11:28.516764 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jul 6 23:11:28.516774 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jul 6 23:11:28.516785 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 6 23:11:28.516797 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jul 6 23:11:28.516808 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jul 6 23:11:28.516819 kernel: loop: module loaded
Jul 6 23:11:28.516829 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 6 23:11:28.516839 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 6 23:11:28.516849 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jul 6 23:11:28.516859 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jul 6 23:11:28.516869 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jul 6 23:11:28.516880 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 6 23:11:28.516890 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 6 23:11:28.516922 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jul 6 23:11:28.516938 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jul 6 23:11:28.516949 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 6 23:11:28.516960 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 6 23:11:28.516972 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 6 23:11:28.516983 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jul 6 23:11:28.516994 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jul 6 23:11:28.517005 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jul 6 23:11:28.517057 systemd-journald[1126]: Collecting audit messages is disabled.
Jul 6 23:11:28.517089 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jul 6 23:11:28.517100 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jul 6 23:11:28.517111 systemd-journald[1126]: Journal started
Jul 6 23:11:28.517180 systemd-journald[1126]: Runtime Journal (/run/log/journal/025bf8325ef44b99bb80bc4e0f3c7eaa) is 8M, max 76.6M, 68.6M free.
Jul 6 23:11:28.191700 systemd[1]: Queued start job for default target multi-user.target.
Jul 6 23:11:28.204085 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Jul 6 23:11:28.204799 systemd[1]: systemd-journald.service: Deactivated successfully.
Jul 6 23:11:28.525648 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jul 6 23:11:28.525749 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 6 23:11:28.525772 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Jul 6 23:11:28.533546 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jul 6 23:11:28.536748 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jul 6 23:11:28.538528 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 6 23:11:28.549561 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jul 6 23:11:28.549647 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 6 23:11:28.558953 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jul 6 23:11:28.560548 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jul 6 23:11:28.570551 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 6 23:11:28.577535 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jul 6 23:11:28.585772 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jul 6 23:11:28.595330 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 6 23:11:28.594548 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Jul 6 23:11:28.596980 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 6 23:11:28.599249 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jul 6 23:11:28.601813 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jul 6 23:11:28.604892 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jul 6 23:11:28.606910 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jul 6 23:11:28.617950 kernel: loop0: detected capacity change from 0 to 113512
Jul 6 23:11:28.639281 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 6 23:11:28.651186 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jul 6 23:11:28.664543 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jul 6 23:11:28.662751 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jul 6 23:11:28.669107 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Jul 6 23:11:28.675967 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Jul 6 23:11:28.690912 systemd-journald[1126]: Time spent on flushing to /var/log/journal/025bf8325ef44b99bb80bc4e0f3c7eaa is 71.207ms for 1147 entries.
Jul 6 23:11:28.690912 systemd-journald[1126]: System Journal (/var/log/journal/025bf8325ef44b99bb80bc4e0f3c7eaa) is 8M, max 584.8M, 576.8M free.
Jul 6 23:11:28.787531 systemd-journald[1126]: Received client request to flush runtime journal.
Jul 6 23:11:28.787612 kernel: loop1: detected capacity change from 0 to 207008
Jul 6 23:11:28.712577 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jul 6 23:11:28.790815 kernel: loop2: detected capacity change from 0 to 123192
Jul 6 23:11:28.726871 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 6 23:11:28.746836 udevadm[1189]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Jul 6 23:11:28.756805 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Jul 6 23:11:28.792888 systemd-tmpfiles[1191]: ACLs are not supported, ignoring.
Jul 6 23:11:28.792899 systemd-tmpfiles[1191]: ACLs are not supported, ignoring.
Jul 6 23:11:28.794282 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jul 6 23:11:28.808547 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 6 23:11:28.835560 kernel: loop3: detected capacity change from 0 to 8
Jul 6 23:11:28.858876 kernel: loop4: detected capacity change from 0 to 113512
Jul 6 23:11:28.875696 kernel: loop5: detected capacity change from 0 to 207008
Jul 6 23:11:28.910566 kernel: loop6: detected capacity change from 0 to 123192
Jul 6 23:11:28.933297 kernel: loop7: detected capacity change from 0 to 8
Jul 6 23:11:28.934017 (sd-merge)[1201]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-hetzner'.
Jul 6 23:11:28.934944 (sd-merge)[1201]: Merged extensions into '/usr'.
Jul 6 23:11:28.942547 systemd[1]: Reload requested from client PID 1156 ('systemd-sysext') (unit systemd-sysext.service)...
Jul 6 23:11:28.943267 systemd[1]: Reloading...
Jul 6 23:11:29.033532 zram_generator::config[1230]: No configuration found.
Jul 6 23:11:29.149701 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 6 23:11:29.166374 ldconfig[1152]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jul 6 23:11:29.213384 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jul 6 23:11:29.214048 systemd[1]: Reloading finished in 268 ms.
Jul 6 23:11:29.255538 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jul 6 23:11:29.256601 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jul 6 23:11:29.275780 systemd[1]: Starting ensure-sysext.service...
Jul 6 23:11:29.286555 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jul 6 23:11:29.304747 systemd[1]: Reload requested from client PID 1267 ('systemctl') (unit ensure-sysext.service)...
Jul 6 23:11:29.304762 systemd[1]: Reloading...
Jul 6 23:11:29.335473 systemd-tmpfiles[1268]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jul 6 23:11:29.336399 systemd-tmpfiles[1268]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jul 6 23:11:29.337155 systemd-tmpfiles[1268]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jul 6 23:11:29.337387 systemd-tmpfiles[1268]: ACLs are not supported, ignoring.
Jul 6 23:11:29.337434 systemd-tmpfiles[1268]: ACLs are not supported, ignoring.
Jul 6 23:11:29.350978 systemd-tmpfiles[1268]: Detected autofs mount point /boot during canonicalization of boot.
Jul 6 23:11:29.350989 systemd-tmpfiles[1268]: Skipping /boot
Jul 6 23:11:29.372205 systemd-tmpfiles[1268]: Detected autofs mount point /boot during canonicalization of boot.
Jul 6 23:11:29.372222 systemd-tmpfiles[1268]: Skipping /boot
Jul 6 23:11:29.402547 zram_generator::config[1297]: No configuration found.
Jul 6 23:11:29.522871 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 6 23:11:29.587663 systemd[1]: Reloading finished in 282 ms.
Jul 6 23:11:29.599605 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jul 6 23:11:29.613593 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 6 23:11:29.635314 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jul 6 23:11:29.641180 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jul 6 23:11:29.642397 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 6 23:11:29.648827 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 6 23:11:29.653955 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 6 23:11:29.661816 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 6 23:11:29.662699 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 6 23:11:29.662830 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jul 6 23:11:29.673798 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jul 6 23:11:29.687669 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 6 23:11:29.691094 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 6 23:11:29.695604 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jul 6 23:11:29.699776 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 6 23:11:29.701638 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 6 23:11:29.703096 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 6 23:11:29.703333 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 6 23:11:29.706091 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 6 23:11:29.706302 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 6 23:11:29.713947 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 6 23:11:29.714227 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jul 6 23:11:29.719690 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jul 6 23:11:29.723955 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 6 23:11:29.730847 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 6 23:11:29.738326 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 6 23:11:29.744797 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 6 23:11:29.745993 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 6 23:11:29.746168 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jul 6 23:11:29.749608 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jul 6 23:11:29.750858 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 6 23:11:29.751047 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 6 23:11:29.766204 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 6 23:11:29.776182 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 6 23:11:29.787830 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jul 6 23:11:29.789335 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 6 23:11:29.789564 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jul 6 23:11:29.794943 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jul 6 23:11:29.798339 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jul 6 23:11:29.805430 augenrules[1375]: No rules
Jul 6 23:11:29.807564 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 6 23:11:29.807818 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 6 23:11:29.810713 systemd[1]: Finished ensure-sysext.service.
Jul 6 23:11:29.812614 systemd[1]: audit-rules.service: Deactivated successfully.
Jul 6 23:11:29.812817 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jul 6 23:11:29.815180 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 6 23:11:29.815365 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 6 23:11:29.830546 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jul 6 23:11:29.834629 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jul 6 23:11:29.835830 systemd-udevd[1358]: Using default interface naming scheme 'v255'.
Jul 6 23:11:29.836024 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 6 23:11:29.839619 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 6 23:11:29.841111 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jul 6 23:11:29.842611 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jul 6 23:11:29.854348 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 6 23:11:29.854403 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jul 6 23:11:29.862752 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Jul 6 23:11:29.865109 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jul 6 23:11:29.868195 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jul 6 23:11:29.885686 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 6 23:11:29.894792 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jul 6 23:11:29.951653 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Jul 6 23:11:29.952492 systemd[1]: Reached target time-set.target - System Time Set.
Jul 6 23:11:29.967189 systemd-resolved[1353]: Positive Trust Anchors:
Jul 6 23:11:29.967215 systemd-resolved[1353]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 6 23:11:29.967247 systemd-resolved[1353]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jul 6 23:11:29.975817 systemd-resolved[1353]: Using system hostname 'ci-4230-2-1-3-0a35d13a56'.
Jul 6 23:11:29.979455 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jul 6 23:11:29.980251 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jul 6 23:11:30.010881 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped.
Jul 6 23:11:30.038041 systemd-networkd[1397]: lo: Link UP
Jul 6 23:11:30.038050 systemd-networkd[1397]: lo: Gained carrier
Jul 6 23:11:30.039620 systemd-networkd[1397]: Enumeration completed
Jul 6 23:11:30.039749 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jul 6 23:11:30.040499 systemd[1]: Reached target network.target - Network.
Jul 6 23:11:30.051879 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Jul 6 23:11:30.054721 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jul 6 23:11:30.081965 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Jul 6 23:11:30.104037 systemd-networkd[1397]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 6 23:11:30.104049 systemd-networkd[1397]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 6 23:11:30.105389 systemd-networkd[1397]: eth0: Link UP
Jul 6 23:11:30.105401 systemd-networkd[1397]: eth0: Gained carrier
Jul 6 23:11:30.105420 systemd-networkd[1397]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 6 23:11:30.106526 kernel: mousedev: PS/2 mouse device common for all mice
Jul 6 23:11:30.133992 systemd-networkd[1397]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 6 23:11:30.134008 systemd-networkd[1397]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 6 23:11:30.135477 systemd-networkd[1397]: eth1: Link UP
Jul 6 23:11:30.135492 systemd-networkd[1397]: eth1: Gained carrier
Jul 6 23:11:30.135525 systemd-networkd[1397]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 6 23:11:30.157545 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 35 scanned by (udev-worker) (1399)
Jul 6 23:11:30.157741 systemd-networkd[1397]: eth0: DHCPv4 address 49.13.31.190/32, gateway 172.31.1.1 acquired from 172.31.1.1
Jul 6 23:11:30.162592 systemd-networkd[1397]: eth1: DHCPv4 address 10.0.0.3/32, gateway 10.0.0.1 acquired from 10.0.0.1
Jul 6 23:11:30.163405 systemd-timesyncd[1393]: Network configuration changed, trying to establish connection.
Jul 6 23:11:30.235924 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
Jul 6 23:11:30.246293 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jul 6 23:11:30.272751 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jul 6 23:11:30.286391 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 6 23:11:30.293908 systemd[1]: Condition check resulted in dev-virtio\x2dports-org.qemu.guest_agent.0.device - /dev/virtio-ports/org.qemu.guest_agent.0 being skipped.
Jul 6 23:11:30.295782 kernel: [drm] pci: virtio-gpu-pci detected at 0000:00:01.0
Jul 6 23:11:30.295875 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Jul 6 23:11:30.295894 kernel: [drm] features: -context_init
Jul 6 23:11:30.296659 kernel: [drm] number of scanouts: 1
Jul 6 23:11:30.297555 kernel: [drm] number of cap sets: 0
Jul 6 23:11:30.299406 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 6 23:11:30.299554 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:01.0 on minor 0
Jul 6 23:11:30.305682 kernel: Console: switching to colour frame buffer device 160x50
Jul 6 23:11:30.307689 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 6 23:11:30.309539 kernel: virtio-pci 0000:00:01.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Jul 6 23:11:30.317780 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 6 23:11:30.320685 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 6 23:11:30.321752 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 6 23:11:30.321804 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jul 6 23:11:30.321829 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jul 6 23:11:30.329395 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 6 23:11:30.329611 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 6 23:11:30.333869 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 6 23:11:30.334716 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 6 23:11:30.335730 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 6 23:11:30.335897 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 6 23:11:30.342375 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 6 23:11:30.342561 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jul 6 23:11:30.345261 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 6 23:11:30.345591 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 6 23:11:30.348142 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Jul 6 23:11:30.354063 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 6 23:11:30.419274 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 6 23:11:30.439918 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Jul 6 23:11:30.447855 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Jul 6 23:11:30.463612 lvm[1465]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jul 6 23:11:30.493704 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Jul 6 23:11:30.495542 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 6 23:11:30.496155 systemd[1]: Reached target sysinit.target - System Initialization.
Jul 6 23:11:30.496898 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jul 6 23:11:30.497684 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jul 6 23:11:30.498577 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jul 6 23:11:30.499336 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jul 6 23:11:30.500089 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jul 6 23:11:30.500838 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jul 6 23:11:30.500873 systemd[1]: Reached target paths.target - Path Units.
Jul 6 23:11:30.501375 systemd[1]: Reached target timers.target - Timer Units.
Jul 6 23:11:30.503400 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jul 6 23:11:30.505703 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jul 6 23:11:30.509374 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Jul 6 23:11:30.510409 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Jul 6 23:11:30.511106 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Jul 6 23:11:30.514643 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jul 6 23:11:30.515831 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Jul 6 23:11:30.518300 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jul 6 23:11:30.519691 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jul 6 23:11:30.520458 systemd[1]: Reached target sockets.target - Socket Units. Jul 6 23:11:30.521000 systemd[1]: Reached target basic.target - Basic System. Jul 6 23:11:30.521609 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jul 6 23:11:30.521636 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jul 6 23:11:30.527697 systemd[1]: Starting containerd.service - containerd container runtime... Jul 6 23:11:30.533005 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jul 6 23:11:30.535095 lvm[1469]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jul 6 23:11:30.538575 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jul 6 23:11:30.548773 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jul 6 23:11:30.553911 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jul 6 23:11:30.554805 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jul 6 23:11:30.558772 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jul 6 23:11:30.562055 jq[1473]: false Jul 6 23:11:30.564804 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jul 6 23:11:30.568833 systemd[1]: Started qemu-guest-agent.service - QEMU Guest Agent. Jul 6 23:11:30.576768 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jul 6 23:11:30.579899 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... 
Jul 6 23:11:30.588851 systemd[1]: Starting systemd-logind.service - User Login Management...
Jul 6 23:11:30.590778 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jul 6 23:11:30.591423 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jul 6 23:11:30.594348 systemd[1]: Starting update-engine.service - Update Engine...
Jul 6 23:11:30.599785 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Jul 6 23:11:30.603249 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Jul 6 23:11:30.604765 dbus-daemon[1472]: [system] SELinux support is enabled
Jul 6 23:11:30.610096 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Jul 6 23:11:30.614211 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jul 6 23:11:30.615013 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Jul 6 23:11:30.623655 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jul 6 23:11:30.623697 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Jul 6 23:11:30.626872 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Jul 6 23:11:30.626905 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Jul 6 23:11:30.644149 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jul 6 23:11:30.645802 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Jul 6 23:11:30.647695 coreos-metadata[1471]: Jul 06 23:11:30.643 INFO Fetching http://169.254.169.254/hetzner/v1/metadata: Attempt #1
Jul 6 23:11:30.657894 coreos-metadata[1471]: Jul 06 23:11:30.657 INFO Fetch successful
Jul 6 23:11:30.658367 coreos-metadata[1471]: Jul 06 23:11:30.658 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/private-networks: Attempt #1
Jul 6 23:11:30.665797 jq[1486]: true
Jul 6 23:11:30.666073 coreos-metadata[1471]: Jul 06 23:11:30.662 INFO Fetch successful
Jul 6 23:11:30.677258 update_engine[1485]: I20250706 23:11:30.676673 1485 main.cc:92] Flatcar Update Engine starting
Jul 6 23:11:30.683019 update_engine[1485]: I20250706 23:11:30.680842 1485 update_check_scheduler.cc:74] Next update check in 6m51s
Jul 6 23:11:30.684014 systemd[1]: motdgen.service: Deactivated successfully.
Jul 6 23:11:30.685599 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Jul 6 23:11:30.686645 systemd[1]: Started update-engine.service - Update Engine.
Jul 6 23:11:30.692503 (ntainerd)[1504]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Jul 6 23:11:30.694799 tar[1489]: linux-arm64/LICENSE
Jul 6 23:11:30.694799 tar[1489]: linux-arm64/helm
Jul 6 23:11:30.703028 jq[1507]: true
Jul 6 23:11:30.703987 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Jul 6 23:11:30.714310 extend-filesystems[1476]: Found loop4
Jul 6 23:11:30.714310 extend-filesystems[1476]: Found loop5
Jul 6 23:11:30.714310 extend-filesystems[1476]: Found loop6
Jul 6 23:11:30.714310 extend-filesystems[1476]: Found loop7
Jul 6 23:11:30.714310 extend-filesystems[1476]: Found sda
Jul 6 23:11:30.714310 extend-filesystems[1476]: Found sda1
Jul 6 23:11:30.714310 extend-filesystems[1476]: Found sda2
Jul 6 23:11:30.714310 extend-filesystems[1476]: Found sda3
Jul 6 23:11:30.714310 extend-filesystems[1476]: Found usr
Jul 6 23:11:30.714310 extend-filesystems[1476]: Found sda4
Jul 6 23:11:30.714310 extend-filesystems[1476]: Found sda6
Jul 6 23:11:30.714310 extend-filesystems[1476]: Found sda7
Jul 6 23:11:30.714310 extend-filesystems[1476]: Found sda9
Jul 6 23:11:30.714310 extend-filesystems[1476]: Checking size of /dev/sda9
Jul 6 23:11:30.774109 extend-filesystems[1476]: Resized partition /dev/sda9
Jul 6 23:11:30.786648 extend-filesystems[1529]: resize2fs 1.47.1 (20-May-2024)
Jul 6 23:11:30.801164 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 9393147 blocks
Jul 6 23:11:30.856557 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Jul 6 23:11:30.858081 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Jul 6 23:11:30.871885 systemd-logind[1483]: New seat seat0.
Jul 6 23:11:30.891101 systemd-logind[1483]: Watching system buttons on /dev/input/event0 (Power Button)
Jul 6 23:11:30.891150 systemd-logind[1483]: Watching system buttons on /dev/input/event2 (QEMU QEMU USB Keyboard)
Jul 6 23:11:30.892653 systemd[1]: Started systemd-logind.service - User Login Management.
Jul 6 23:11:30.915189 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 35 scanned by (udev-worker) (1413)
Jul 6 23:11:30.915322 bash[1542]: Updated "/home/core/.ssh/authorized_keys"
Jul 6 23:11:30.919250 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Jul 6 23:11:30.930895 systemd[1]: Starting sshkeys.service...
Jul 6 23:11:30.985362 containerd[1504]: time="2025-07-06T23:11:30.984718240Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23
Jul 6 23:11:30.989941 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Jul 6 23:11:30.993847 kernel: EXT4-fs (sda9): resized filesystem to 9393147
Jul 6 23:11:30.998157 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Jul 6 23:11:31.025320 coreos-metadata[1554]: Jul 06 23:11:31.021 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/public-keys: Attempt #1
Jul 6 23:11:31.025320 coreos-metadata[1554]: Jul 06 23:11:31.022 INFO Fetch successful
Jul 6 23:11:31.024685 systemd[1]: extend-filesystems.service: Deactivated successfully.
Jul 6 23:11:31.025839 extend-filesystems[1529]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required
Jul 6 23:11:31.025839 extend-filesystems[1529]: old_desc_blocks = 1, new_desc_blocks = 5
Jul 6 23:11:31.025839 extend-filesystems[1529]: The filesystem on /dev/sda9 is now 9393147 (4k) blocks long.
Jul 6 23:11:31.024910 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Jul 6 23:11:31.040717 extend-filesystems[1476]: Resized filesystem in /dev/sda9
Jul 6 23:11:31.040717 extend-filesystems[1476]: Found sr0
Jul 6 23:11:31.034963 unknown[1554]: wrote ssh authorized keys file for user: core
Jul 6 23:11:31.079019 locksmithd[1511]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Jul 6 23:11:31.084080 update-ssh-keys[1560]: Updated "/home/core/.ssh/authorized_keys"
Jul 6 23:11:31.085390 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Jul 6 23:11:31.093657 systemd[1]: Finished sshkeys.service.
Jul 6 23:11:31.121032 containerd[1504]: time="2025-07-06T23:11:31.120911920Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Jul 6 23:11:31.127311 containerd[1504]: time="2025-07-06T23:11:31.127193280Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.95-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Jul 6 23:11:31.127451 containerd[1504]: time="2025-07-06T23:11:31.127434640Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Jul 6 23:11:31.127565 containerd[1504]: time="2025-07-06T23:11:31.127506080Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Jul 6 23:11:31.128735 containerd[1504]: time="2025-07-06T23:11:31.127827240Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Jul 6 23:11:31.128735 containerd[1504]: time="2025-07-06T23:11:31.128680200Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Jul 6 23:11:31.128970 containerd[1504]: time="2025-07-06T23:11:31.128946080Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Jul 6 23:11:31.129035 containerd[1504]: time="2025-07-06T23:11:31.129022880Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Jul 6 23:11:31.130618 containerd[1504]: time="2025-07-06T23:11:31.130584040Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jul 6 23:11:31.130745 containerd[1504]: time="2025-07-06T23:11:31.130730000Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Jul 6 23:11:31.130842 containerd[1504]: time="2025-07-06T23:11:31.130826200Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Jul 6 23:11:31.130907 containerd[1504]: time="2025-07-06T23:11:31.130894320Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Jul 6 23:11:31.131783 containerd[1504]: time="2025-07-06T23:11:31.131589800Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Jul 6 23:11:31.132024 containerd[1504]: time="2025-07-06T23:11:31.132000200Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Jul 6 23:11:31.134207 containerd[1504]: time="2025-07-06T23:11:31.133789560Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jul 6 23:11:31.134207 containerd[1504]: time="2025-07-06T23:11:31.133816440Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Jul 6 23:11:31.134207 containerd[1504]: time="2025-07-06T23:11:31.134091440Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Jul 6 23:11:31.134539 containerd[1504]: time="2025-07-06T23:11:31.134505560Z" level=info msg="metadata content store policy set" policy=shared
Jul 6 23:11:31.143975 containerd[1504]: time="2025-07-06T23:11:31.143770480Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Jul 6 23:11:31.143975 containerd[1504]: time="2025-07-06T23:11:31.143848280Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Jul 6 23:11:31.143975 containerd[1504]: time="2025-07-06T23:11:31.143864840Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Jul 6 23:11:31.143975 containerd[1504]: time="2025-07-06T23:11:31.143881920Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Jul 6 23:11:31.143975 containerd[1504]: time="2025-07-06T23:11:31.143898760Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Jul 6 23:11:31.144745 containerd[1504]: time="2025-07-06T23:11:31.144497920Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Jul 6 23:11:31.144745 containerd[1504]: time="2025-07-06T23:11:31.144850600Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Jul 6 23:11:31.144745 containerd[1504]: time="2025-07-06T23:11:31.145011120Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Jul 6 23:11:31.144745 containerd[1504]: time="2025-07-06T23:11:31.145028600Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Jul 6 23:11:31.144745 containerd[1504]: time="2025-07-06T23:11:31.145045600Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Jul 6 23:11:31.144745 containerd[1504]: time="2025-07-06T23:11:31.145061680Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Jul 6 23:11:31.144745 containerd[1504]: time="2025-07-06T23:11:31.145078120Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Jul 6 23:11:31.144745 containerd[1504]: time="2025-07-06T23:11:31.145092000Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Jul 6 23:11:31.144745 containerd[1504]: time="2025-07-06T23:11:31.145107560Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Jul 6 23:11:31.144745 containerd[1504]: time="2025-07-06T23:11:31.145204320Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Jul 6 23:11:31.144745 containerd[1504]: time="2025-07-06T23:11:31.145219920Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Jul 6 23:11:31.144745 containerd[1504]: time="2025-07-06T23:11:31.145234400Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Jul 6 23:11:31.144745 containerd[1504]: time="2025-07-06T23:11:31.145246800Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Jul 6 23:11:31.144745 containerd[1504]: time="2025-07-06T23:11:31.145269080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Jul 6 23:11:31.146839 containerd[1504]: time="2025-07-06T23:11:31.145283360Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Jul 6 23:11:31.146839 containerd[1504]: time="2025-07-06T23:11:31.145295360Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Jul 6 23:11:31.146839 containerd[1504]: time="2025-07-06T23:11:31.145308160Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Jul 6 23:11:31.146839 containerd[1504]: time="2025-07-06T23:11:31.145321280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Jul 6 23:11:31.146839 containerd[1504]: time="2025-07-06T23:11:31.145336880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Jul 6 23:11:31.146839 containerd[1504]: time="2025-07-06T23:11:31.145348160Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Jul 6 23:11:31.146839 containerd[1504]: time="2025-07-06T23:11:31.145362200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Jul 6 23:11:31.146839 containerd[1504]: time="2025-07-06T23:11:31.145375760Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Jul 6 23:11:31.146839 containerd[1504]: time="2025-07-06T23:11:31.145392080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Jul 6 23:11:31.146839 containerd[1504]: time="2025-07-06T23:11:31.145404240Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Jul 6 23:11:31.146839 containerd[1504]: time="2025-07-06T23:11:31.145415880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Jul 6 23:11:31.146839 containerd[1504]: time="2025-07-06T23:11:31.145535640Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Jul 6 23:11:31.146839 containerd[1504]: time="2025-07-06T23:11:31.145555640Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Jul 6 23:11:31.146839 containerd[1504]: time="2025-07-06T23:11:31.145577880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Jul 6 23:11:31.146839 containerd[1504]: time="2025-07-06T23:11:31.145596000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Jul 6 23:11:31.147073 containerd[1504]: time="2025-07-06T23:11:31.145608760Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Jul 6 23:11:31.149677 containerd[1504]: time="2025-07-06T23:11:31.149561960Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Jul 6 23:11:31.149677 containerd[1504]: time="2025-07-06T23:11:31.149619080Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Jul 6 23:11:31.149677 containerd[1504]: time="2025-07-06T23:11:31.149631520Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Jul 6 23:11:31.149677 containerd[1504]: time="2025-07-06T23:11:31.149647400Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Jul 6 23:11:31.149677 containerd[1504]: time="2025-07-06T23:11:31.149657200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Jul 6 23:11:31.149677 containerd[1504]: time="2025-07-06T23:11:31.149680160Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Jul 6 23:11:31.149849 containerd[1504]: time="2025-07-06T23:11:31.149691680Z" level=info msg="NRI interface is disabled by configuration."
Jul 6 23:11:31.149849 containerd[1504]: time="2025-07-06T23:11:31.149709040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Jul 6 23:11:31.150139 containerd[1504]: time="2025-07-06T23:11:31.150056240Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Jul 6 23:11:31.150256 containerd[1504]: time="2025-07-06T23:11:31.150155360Z" level=info msg="Connect containerd service"
Jul 6 23:11:31.154929 containerd[1504]: time="2025-07-06T23:11:31.154876400Z" level=info msg="using legacy CRI server"
Jul 6 23:11:31.154929 containerd[1504]: time="2025-07-06T23:11:31.154922560Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Jul 6 23:11:31.155246 containerd[1504]: time="2025-07-06T23:11:31.155225080Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Jul 6 23:11:31.158887 containerd[1504]: time="2025-07-06T23:11:31.158837600Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jul 6 23:11:31.161537 containerd[1504]: time="2025-07-06T23:11:31.159042080Z" level=info msg="Start subscribing containerd event"
Jul 6 23:11:31.161537 containerd[1504]: time="2025-07-06T23:11:31.159101320Z" level=info msg="Start recovering state"
Jul 6 23:11:31.161537 containerd[1504]: time="2025-07-06T23:11:31.159191520Z" level=info msg="Start event monitor"
Jul 6 23:11:31.161537 containerd[1504]: time="2025-07-06T23:11:31.159204600Z" level=info msg="Start snapshots syncer"
Jul 6 23:11:31.161537 containerd[1504]: time="2025-07-06T23:11:31.159216080Z" level=info msg="Start cni network conf syncer for default"
Jul 6 23:11:31.161537 containerd[1504]: time="2025-07-06T23:11:31.159224200Z" level=info msg="Start streaming server"
Jul 6 23:11:31.161537 containerd[1504]: time="2025-07-06T23:11:31.159404400Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Jul 6 23:11:31.161537 containerd[1504]: time="2025-07-06T23:11:31.159443560Z" level=info msg=serving... address=/run/containerd/containerd.sock
Jul 6 23:11:31.161537 containerd[1504]: time="2025-07-06T23:11:31.159496120Z" level=info msg="containerd successfully booted in 0.181876s"
Jul 6 23:11:31.160657 systemd[1]: Started containerd.service - containerd container runtime.
Jul 6 23:11:31.468773 tar[1489]: linux-arm64/README.md
Jul 6 23:11:31.482634 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Jul 6 23:11:31.807874 systemd-networkd[1397]: eth1: Gained IPv6LL
Jul 6 23:11:31.808685 systemd-timesyncd[1393]: Network configuration changed, trying to establish connection.
Jul 6 23:11:31.818743 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Jul 6 23:11:31.820727 systemd[1]: Reached target network-online.target - Network is Online.
Jul 6 23:11:31.830726 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 6 23:11:31.835642 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Jul 6 23:11:31.879824 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Jul 6 23:11:32.027199 sshd_keygen[1498]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Jul 6 23:11:32.050421 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Jul 6 23:11:32.057929 systemd[1]: Starting issuegen.service - Generate /run/issue...
Jul 6 23:11:32.066879 systemd[1]: issuegen.service: Deactivated successfully.
Jul 6 23:11:32.067153 systemd[1]: Finished issuegen.service - Generate /run/issue.
Jul 6 23:11:32.078086 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Jul 6 23:11:32.096093 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Jul 6 23:11:32.114093 systemd[1]: Started getty@tty1.service - Getty on tty1.
Jul 6 23:11:32.118862 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0.
Jul 6 23:11:32.121792 systemd[1]: Reached target getty.target - Login Prompts.
Jul 6 23:11:32.128767 systemd-networkd[1397]: eth0: Gained IPv6LL
Jul 6 23:11:32.129629 systemd-timesyncd[1393]: Network configuration changed, trying to establish connection.
Jul 6 23:11:32.721971 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 6 23:11:32.724363 systemd[1]: Reached target multi-user.target - Multi-User System.
Jul 6 23:11:32.729989 (kubelet)[1601]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jul 6 23:11:32.730312 systemd[1]: Startup finished in 824ms (kernel) + 5.883s (initrd) + 5.190s (userspace) = 11.898s.
Jul 6 23:11:33.292198 kubelet[1601]: E0706 23:11:33.292119 1601 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 6 23:11:33.297019 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 6 23:11:33.297327 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 6 23:11:33.298033 systemd[1]: kubelet.service: Consumed 950ms CPU time, 257.7M memory peak.
Jul 6 23:11:33.962800 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Jul 6 23:11:33.967971 systemd[1]: Started sshd@0-49.13.31.190:22-212.41.6.98:42102.service - OpenSSH per-connection server daemon (212.41.6.98:42102).
Jul 6 23:11:34.259578 sshd[1614]: Connection closed by authenticating user root 212.41.6.98 port 42102 [preauth]
Jul 6 23:11:34.264745 systemd[1]: sshd@0-49.13.31.190:22-212.41.6.98:42102.service: Deactivated successfully.
Jul 6 23:11:43.548176 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Jul 6 23:11:43.555834 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 6 23:11:43.682781 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 6 23:11:43.682913 (kubelet)[1626]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jul 6 23:11:43.730898 kubelet[1626]: E0706 23:11:43.730835 1626 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 6 23:11:43.734454 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 6 23:11:43.734654 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 6 23:11:43.735288 systemd[1]: kubelet.service: Consumed 156ms CPU time, 109.1M memory peak.
Jul 6 23:11:53.985945 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Jul 6 23:11:53.993052 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 6 23:11:54.112804 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 6 23:11:54.129170 (kubelet)[1640]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jul 6 23:11:54.184099 kubelet[1640]: E0706 23:11:54.184022 1640 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 6 23:11:54.188047 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 6 23:11:54.188443 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 6 23:11:54.190635 systemd[1]: kubelet.service: Consumed 166ms CPU time, 108.4M memory peak.
Jul 6 23:12:02.226082 systemd-timesyncd[1393]: Contacted time server 109.123.244.54:123 (2.flatcar.pool.ntp.org).
Jul 6 23:12:02.226155 systemd-timesyncd[1393]: Initial clock synchronization to Sun 2025-07-06 23:12:02.486297 UTC.
Jul 6 23:12:04.200939 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Jul 6 23:12:04.213966 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 6 23:12:04.352930 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 6 23:12:04.353075 (kubelet)[1655]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jul 6 23:12:04.405494 kubelet[1655]: E0706 23:12:04.405409 1655 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 6 23:12:04.408301 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 6 23:12:04.408682 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 6 23:12:04.409244 systemd[1]: kubelet.service: Consumed 154ms CPU time, 105.6M memory peak.
Jul 6 23:12:14.451389 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
Jul 6 23:12:14.459133 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 6 23:12:14.522716 systemd[1]: Started sshd@1-49.13.31.190:22-139.178.89.65:38082.service - OpenSSH per-connection server daemon (139.178.89.65:38082).
Jul 6 23:12:14.587770 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 6 23:12:14.589239 (kubelet)[1674]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jul 6 23:12:14.642413 kubelet[1674]: E0706 23:12:14.642329 1674 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 6 23:12:14.645349 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 6 23:12:14.645708 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 6 23:12:14.646506 systemd[1]: kubelet.service: Consumed 159ms CPU time, 104.7M memory peak.
Jul 6 23:12:15.637059 sshd[1667]: Accepted publickey for core from 139.178.89.65 port 38082 ssh2: RSA SHA256:3q3uKGA7TZmlUpdAn9FattmwR+Ld0dURBV17/HvM010
Jul 6 23:12:15.639868 sshd-session[1667]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:12:15.649955 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Jul 6 23:12:15.656013 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Jul 6 23:12:15.667210 systemd-logind[1483]: New session 1 of user core.
Jul 6 23:12:15.672239 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Jul 6 23:12:15.678934 systemd[1]: Starting user@500.service - User Manager for UID 500...
Jul 6 23:12:15.690463 (systemd)[1683]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Jul 6 23:12:15.694721 systemd-logind[1483]: New session c1 of user core.
Jul 6 23:12:15.831903 systemd[1683]: Queued start job for default target default.target.
Jul 6 23:12:15.837551 update_engine[1485]: I20250706 23:12:15.832413 1485 update_attempter.cc:509] Updating boot flags...
Jul 6 23:12:15.838070 systemd[1683]: Created slice app.slice - User Application Slice. Jul 6 23:12:15.838121 systemd[1683]: Reached target paths.target - Paths. Jul 6 23:12:15.838212 systemd[1683]: Reached target timers.target - Timers. Jul 6 23:12:15.840790 systemd[1683]: Starting dbus.socket - D-Bus User Message Bus Socket... Jul 6 23:12:15.865089 systemd[1683]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jul 6 23:12:15.865320 systemd[1683]: Reached target sockets.target - Sockets. Jul 6 23:12:15.865366 systemd[1683]: Reached target basic.target - Basic System. Jul 6 23:12:15.865395 systemd[1683]: Reached target default.target - Main User Target. Jul 6 23:12:15.865421 systemd[1683]: Startup finished in 162ms. Jul 6 23:12:15.865959 systemd[1]: Started user@500.service - User Manager for UID 500. Jul 6 23:12:15.881992 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 35 scanned by (udev-worker) (1700) Jul 6 23:12:15.879206 systemd[1]: Started session-1.scope - Session 1 of User core. Jul 6 23:12:15.956564 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 35 scanned by (udev-worker) (1695) Jul 6 23:12:16.654182 systemd[1]: Started sshd@2-49.13.31.190:22-139.178.89.65:38096.service - OpenSSH per-connection server daemon (139.178.89.65:38096). Jul 6 23:12:17.725037 sshd[1712]: Accepted publickey for core from 139.178.89.65 port 38096 ssh2: RSA SHA256:3q3uKGA7TZmlUpdAn9FattmwR+Ld0dURBV17/HvM010 Jul 6 23:12:17.727259 sshd-session[1712]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:12:17.734511 systemd-logind[1483]: New session 2 of user core. Jul 6 23:12:17.739862 systemd[1]: Started session-2.scope - Session 2 of User core. 
Jul 6 23:12:18.462563 sshd[1714]: Connection closed by 139.178.89.65 port 38096 Jul 6 23:12:18.463929 sshd-session[1712]: pam_unix(sshd:session): session closed for user core Jul 6 23:12:18.469791 systemd[1]: sshd@2-49.13.31.190:22-139.178.89.65:38096.service: Deactivated successfully. Jul 6 23:12:18.474146 systemd[1]: session-2.scope: Deactivated successfully. Jul 6 23:12:18.475608 systemd-logind[1483]: Session 2 logged out. Waiting for processes to exit. Jul 6 23:12:18.478418 systemd-logind[1483]: Removed session 2. Jul 6 23:12:18.662058 systemd[1]: Started sshd@3-49.13.31.190:22-139.178.89.65:38112.service - OpenSSH per-connection server daemon (139.178.89.65:38112). Jul 6 23:12:19.751567 sshd[1720]: Accepted publickey for core from 139.178.89.65 port 38112 ssh2: RSA SHA256:3q3uKGA7TZmlUpdAn9FattmwR+Ld0dURBV17/HvM010 Jul 6 23:12:19.753583 sshd-session[1720]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:12:19.759772 systemd-logind[1483]: New session 3 of user core. Jul 6 23:12:19.766918 systemd[1]: Started session-3.scope - Session 3 of User core. Jul 6 23:12:20.496677 sshd[1722]: Connection closed by 139.178.89.65 port 38112 Jul 6 23:12:20.497366 sshd-session[1720]: pam_unix(sshd:session): session closed for user core Jul 6 23:12:20.500962 systemd-logind[1483]: Session 3 logged out. Waiting for processes to exit. Jul 6 23:12:20.501116 systemd[1]: sshd@3-49.13.31.190:22-139.178.89.65:38112.service: Deactivated successfully. Jul 6 23:12:20.503022 systemd[1]: session-3.scope: Deactivated successfully. Jul 6 23:12:20.505319 systemd-logind[1483]: Removed session 3. Jul 6 23:12:20.690223 systemd[1]: Started sshd@4-49.13.31.190:22-139.178.89.65:48780.service - OpenSSH per-connection server daemon (139.178.89.65:48780). 
Jul 6 23:12:21.775270 sshd[1728]: Accepted publickey for core from 139.178.89.65 port 48780 ssh2: RSA SHA256:3q3uKGA7TZmlUpdAn9FattmwR+Ld0dURBV17/HvM010 Jul 6 23:12:21.778602 sshd-session[1728]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:12:21.785784 systemd-logind[1483]: New session 4 of user core. Jul 6 23:12:21.795278 systemd[1]: Started session-4.scope - Session 4 of User core. Jul 6 23:12:22.526667 sshd[1730]: Connection closed by 139.178.89.65 port 48780 Jul 6 23:12:22.527731 sshd-session[1728]: pam_unix(sshd:session): session closed for user core Jul 6 23:12:22.533577 systemd[1]: sshd@4-49.13.31.190:22-139.178.89.65:48780.service: Deactivated successfully. Jul 6 23:12:22.533635 systemd-logind[1483]: Session 4 logged out. Waiting for processes to exit. Jul 6 23:12:22.535473 systemd[1]: session-4.scope: Deactivated successfully. Jul 6 23:12:22.537110 systemd-logind[1483]: Removed session 4. Jul 6 23:12:22.716927 systemd[1]: Started sshd@5-49.13.31.190:22-139.178.89.65:48796.service - OpenSSH per-connection server daemon (139.178.89.65:48796). Jul 6 23:12:23.802127 sshd[1736]: Accepted publickey for core from 139.178.89.65 port 48796 ssh2: RSA SHA256:3q3uKGA7TZmlUpdAn9FattmwR+Ld0dURBV17/HvM010 Jul 6 23:12:23.804504 sshd-session[1736]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:12:23.809961 systemd-logind[1483]: New session 5 of user core. Jul 6 23:12:23.819860 systemd[1]: Started session-5.scope - Session 5 of User core. 
Jul 6 23:12:24.381077 sudo[1739]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jul 6 23:12:24.381374 sudo[1739]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 6 23:12:24.396560 sudo[1739]: pam_unix(sudo:session): session closed for user root Jul 6 23:12:24.572474 sshd[1738]: Connection closed by 139.178.89.65 port 48796 Jul 6 23:12:24.573689 sshd-session[1736]: pam_unix(sshd:session): session closed for user core Jul 6 23:12:24.578875 systemd-logind[1483]: Session 5 logged out. Waiting for processes to exit. Jul 6 23:12:24.579556 systemd[1]: sshd@5-49.13.31.190:22-139.178.89.65:48796.service: Deactivated successfully. Jul 6 23:12:24.583270 systemd[1]: session-5.scope: Deactivated successfully. Jul 6 23:12:24.585344 systemd-logind[1483]: Removed session 5. Jul 6 23:12:24.700998 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Jul 6 23:12:24.715088 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 6 23:12:24.767865 systemd[1]: Started sshd@6-49.13.31.190:22-139.178.89.65:48798.service - OpenSSH per-connection server daemon (139.178.89.65:48798). Jul 6 23:12:24.844503 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jul 6 23:12:24.858135 (kubelet)[1755]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 6 23:12:24.905039 kubelet[1755]: E0706 23:12:24.904968 1755 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 6 23:12:24.908168 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 6 23:12:24.908349 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 6 23:12:24.908912 systemd[1]: kubelet.service: Consumed 153ms CPU time, 108.9M memory peak. Jul 6 23:12:25.860709 sshd[1748]: Accepted publickey for core from 139.178.89.65 port 48798 ssh2: RSA SHA256:3q3uKGA7TZmlUpdAn9FattmwR+Ld0dURBV17/HvM010 Jul 6 23:12:25.863428 sshd-session[1748]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:12:25.869680 systemd-logind[1483]: New session 6 of user core. Jul 6 23:12:25.881828 systemd[1]: Started session-6.scope - Session 6 of User core. Jul 6 23:12:26.434494 sudo[1764]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jul 6 23:12:26.434954 sudo[1764]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 6 23:12:26.439359 sudo[1764]: pam_unix(sudo:session): session closed for user root Jul 6 23:12:26.445467 sudo[1763]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jul 6 23:12:26.445775 sudo[1763]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 6 23:12:26.461140 systemd[1]: Starting audit-rules.service - Load Audit Rules... 
Jul 6 23:12:26.492717 augenrules[1786]: No rules Jul 6 23:12:26.494680 systemd[1]: audit-rules.service: Deactivated successfully. Jul 6 23:12:26.495111 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jul 6 23:12:26.497357 sudo[1763]: pam_unix(sudo:session): session closed for user root Jul 6 23:12:26.673603 sshd[1762]: Connection closed by 139.178.89.65 port 48798 Jul 6 23:12:26.674398 sshd-session[1748]: pam_unix(sshd:session): session closed for user core Jul 6 23:12:26.678466 systemd-logind[1483]: Session 6 logged out. Waiting for processes to exit. Jul 6 23:12:26.680330 systemd[1]: sshd@6-49.13.31.190:22-139.178.89.65:48798.service: Deactivated successfully. Jul 6 23:12:26.682626 systemd[1]: session-6.scope: Deactivated successfully. Jul 6 23:12:26.684020 systemd-logind[1483]: Removed session 6. Jul 6 23:12:26.871051 systemd[1]: Started sshd@7-49.13.31.190:22-139.178.89.65:48802.service - OpenSSH per-connection server daemon (139.178.89.65:48802). Jul 6 23:12:27.967088 sshd[1795]: Accepted publickey for core from 139.178.89.65 port 48802 ssh2: RSA SHA256:3q3uKGA7TZmlUpdAn9FattmwR+Ld0dURBV17/HvM010 Jul 6 23:12:27.968900 sshd-session[1795]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:12:27.976848 systemd-logind[1483]: New session 7 of user core. Jul 6 23:12:27.983876 systemd[1]: Started session-7.scope - Session 7 of User core. Jul 6 23:12:28.544993 sudo[1798]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jul 6 23:12:28.545280 sudo[1798]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 6 23:12:28.910930 systemd[1]: Starting docker.service - Docker Application Container Engine... 
Jul 6 23:12:28.912650 (dockerd)[1815]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jul 6 23:12:29.176284 dockerd[1815]: time="2025-07-06T23:12:29.175896038Z" level=info msg="Starting up" Jul 6 23:12:29.264646 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport1859949669-merged.mount: Deactivated successfully. Jul 6 23:12:29.277681 systemd[1]: var-lib-docker-metacopy\x2dcheck3907743264-merged.mount: Deactivated successfully. Jul 6 23:12:29.289225 dockerd[1815]: time="2025-07-06T23:12:29.289177509Z" level=info msg="Loading containers: start." Jul 6 23:12:29.507553 kernel: Initializing XFRM netlink socket Jul 6 23:12:29.625635 systemd-networkd[1397]: docker0: Link UP Jul 6 23:12:29.656111 dockerd[1815]: time="2025-07-06T23:12:29.655955848Z" level=info msg="Loading containers: done." Jul 6 23:12:29.678952 dockerd[1815]: time="2025-07-06T23:12:29.678845970Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jul 6 23:12:29.679838 dockerd[1815]: time="2025-07-06T23:12:29.679695148Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1 Jul 6 23:12:29.680373 dockerd[1815]: time="2025-07-06T23:12:29.680323383Z" level=info msg="Daemon has completed initialization" Jul 6 23:12:29.728563 dockerd[1815]: time="2025-07-06T23:12:29.728067932Z" level=info msg="API listen on /run/docker.sock" Jul 6 23:12:29.729369 systemd[1]: Started docker.service - Docker Application Container Engine. Jul 6 23:12:30.259337 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2428839375-merged.mount: Deactivated successfully. 
Jul 6 23:12:30.885951 containerd[1504]: time="2025-07-06T23:12:30.885539213Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.6\"" Jul 6 23:12:31.564055 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3185445359.mount: Deactivated successfully. Jul 6 23:12:33.291754 containerd[1504]: time="2025-07-06T23:12:33.291646567Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:12:33.293919 containerd[1504]: time="2025-07-06T23:12:33.293819534Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.6: active requests=0, bytes read=26328286" Jul 6 23:12:33.295409 containerd[1504]: time="2025-07-06T23:12:33.295354883Z" level=info msg="ImageCreate event name:\"sha256:4ee56e04a4dd8fbc5a022e324327ae1f9b19bdaab8a79644d85d29b70d28e87a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:12:33.301464 containerd[1504]: time="2025-07-06T23:12:33.301019679Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:0f5764551d7de4ef70489ff8a70f32df7dea00701f5545af089b60bc5ede4f6f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:12:33.302751 containerd[1504]: time="2025-07-06T23:12:33.302709080Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.6\" with image id \"sha256:4ee56e04a4dd8fbc5a022e324327ae1f9b19bdaab8a79644d85d29b70d28e87a\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.6\", repo digest \"registry.k8s.io/kube-apiserver@sha256:0f5764551d7de4ef70489ff8a70f32df7dea00701f5545af089b60bc5ede4f6f\", size \"26324994\" in 2.417116263s" Jul 6 23:12:33.302855 containerd[1504]: time="2025-07-06T23:12:33.302755787Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.6\" returns image reference \"sha256:4ee56e04a4dd8fbc5a022e324327ae1f9b19bdaab8a79644d85d29b70d28e87a\"" Jul 6 23:12:33.303802 containerd[1504]: time="2025-07-06T23:12:33.303654039Z" 
level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.6\"" Jul 6 23:12:34.950470 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6. Jul 6 23:12:34.959326 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 6 23:12:35.100870 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 6 23:12:35.105777 (kubelet)[2070]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 6 23:12:35.195614 kubelet[2070]: E0706 23:12:35.195186 2070 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 6 23:12:35.199556 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 6 23:12:35.199728 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 6 23:12:35.201647 systemd[1]: kubelet.service: Consumed 164ms CPU time, 107.2M memory peak. 
Jul 6 23:12:35.324184 containerd[1504]: time="2025-07-06T23:12:35.323994477Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:12:35.325745 containerd[1504]: time="2025-07-06T23:12:35.325663242Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.6: active requests=0, bytes read=22529248" Jul 6 23:12:35.326856 containerd[1504]: time="2025-07-06T23:12:35.326782756Z" level=info msg="ImageCreate event name:\"sha256:3451c4b5bd601398c65e0579f1b720df4e0edde78f7f38e142f2b0be5e9bd038\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:12:35.332541 containerd[1504]: time="2025-07-06T23:12:35.332017788Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:3425f29c94a77d74cb89f38413e6274277dcf5e2bc7ab6ae953578a91e9e8356\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:12:35.334094 containerd[1504]: time="2025-07-06T23:12:35.333930620Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.6\" with image id \"sha256:3451c4b5bd601398c65e0579f1b720df4e0edde78f7f38e142f2b0be5e9bd038\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.6\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:3425f29c94a77d74cb89f38413e6274277dcf5e2bc7ab6ae953578a91e9e8356\", size \"24065018\" in 2.030229473s" Jul 6 23:12:35.334094 containerd[1504]: time="2025-07-06T23:12:35.333987655Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.6\" returns image reference \"sha256:3451c4b5bd601398c65e0579f1b720df4e0edde78f7f38e142f2b0be5e9bd038\"" Jul 6 23:12:35.335097 containerd[1504]: time="2025-07-06T23:12:35.334808349Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.6\"" Jul 6 23:12:36.869678 containerd[1504]: time="2025-07-06T23:12:36.869618406Z" level=info msg="ImageCreate event 
name:\"registry.k8s.io/kube-scheduler:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:12:36.870298 containerd[1504]: time="2025-07-06T23:12:36.870234636Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.6: active requests=0, bytes read=17484161" Jul 6 23:12:36.872231 containerd[1504]: time="2025-07-06T23:12:36.872156850Z" level=info msg="ImageCreate event name:\"sha256:3d72026a3748f31411df93e4aaa9c67944b7e0cc311c11eba2aae5e615213d5f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:12:36.876941 containerd[1504]: time="2025-07-06T23:12:36.876875293Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:130f633cbd1d70e2f4655350153cb3fc469f4d5a6310b4f0b49d93fb2ba2132b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:12:36.878712 containerd[1504]: time="2025-07-06T23:12:36.878651383Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.6\" with image id \"sha256:3d72026a3748f31411df93e4aaa9c67944b7e0cc311c11eba2aae5e615213d5f\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.6\", repo digest \"registry.k8s.io/kube-scheduler@sha256:130f633cbd1d70e2f4655350153cb3fc469f4d5a6310b4f0b49d93fb2ba2132b\", size \"19019949\" in 1.543804651s" Jul 6 23:12:36.878712 containerd[1504]: time="2025-07-06T23:12:36.878707775Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.6\" returns image reference \"sha256:3d72026a3748f31411df93e4aaa9c67944b7e0cc311c11eba2aae5e615213d5f\"" Jul 6 23:12:36.879595 containerd[1504]: time="2025-07-06T23:12:36.879146545Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.6\"" Jul 6 23:12:38.174305 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1550808294.mount: Deactivated successfully. 
Jul 6 23:12:38.486710 containerd[1504]: time="2025-07-06T23:12:38.486641209Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:12:38.488799 containerd[1504]: time="2025-07-06T23:12:38.488749000Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.6: active requests=0, bytes read=27378432" Jul 6 23:12:38.490668 containerd[1504]: time="2025-07-06T23:12:38.490633957Z" level=info msg="ImageCreate event name:\"sha256:e29293ef7b817bb7b03ce7484edafe6ca0a7087e54074e7d7dcd3bd3c762eee9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:12:38.493499 containerd[1504]: time="2025-07-06T23:12:38.493453109Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:b13d9da413b983d130bf090b83fce12e1ccc704e95f366da743c18e964d9d7e9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:12:38.495000 containerd[1504]: time="2025-07-06T23:12:38.494796712Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.6\" with image id \"sha256:e29293ef7b817bb7b03ce7484edafe6ca0a7087e54074e7d7dcd3bd3c762eee9\", repo tag \"registry.k8s.io/kube-proxy:v1.32.6\", repo digest \"registry.k8s.io/kube-proxy@sha256:b13d9da413b983d130bf090b83fce12e1ccc704e95f366da743c18e964d9d7e9\", size \"27377425\" in 1.615614667s" Jul 6 23:12:38.495000 containerd[1504]: time="2025-07-06T23:12:38.494855302Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.6\" returns image reference \"sha256:e29293ef7b817bb7b03ce7484edafe6ca0a7087e54074e7d7dcd3bd3c762eee9\"" Jul 6 23:12:38.495886 containerd[1504]: time="2025-07-06T23:12:38.495618089Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jul 6 23:12:39.182811 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2805150227.mount: Deactivated successfully. 
Jul 6 23:12:40.149543 containerd[1504]: time="2025-07-06T23:12:40.149349216Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:12:40.151419 containerd[1504]: time="2025-07-06T23:12:40.151343403Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951714" Jul 6 23:12:40.152393 containerd[1504]: time="2025-07-06T23:12:40.152323088Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:12:40.157853 containerd[1504]: time="2025-07-06T23:12:40.157779569Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:12:40.159394 containerd[1504]: time="2025-07-06T23:12:40.159216862Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.663266285s" Jul 6 23:12:40.159394 containerd[1504]: time="2025-07-06T23:12:40.159257601Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" Jul 6 23:12:40.160297 containerd[1504]: time="2025-07-06T23:12:40.160099344Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jul 6 23:12:40.691781 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount851046334.mount: Deactivated successfully. 
Jul 6 23:12:40.700201 containerd[1504]: time="2025-07-06T23:12:40.700094868Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:12:40.701784 containerd[1504]: time="2025-07-06T23:12:40.701691233Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268723" Jul 6 23:12:40.703566 containerd[1504]: time="2025-07-06T23:12:40.702709776Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:12:40.705948 containerd[1504]: time="2025-07-06T23:12:40.705900707Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:12:40.707054 containerd[1504]: time="2025-07-06T23:12:40.707022817Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 546.877533ms" Jul 6 23:12:40.707198 containerd[1504]: time="2025-07-06T23:12:40.707178568Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Jul 6 23:12:40.707988 containerd[1504]: time="2025-07-06T23:12:40.707944076Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Jul 6 23:12:41.419985 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3385150296.mount: Deactivated successfully. 
Jul 6 23:12:43.346023 containerd[1504]: time="2025-07-06T23:12:43.345927641Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:12:43.348950 containerd[1504]: time="2025-07-06T23:12:43.348877380Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=67812537" Jul 6 23:12:43.350408 containerd[1504]: time="2025-07-06T23:12:43.350341746Z" level=info msg="ImageCreate event name:\"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:12:43.355551 containerd[1504]: time="2025-07-06T23:12:43.354003681Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:12:43.356480 containerd[1504]: time="2025-07-06T23:12:43.356247868Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"67941650\" in 2.648261974s" Jul 6 23:12:43.356480 containerd[1504]: time="2025-07-06T23:12:43.356297167Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\"" Jul 6 23:12:45.201168 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7. Jul 6 23:12:45.209817 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 6 23:12:45.346429 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jul 6 23:12:45.357131 (kubelet)[2226]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 6 23:12:45.403563 kubelet[2226]: E0706 23:12:45.403269 2226 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 6 23:12:45.406280 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 6 23:12:45.406433 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 6 23:12:45.406813 systemd[1]: kubelet.service: Consumed 158ms CPU time, 104.8M memory peak. Jul 6 23:12:48.186328 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 6 23:12:48.186992 systemd[1]: kubelet.service: Consumed 158ms CPU time, 104.8M memory peak. Jul 6 23:12:48.199371 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 6 23:12:48.240068 systemd[1]: Reload requested from client PID 2241 ('systemctl') (unit session-7.scope)... Jul 6 23:12:48.240223 systemd[1]: Reloading... Jul 6 23:12:48.371596 zram_generator::config[2286]: No configuration found. Jul 6 23:12:48.489801 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 6 23:12:48.584649 systemd[1]: Reloading finished in 344 ms. Jul 6 23:12:48.631359 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 6 23:12:48.637694 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jul 6 23:12:48.638077 systemd[1]: kubelet.service: Deactivated successfully. 
Jul 6 23:12:48.639589 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 6 23:12:48.639679 systemd[1]: kubelet.service: Consumed 102ms CPU time, 94.9M memory peak. Jul 6 23:12:48.648453 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 6 23:12:48.769700 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 6 23:12:48.789190 (kubelet)[2336]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 6 23:12:48.838670 kubelet[2336]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 6 23:12:48.838670 kubelet[2336]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jul 6 23:12:48.838670 kubelet[2336]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jul 6 23:12:48.839052 kubelet[2336]: I0706 23:12:48.838755 2336 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 6 23:12:49.155467 kubelet[2336]: I0706 23:12:49.155315 2336 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jul 6 23:12:49.157651 kubelet[2336]: I0706 23:12:49.155922 2336 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 6 23:12:49.157651 kubelet[2336]: I0706 23:12:49.156892 2336 server.go:954] "Client rotation is on, will bootstrap in background" Jul 6 23:12:49.190240 kubelet[2336]: E0706 23:12:49.190198 2336 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://49.13.31.190:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 49.13.31.190:6443: connect: connection refused" logger="UnhandledError" Jul 6 23:12:49.193442 kubelet[2336]: I0706 23:12:49.193398 2336 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 6 23:12:49.201547 kubelet[2336]: E0706 23:12:49.201465 2336 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jul 6 23:12:49.201674 kubelet[2336]: I0706 23:12:49.201556 2336 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jul 6 23:12:49.205034 kubelet[2336]: I0706 23:12:49.204644 2336 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 6 23:12:49.205843 kubelet[2336]: I0706 23:12:49.205745 2336 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 6 23:12:49.206248 kubelet[2336]: I0706 23:12:49.205956 2336 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4230-2-1-3-0a35d13a56","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 6 23:12:49.206789 kubelet[2336]: I0706 23:12:49.206453 2336 topology_manager.go:138] "Creating topology manager with 
none policy" Jul 6 23:12:49.206789 kubelet[2336]: I0706 23:12:49.206473 2336 container_manager_linux.go:304] "Creating device plugin manager" Jul 6 23:12:49.206789 kubelet[2336]: I0706 23:12:49.206713 2336 state_mem.go:36] "Initialized new in-memory state store" Jul 6 23:12:49.211684 kubelet[2336]: I0706 23:12:49.211650 2336 kubelet.go:446] "Attempting to sync node with API server" Jul 6 23:12:49.211854 kubelet[2336]: I0706 23:12:49.211837 2336 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 6 23:12:49.211949 kubelet[2336]: I0706 23:12:49.211935 2336 kubelet.go:352] "Adding apiserver pod source" Jul 6 23:12:49.212034 kubelet[2336]: I0706 23:12:49.212020 2336 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 6 23:12:49.217016 kubelet[2336]: W0706 23:12:49.216749 2336 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://49.13.31.190:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230-2-1-3-0a35d13a56&limit=500&resourceVersion=0": dial tcp 49.13.31.190:6443: connect: connection refused Jul 6 23:12:49.217016 kubelet[2336]: E0706 23:12:49.216868 2336 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://49.13.31.190:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230-2-1-3-0a35d13a56&limit=500&resourceVersion=0\": dial tcp 49.13.31.190:6443: connect: connection refused" logger="UnhandledError" Jul 6 23:12:49.217144 kubelet[2336]: I0706 23:12:49.217024 2336 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Jul 6 23:12:49.218561 kubelet[2336]: I0706 23:12:49.218435 2336 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 6 23:12:49.218634 kubelet[2336]: W0706 23:12:49.218592 2336 probe.go:272] Flexvolume plugin directory at 
/opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jul 6 23:12:49.221538 kubelet[2336]: I0706 23:12:49.220677 2336 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jul 6 23:12:49.221538 kubelet[2336]: I0706 23:12:49.220718 2336 server.go:1287] "Started kubelet" Jul 6 23:12:49.227662 kubelet[2336]: I0706 23:12:49.227623 2336 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 6 23:12:49.233176 kubelet[2336]: I0706 23:12:49.233133 2336 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 6 23:12:49.233618 kubelet[2336]: E0706 23:12:49.233359 2336 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://49.13.31.190:6443/api/v1/namespaces/default/events\": dial tcp 49.13.31.190:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4230-2-1-3-0a35d13a56.184fcc71cd73bfcd default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4230-2-1-3-0a35d13a56,UID:ci-4230-2-1-3-0a35d13a56,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4230-2-1-3-0a35d13a56,},FirstTimestamp:2025-07-06 23:12:49.220698061 +0000 UTC m=+0.427355220,LastTimestamp:2025-07-06 23:12:49.220698061 +0000 UTC m=+0.427355220,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4230-2-1-3-0a35d13a56,}" Jul 6 23:12:49.233882 kubelet[2336]: W0706 23:12:49.233840 2336 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://49.13.31.190:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 49.13.31.190:6443: connect: connection refused Jul 6 23:12:49.233982 kubelet[2336]: E0706 23:12:49.233962 2336 
reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://49.13.31.190:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 49.13.31.190:6443: connect: connection refused" logger="UnhandledError" Jul 6 23:12:49.235090 kubelet[2336]: I0706 23:12:49.235058 2336 volume_manager.go:297] "Starting Kubelet Volume Manager" Jul 6 23:12:49.235388 kubelet[2336]: E0706 23:12:49.235358 2336 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4230-2-1-3-0a35d13a56\" not found" Jul 6 23:12:49.235784 kubelet[2336]: I0706 23:12:49.235734 2336 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jul 6 23:12:49.236757 kubelet[2336]: I0706 23:12:49.236735 2336 server.go:479] "Adding debug handlers to kubelet server" Jul 6 23:12:49.237069 kubelet[2336]: I0706 23:12:49.237034 2336 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jul 6 23:12:49.237112 kubelet[2336]: I0706 23:12:49.237097 2336 reconciler.go:26] "Reconciler: start to sync state" Jul 6 23:12:49.238988 kubelet[2336]: W0706 23:12:49.238923 2336 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://49.13.31.190:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 49.13.31.190:6443: connect: connection refused Jul 6 23:12:49.239070 kubelet[2336]: E0706 23:12:49.238989 2336 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://49.13.31.190:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 49.13.31.190:6443: connect: connection refused" logger="UnhandledError" Jul 6 23:12:49.239097 kubelet[2336]: E0706 23:12:49.239069 2336 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://49.13.31.190:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230-2-1-3-0a35d13a56?timeout=10s\": dial tcp 49.13.31.190:6443: connect: connection refused" interval="200ms" Jul 6 23:12:49.239977 kubelet[2336]: I0706 23:12:49.239220 2336 factory.go:221] Registration of the systemd container factory successfully Jul 6 23:12:49.239977 kubelet[2336]: I0706 23:12:49.239336 2336 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 6 23:12:49.239977 kubelet[2336]: I0706 23:12:49.239456 2336 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 6 23:12:49.239977 kubelet[2336]: I0706 23:12:49.239724 2336 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 6 23:12:49.241196 kubelet[2336]: I0706 23:12:49.241171 2336 factory.go:221] Registration of the containerd container factory successfully Jul 6 23:12:49.256668 kubelet[2336]: I0706 23:12:49.256620 2336 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 6 23:12:49.258076 kubelet[2336]: I0706 23:12:49.258052 2336 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jul 6 23:12:49.258191 kubelet[2336]: I0706 23:12:49.258180 2336 status_manager.go:227] "Starting to sync pod status with apiserver" Jul 6 23:12:49.258269 kubelet[2336]: I0706 23:12:49.258258 2336 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jul 6 23:12:49.258637 kubelet[2336]: I0706 23:12:49.258328 2336 kubelet.go:2382] "Starting kubelet main sync loop" Jul 6 23:12:49.258637 kubelet[2336]: E0706 23:12:49.258383 2336 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 6 23:12:49.268752 kubelet[2336]: E0706 23:12:49.268717 2336 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 6 23:12:49.269664 kubelet[2336]: W0706 23:12:49.269329 2336 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://49.13.31.190:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 49.13.31.190:6443: connect: connection refused Jul 6 23:12:49.269664 kubelet[2336]: E0706 23:12:49.269380 2336 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://49.13.31.190:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 49.13.31.190:6443: connect: connection refused" logger="UnhandledError" Jul 6 23:12:49.275908 kubelet[2336]: I0706 23:12:49.275887 2336 cpu_manager.go:221] "Starting CPU manager" policy="none" Jul 6 23:12:49.276350 kubelet[2336]: I0706 23:12:49.276000 2336 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jul 6 23:12:49.276350 kubelet[2336]: I0706 23:12:49.276038 2336 state_mem.go:36] "Initialized new in-memory state store" Jul 6 23:12:49.279058 kubelet[2336]: I0706 23:12:49.278771 2336 policy_none.go:49] "None policy: Start" Jul 6 23:12:49.279058 kubelet[2336]: I0706 23:12:49.278807 2336 memory_manager.go:186] "Starting memorymanager" policy="None" Jul 6 23:12:49.279058 kubelet[2336]: I0706 23:12:49.278819 2336 state_mem.go:35] "Initializing new in-memory state store" Jul 6 23:12:49.286699 
systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jul 6 23:12:49.307602 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jul 6 23:12:49.311783 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jul 6 23:12:49.324624 kubelet[2336]: I0706 23:12:49.323666 2336 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 6 23:12:49.324624 kubelet[2336]: I0706 23:12:49.324066 2336 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 6 23:12:49.324624 kubelet[2336]: I0706 23:12:49.324091 2336 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 6 23:12:49.324624 kubelet[2336]: I0706 23:12:49.324548 2336 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 6 23:12:49.326309 kubelet[2336]: E0706 23:12:49.326288 2336 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jul 6 23:12:49.326456 kubelet[2336]: E0706 23:12:49.326444 2336 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4230-2-1-3-0a35d13a56\" not found" Jul 6 23:12:49.373869 systemd[1]: Created slice kubepods-burstable-pod92f9feb38e2b75a82349814e7923f075.slice - libcontainer container kubepods-burstable-pod92f9feb38e2b75a82349814e7923f075.slice. 
Jul 6 23:12:49.392741 kubelet[2336]: E0706 23:12:49.392675 2336 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230-2-1-3-0a35d13a56\" not found" node="ci-4230-2-1-3-0a35d13a56" Jul 6 23:12:49.399411 systemd[1]: Created slice kubepods-burstable-pod87a414a1848cebd50fd2e293d48f0f5f.slice - libcontainer container kubepods-burstable-pod87a414a1848cebd50fd2e293d48f0f5f.slice. Jul 6 23:12:49.410295 kubelet[2336]: E0706 23:12:49.409360 2336 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230-2-1-3-0a35d13a56\" not found" node="ci-4230-2-1-3-0a35d13a56" Jul 6 23:12:49.416178 systemd[1]: Created slice kubepods-burstable-pod399a78aab121c00ba879ae339058e519.slice - libcontainer container kubepods-burstable-pod399a78aab121c00ba879ae339058e519.slice. Jul 6 23:12:49.418426 kubelet[2336]: E0706 23:12:49.418231 2336 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230-2-1-3-0a35d13a56\" not found" node="ci-4230-2-1-3-0a35d13a56" Jul 6 23:12:49.428229 kubelet[2336]: I0706 23:12:49.427299 2336 kubelet_node_status.go:75] "Attempting to register node" node="ci-4230-2-1-3-0a35d13a56" Jul 6 23:12:49.428459 kubelet[2336]: E0706 23:12:49.428346 2336 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://49.13.31.190:6443/api/v1/nodes\": dial tcp 49.13.31.190:6443: connect: connection refused" node="ci-4230-2-1-3-0a35d13a56" Jul 6 23:12:49.438283 kubelet[2336]: I0706 23:12:49.438202 2336 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/399a78aab121c00ba879ae339058e519-kubeconfig\") pod \"kube-scheduler-ci-4230-2-1-3-0a35d13a56\" (UID: \"399a78aab121c00ba879ae339058e519\") " pod="kube-system/kube-scheduler-ci-4230-2-1-3-0a35d13a56" Jul 6 23:12:49.438283 
kubelet[2336]: I0706 23:12:49.438262 2336 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/92f9feb38e2b75a82349814e7923f075-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4230-2-1-3-0a35d13a56\" (UID: \"92f9feb38e2b75a82349814e7923f075\") " pod="kube-system/kube-apiserver-ci-4230-2-1-3-0a35d13a56" Jul 6 23:12:49.438889 kubelet[2336]: I0706 23:12:49.438738 2336 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/87a414a1848cebd50fd2e293d48f0f5f-flexvolume-dir\") pod \"kube-controller-manager-ci-4230-2-1-3-0a35d13a56\" (UID: \"87a414a1848cebd50fd2e293d48f0f5f\") " pod="kube-system/kube-controller-manager-ci-4230-2-1-3-0a35d13a56" Jul 6 23:12:49.438889 kubelet[2336]: I0706 23:12:49.438776 2336 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/87a414a1848cebd50fd2e293d48f0f5f-k8s-certs\") pod \"kube-controller-manager-ci-4230-2-1-3-0a35d13a56\" (UID: \"87a414a1848cebd50fd2e293d48f0f5f\") " pod="kube-system/kube-controller-manager-ci-4230-2-1-3-0a35d13a56" Jul 6 23:12:49.438889 kubelet[2336]: I0706 23:12:49.438807 2336 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/87a414a1848cebd50fd2e293d48f0f5f-kubeconfig\") pod \"kube-controller-manager-ci-4230-2-1-3-0a35d13a56\" (UID: \"87a414a1848cebd50fd2e293d48f0f5f\") " pod="kube-system/kube-controller-manager-ci-4230-2-1-3-0a35d13a56" Jul 6 23:12:49.438889 kubelet[2336]: I0706 23:12:49.438875 2336 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/92f9feb38e2b75a82349814e7923f075-ca-certs\") pod 
\"kube-apiserver-ci-4230-2-1-3-0a35d13a56\" (UID: \"92f9feb38e2b75a82349814e7923f075\") " pod="kube-system/kube-apiserver-ci-4230-2-1-3-0a35d13a56" Jul 6 23:12:49.439422 kubelet[2336]: I0706 23:12:49.438909 2336 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/92f9feb38e2b75a82349814e7923f075-k8s-certs\") pod \"kube-apiserver-ci-4230-2-1-3-0a35d13a56\" (UID: \"92f9feb38e2b75a82349814e7923f075\") " pod="kube-system/kube-apiserver-ci-4230-2-1-3-0a35d13a56" Jul 6 23:12:49.439422 kubelet[2336]: I0706 23:12:49.438938 2336 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/87a414a1848cebd50fd2e293d48f0f5f-ca-certs\") pod \"kube-controller-manager-ci-4230-2-1-3-0a35d13a56\" (UID: \"87a414a1848cebd50fd2e293d48f0f5f\") " pod="kube-system/kube-controller-manager-ci-4230-2-1-3-0a35d13a56" Jul 6 23:12:49.439422 kubelet[2336]: I0706 23:12:49.438964 2336 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/87a414a1848cebd50fd2e293d48f0f5f-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4230-2-1-3-0a35d13a56\" (UID: \"87a414a1848cebd50fd2e293d48f0f5f\") " pod="kube-system/kube-controller-manager-ci-4230-2-1-3-0a35d13a56" Jul 6 23:12:49.442541 kubelet[2336]: E0706 23:12:49.441544 2336 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://49.13.31.190:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230-2-1-3-0a35d13a56?timeout=10s\": dial tcp 49.13.31.190:6443: connect: connection refused" interval="400ms" Jul 6 23:12:49.631094 kubelet[2336]: I0706 23:12:49.631026 2336 kubelet_node_status.go:75] "Attempting to register node" node="ci-4230-2-1-3-0a35d13a56" Jul 6 23:12:49.631568 kubelet[2336]: E0706 23:12:49.631419 
2336 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://49.13.31.190:6443/api/v1/nodes\": dial tcp 49.13.31.190:6443: connect: connection refused" node="ci-4230-2-1-3-0a35d13a56" Jul 6 23:12:49.694876 containerd[1504]: time="2025-07-06T23:12:49.694721891Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4230-2-1-3-0a35d13a56,Uid:92f9feb38e2b75a82349814e7923f075,Namespace:kube-system,Attempt:0,}" Jul 6 23:12:49.711961 containerd[1504]: time="2025-07-06T23:12:49.711848512Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4230-2-1-3-0a35d13a56,Uid:87a414a1848cebd50fd2e293d48f0f5f,Namespace:kube-system,Attempt:0,}" Jul 6 23:12:49.724990 containerd[1504]: time="2025-07-06T23:12:49.724860166Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4230-2-1-3-0a35d13a56,Uid:399a78aab121c00ba879ae339058e519,Namespace:kube-system,Attempt:0,}" Jul 6 23:12:49.842876 kubelet[2336]: E0706 23:12:49.842805 2336 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://49.13.31.190:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230-2-1-3-0a35d13a56?timeout=10s\": dial tcp 49.13.31.190:6443: connect: connection refused" interval="800ms" Jul 6 23:12:50.033563 kubelet[2336]: I0706 23:12:50.033447 2336 kubelet_node_status.go:75] "Attempting to register node" node="ci-4230-2-1-3-0a35d13a56" Jul 6 23:12:50.034038 kubelet[2336]: E0706 23:12:50.033923 2336 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://49.13.31.190:6443/api/v1/nodes\": dial tcp 49.13.31.190:6443: connect: connection refused" node="ci-4230-2-1-3-0a35d13a56" Jul 6 23:12:50.104935 kubelet[2336]: W0706 23:12:50.104840 2336 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get 
"https://49.13.31.190:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 49.13.31.190:6443: connect: connection refused Jul 6 23:12:50.105086 kubelet[2336]: E0706 23:12:50.104941 2336 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://49.13.31.190:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 49.13.31.190:6443: connect: connection refused" logger="UnhandledError" Jul 6 23:12:50.145176 kubelet[2336]: W0706 23:12:50.145054 2336 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://49.13.31.190:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230-2-1-3-0a35d13a56&limit=500&resourceVersion=0": dial tcp 49.13.31.190:6443: connect: connection refused Jul 6 23:12:50.145176 kubelet[2336]: E0706 23:12:50.145154 2336 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://49.13.31.190:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230-2-1-3-0a35d13a56&limit=500&resourceVersion=0\": dial tcp 49.13.31.190:6443: connect: connection refused" logger="UnhandledError" Jul 6 23:12:50.253785 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3455456122.mount: Deactivated successfully. 
Jul 6 23:12:50.262099 containerd[1504]: time="2025-07-06T23:12:50.261709639Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 6 23:12:50.263851 containerd[1504]: time="2025-07-06T23:12:50.263779159Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 6 23:12:50.266173 containerd[1504]: time="2025-07-06T23:12:50.266113870Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269193" Jul 6 23:12:50.266941 containerd[1504]: time="2025-07-06T23:12:50.266885638Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jul 6 23:12:50.269092 containerd[1504]: time="2025-07-06T23:12:50.268998529Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 6 23:12:50.270472 containerd[1504]: time="2025-07-06T23:12:50.270317486Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 6 23:12:50.270472 containerd[1504]: time="2025-07-06T23:12:50.270366619Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jul 6 23:12:50.275500 containerd[1504]: time="2025-07-06T23:12:50.275443031Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 6 23:12:50.279543 
containerd[1504]: time="2025-07-06T23:12:50.277581689Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 552.577243ms" Jul 6 23:12:50.281317 containerd[1504]: time="2025-07-06T23:12:50.281269366Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 586.456649ms" Jul 6 23:12:50.323919 containerd[1504]: time="2025-07-06T23:12:50.323452688Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 611.473499ms" Jul 6 23:12:50.392336 containerd[1504]: time="2025-07-06T23:12:50.392127090Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 6 23:12:50.392336 containerd[1504]: time="2025-07-06T23:12:50.392217835Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 6 23:12:50.392336 containerd[1504]: time="2025-07-06T23:12:50.392275170Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:12:50.392773 containerd[1504]: time="2025-07-06T23:12:50.392450418Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:12:50.396400 containerd[1504]: time="2025-07-06T23:12:50.396266609Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 6 23:12:50.396400 containerd[1504]: time="2025-07-06T23:12:50.396354673Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 6 23:12:50.396662 containerd[1504]: time="2025-07-06T23:12:50.396371918Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:12:50.397640 containerd[1504]: time="2025-07-06T23:12:50.397558398Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:12:50.401074 containerd[1504]: time="2025-07-06T23:12:50.400966800Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 6 23:12:50.401074 containerd[1504]: time="2025-07-06T23:12:50.401043620Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 6 23:12:50.401409 containerd[1504]: time="2025-07-06T23:12:50.401059985Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:12:50.402920 containerd[1504]: time="2025-07-06T23:12:50.402725675Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:12:50.432846 systemd[1]: Started cri-containerd-317dec190ae945be8fcd6a0dc254687f91010eda22a949228173f59aca24d5d3.scope - libcontainer container 317dec190ae945be8fcd6a0dc254687f91010eda22a949228173f59aca24d5d3. 
Jul 6 23:12:50.436481 systemd[1]: Started cri-containerd-31fdc8755858593293c877867fce14788048aaa46aaca998d01e28a355436a94.scope - libcontainer container 31fdc8755858593293c877867fce14788048aaa46aaca998d01e28a355436a94. Jul 6 23:12:50.440724 systemd[1]: Started cri-containerd-61a56d8dc3ac0787b919f11fec100bb22f03378a8e3f1c395e68819a13d01da3.scope - libcontainer container 61a56d8dc3ac0787b919f11fec100bb22f03378a8e3f1c395e68819a13d01da3. Jul 6 23:12:50.499872 containerd[1504]: time="2025-07-06T23:12:50.499725973Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4230-2-1-3-0a35d13a56,Uid:87a414a1848cebd50fd2e293d48f0f5f,Namespace:kube-system,Attempt:0,} returns sandbox id \"31fdc8755858593293c877867fce14788048aaa46aaca998d01e28a355436a94\"" Jul 6 23:12:50.507433 containerd[1504]: time="2025-07-06T23:12:50.507370600Z" level=info msg="CreateContainer within sandbox \"31fdc8755858593293c877867fce14788048aaa46aaca998d01e28a355436a94\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jul 6 23:12:50.516948 containerd[1504]: time="2025-07-06T23:12:50.516878810Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4230-2-1-3-0a35d13a56,Uid:92f9feb38e2b75a82349814e7923f075,Namespace:kube-system,Attempt:0,} returns sandbox id \"317dec190ae945be8fcd6a0dc254687f91010eda22a949228173f59aca24d5d3\"" Jul 6 23:12:50.521563 containerd[1504]: time="2025-07-06T23:12:50.521523825Z" level=info msg="CreateContainer within sandbox \"317dec190ae945be8fcd6a0dc254687f91010eda22a949228173f59aca24d5d3\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jul 6 23:12:50.530348 containerd[1504]: time="2025-07-06T23:12:50.530116268Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4230-2-1-3-0a35d13a56,Uid:399a78aab121c00ba879ae339058e519,Namespace:kube-system,Attempt:0,} returns sandbox id \"61a56d8dc3ac0787b919f11fec100bb22f03378a8e3f1c395e68819a13d01da3\"" Jul 6 
23:12:50.535205 containerd[1504]: time="2025-07-06T23:12:50.535161591Z" level=info msg="CreateContainer within sandbox \"61a56d8dc3ac0787b919f11fec100bb22f03378a8e3f1c395e68819a13d01da3\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jul 6 23:12:50.542001 containerd[1504]: time="2025-07-06T23:12:50.541754734Z" level=info msg="CreateContainer within sandbox \"31fdc8755858593293c877867fce14788048aaa46aaca998d01e28a355436a94\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"24d17caf70e162b35f9acefc9dd8871f2c86729c5141626999443213237d0b21\"" Jul 6 23:12:50.543058 containerd[1504]: time="2025-07-06T23:12:50.542888320Z" level=info msg="StartContainer for \"24d17caf70e162b35f9acefc9dd8871f2c86729c5141626999443213237d0b21\"" Jul 6 23:12:50.560003 containerd[1504]: time="2025-07-06T23:12:50.559873471Z" level=info msg="CreateContainer within sandbox \"317dec190ae945be8fcd6a0dc254687f91010eda22a949228173f59aca24d5d3\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"082723c2d6af7c078e4f7dc0e94745236cce454c56212518f641017264ef179b\"" Jul 6 23:12:50.561659 containerd[1504]: time="2025-07-06T23:12:50.561321142Z" level=info msg="StartContainer for \"082723c2d6af7c078e4f7dc0e94745236cce454c56212518f641017264ef179b\"" Jul 6 23:12:50.572538 containerd[1504]: time="2025-07-06T23:12:50.572461673Z" level=info msg="CreateContainer within sandbox \"61a56d8dc3ac0787b919f11fec100bb22f03378a8e3f1c395e68819a13d01da3\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"a7c6789518fdb3ca9479a5264eef02ef9cebba6e9869b5357f481334303a95ac\"" Jul 6 23:12:50.574649 containerd[1504]: time="2025-07-06T23:12:50.573485110Z" level=info msg="StartContainer for \"a7c6789518fdb3ca9479a5264eef02ef9cebba6e9869b5357f481334303a95ac\"" Jul 6 23:12:50.588776 systemd[1]: Started cri-containerd-24d17caf70e162b35f9acefc9dd8871f2c86729c5141626999443213237d0b21.scope - libcontainer container 
24d17caf70e162b35f9acefc9dd8871f2c86729c5141626999443213237d0b21. Jul 6 23:12:50.600707 kubelet[2336]: E0706 23:12:50.599497 2336 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://49.13.31.190:6443/api/v1/namespaces/default/events\": dial tcp 49.13.31.190:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4230-2-1-3-0a35d13a56.184fcc71cd73bfcd default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4230-2-1-3-0a35d13a56,UID:ci-4230-2-1-3-0a35d13a56,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4230-2-1-3-0a35d13a56,},FirstTimestamp:2025-07-06 23:12:49.220698061 +0000 UTC m=+0.427355220,LastTimestamp:2025-07-06 23:12:49.220698061 +0000 UTC m=+0.427355220,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4230-2-1-3-0a35d13a56,}" Jul 6 23:12:50.614007 systemd[1]: Started cri-containerd-082723c2d6af7c078e4f7dc0e94745236cce454c56212518f641017264ef179b.scope - libcontainer container 082723c2d6af7c078e4f7dc0e94745236cce454c56212518f641017264ef179b. Jul 6 23:12:50.627802 systemd[1]: Started cri-containerd-a7c6789518fdb3ca9479a5264eef02ef9cebba6e9869b5357f481334303a95ac.scope - libcontainer container a7c6789518fdb3ca9479a5264eef02ef9cebba6e9869b5357f481334303a95ac. 
Jul 6 23:12:50.637080 kubelet[2336]: W0706 23:12:50.636907 2336 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://49.13.31.190:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 49.13.31.190:6443: connect: connection refused Jul 6 23:12:50.637238 kubelet[2336]: E0706 23:12:50.637096 2336 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://49.13.31.190:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 49.13.31.190:6443: connect: connection refused" logger="UnhandledError" Jul 6 23:12:50.644351 kubelet[2336]: E0706 23:12:50.644298 2336 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://49.13.31.190:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230-2-1-3-0a35d13a56?timeout=10s\": dial tcp 49.13.31.190:6443: connect: connection refused" interval="1.6s" Jul 6 23:12:50.675076 containerd[1504]: time="2025-07-06T23:12:50.674809737Z" level=info msg="StartContainer for \"082723c2d6af7c078e4f7dc0e94745236cce454c56212518f641017264ef179b\" returns successfully" Jul 6 23:12:50.683911 containerd[1504]: time="2025-07-06T23:12:50.683782523Z" level=info msg="StartContainer for \"24d17caf70e162b35f9acefc9dd8871f2c86729c5141626999443213237d0b21\" returns successfully" Jul 6 23:12:50.704425 containerd[1504]: time="2025-07-06T23:12:50.704342520Z" level=info msg="StartContainer for \"a7c6789518fdb3ca9479a5264eef02ef9cebba6e9869b5357f481334303a95ac\" returns successfully" Jul 6 23:12:50.838383 kubelet[2336]: I0706 23:12:50.837108 2336 kubelet_node_status.go:75] "Attempting to register node" node="ci-4230-2-1-3-0a35d13a56" Jul 6 23:12:51.281564 kubelet[2336]: E0706 23:12:51.281368 2336 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node 
\"ci-4230-2-1-3-0a35d13a56\" not found" node="ci-4230-2-1-3-0a35d13a56" Jul 6 23:12:51.285575 kubelet[2336]: E0706 23:12:51.284814 2336 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230-2-1-3-0a35d13a56\" not found" node="ci-4230-2-1-3-0a35d13a56" Jul 6 23:12:51.290260 kubelet[2336]: E0706 23:12:51.290087 2336 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230-2-1-3-0a35d13a56\" not found" node="ci-4230-2-1-3-0a35d13a56" Jul 6 23:12:52.291295 kubelet[2336]: E0706 23:12:52.291051 2336 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230-2-1-3-0a35d13a56\" not found" node="ci-4230-2-1-3-0a35d13a56" Jul 6 23:12:52.292841 kubelet[2336]: E0706 23:12:52.292100 2336 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230-2-1-3-0a35d13a56\" not found" node="ci-4230-2-1-3-0a35d13a56" Jul 6 23:12:52.295637 kubelet[2336]: E0706 23:12:52.293361 2336 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230-2-1-3-0a35d13a56\" not found" node="ci-4230-2-1-3-0a35d13a56" Jul 6 23:12:52.642574 kubelet[2336]: E0706 23:12:52.642276 2336 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4230-2-1-3-0a35d13a56\" not found" node="ci-4230-2-1-3-0a35d13a56" Jul 6 23:12:52.772361 kubelet[2336]: I0706 23:12:52.772112 2336 kubelet_node_status.go:78] "Successfully registered node" node="ci-4230-2-1-3-0a35d13a56" Jul 6 23:12:52.837186 kubelet[2336]: I0706 23:12:52.837138 2336 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4230-2-1-3-0a35d13a56" Jul 6 23:12:52.849682 kubelet[2336]: E0706 23:12:52.849417 2336 kubelet.go:3196] "Failed creating a mirror pod" err="pods 
\"kube-apiserver-ci-4230-2-1-3-0a35d13a56\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4230-2-1-3-0a35d13a56" Jul 6 23:12:52.849682 kubelet[2336]: I0706 23:12:52.849451 2336 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4230-2-1-3-0a35d13a56" Jul 6 23:12:52.855523 kubelet[2336]: E0706 23:12:52.853284 2336 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4230-2-1-3-0a35d13a56\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4230-2-1-3-0a35d13a56" Jul 6 23:12:52.855523 kubelet[2336]: I0706 23:12:52.853316 2336 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4230-2-1-3-0a35d13a56" Jul 6 23:12:52.858142 kubelet[2336]: E0706 23:12:52.858116 2336 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4230-2-1-3-0a35d13a56\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4230-2-1-3-0a35d13a56" Jul 6 23:12:53.230805 kubelet[2336]: I0706 23:12:53.230722 2336 apiserver.go:52] "Watching apiserver" Jul 6 23:12:53.237427 kubelet[2336]: I0706 23:12:53.237373 2336 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jul 6 23:12:55.228440 systemd[1]: Reload requested from client PID 2611 ('systemctl') (unit session-7.scope)... Jul 6 23:12:55.228897 systemd[1]: Reloading... Jul 6 23:12:55.359537 zram_generator::config[2665]: No configuration found. Jul 6 23:12:55.451204 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 6 23:12:55.566630 systemd[1]: Reloading finished in 337 ms. 
Jul 6 23:12:55.596001 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jul 6 23:12:55.611039 systemd[1]: kubelet.service: Deactivated successfully. Jul 6 23:12:55.611546 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 6 23:12:55.611687 systemd[1]: kubelet.service: Consumed 859ms CPU time, 127.8M memory peak. Jul 6 23:12:55.620255 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 6 23:12:55.758706 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 6 23:12:55.763730 (kubelet)[2701]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 6 23:12:55.819814 kubelet[2701]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 6 23:12:55.819814 kubelet[2701]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jul 6 23:12:55.819814 kubelet[2701]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jul 6 23:12:55.819814 kubelet[2701]: I0706 23:12:55.819542 2701 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 6 23:12:55.831225 kubelet[2701]: I0706 23:12:55.830465 2701 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jul 6 23:12:55.832307 kubelet[2701]: I0706 23:12:55.832170 2701 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 6 23:12:55.832788 kubelet[2701]: I0706 23:12:55.832771 2701 server.go:954] "Client rotation is on, will bootstrap in background" Jul 6 23:12:55.834853 kubelet[2701]: I0706 23:12:55.834817 2701 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jul 6 23:12:55.838880 kubelet[2701]: I0706 23:12:55.837901 2701 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 6 23:12:55.844217 kubelet[2701]: E0706 23:12:55.844178 2701 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jul 6 23:12:55.844457 kubelet[2701]: I0706 23:12:55.844346 2701 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jul 6 23:12:55.851479 kubelet[2701]: I0706 23:12:55.851417 2701 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 6 23:12:55.853905 kubelet[2701]: I0706 23:12:55.853845 2701 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 6 23:12:55.854077 kubelet[2701]: I0706 23:12:55.853892 2701 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4230-2-1-3-0a35d13a56","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 6 23:12:55.854077 kubelet[2701]: I0706 23:12:55.854075 2701 topology_manager.go:138] "Creating topology manager with 
none policy" Jul 6 23:12:55.854204 kubelet[2701]: I0706 23:12:55.854085 2701 container_manager_linux.go:304] "Creating device plugin manager" Jul 6 23:12:55.854204 kubelet[2701]: I0706 23:12:55.854132 2701 state_mem.go:36] "Initialized new in-memory state store" Jul 6 23:12:55.854323 kubelet[2701]: I0706 23:12:55.854296 2701 kubelet.go:446] "Attempting to sync node with API server" Jul 6 23:12:55.854323 kubelet[2701]: I0706 23:12:55.854315 2701 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 6 23:12:55.854404 kubelet[2701]: I0706 23:12:55.854334 2701 kubelet.go:352] "Adding apiserver pod source" Jul 6 23:12:55.854404 kubelet[2701]: I0706 23:12:55.854344 2701 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 6 23:12:55.860532 kubelet[2701]: I0706 23:12:55.858775 2701 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Jul 6 23:12:55.860532 kubelet[2701]: I0706 23:12:55.859746 2701 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 6 23:12:55.861040 kubelet[2701]: I0706 23:12:55.861011 2701 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jul 6 23:12:55.861159 kubelet[2701]: I0706 23:12:55.861142 2701 server.go:1287] "Started kubelet" Jul 6 23:12:55.865153 kubelet[2701]: I0706 23:12:55.865120 2701 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 6 23:12:55.875664 kubelet[2701]: I0706 23:12:55.875611 2701 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jul 6 23:12:55.877519 kubelet[2701]: I0706 23:12:55.877008 2701 server.go:479] "Adding debug handlers to kubelet server" Jul 6 23:12:55.880540 kubelet[2701]: I0706 23:12:55.879981 2701 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 6 23:12:55.880540 kubelet[2701]: I0706 23:12:55.880192 2701 server.go:243] "Starting to serve the podresources API" 
endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 6 23:12:55.880540 kubelet[2701]: I0706 23:12:55.880420 2701 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 6 23:12:55.883188 kubelet[2701]: I0706 23:12:55.883156 2701 volume_manager.go:297] "Starting Kubelet Volume Manager" Jul 6 23:12:55.888761 kubelet[2701]: I0706 23:12:55.888720 2701 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jul 6 23:12:55.888861 kubelet[2701]: I0706 23:12:55.888857 2701 reconciler.go:26] "Reconciler: start to sync state" Jul 6 23:12:55.890499 kubelet[2701]: I0706 23:12:55.890442 2701 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 6 23:12:55.894682 kubelet[2701]: I0706 23:12:55.891327 2701 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jul 6 23:12:55.894682 kubelet[2701]: I0706 23:12:55.891355 2701 status_manager.go:227] "Starting to sync pod status with apiserver" Jul 6 23:12:55.894682 kubelet[2701]: I0706 23:12:55.891385 2701 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jul 6 23:12:55.894682 kubelet[2701]: I0706 23:12:55.891394 2701 kubelet.go:2382] "Starting kubelet main sync loop" Jul 6 23:12:55.894682 kubelet[2701]: E0706 23:12:55.891432 2701 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 6 23:12:55.901678 kubelet[2701]: I0706 23:12:55.901641 2701 factory.go:221] Registration of the systemd container factory successfully Jul 6 23:12:55.901806 kubelet[2701]: I0706 23:12:55.901751 2701 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 6 23:12:55.902968 kubelet[2701]: E0706 23:12:55.902932 2701 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 6 23:12:55.905062 kubelet[2701]: I0706 23:12:55.904268 2701 factory.go:221] Registration of the containerd container factory successfully Jul 6 23:12:55.964331 kubelet[2701]: I0706 23:12:55.964296 2701 cpu_manager.go:221] "Starting CPU manager" policy="none" Jul 6 23:12:55.964617 kubelet[2701]: I0706 23:12:55.964594 2701 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jul 6 23:12:55.964748 kubelet[2701]: I0706 23:12:55.964733 2701 state_mem.go:36] "Initialized new in-memory state store" Jul 6 23:12:55.965063 kubelet[2701]: I0706 23:12:55.965040 2701 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jul 6 23:12:55.965179 kubelet[2701]: I0706 23:12:55.965144 2701 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jul 6 23:12:55.965262 kubelet[2701]: I0706 23:12:55.965248 2701 policy_none.go:49] "None policy: Start" Jul 6 23:12:55.965355 kubelet[2701]: I0706 23:12:55.965341 2701 memory_manager.go:186] "Starting memorymanager" policy="None" Jul 6 23:12:55.965470 kubelet[2701]: I0706 23:12:55.965455 2701 
state_mem.go:35] "Initializing new in-memory state store" Jul 6 23:12:55.965811 kubelet[2701]: I0706 23:12:55.965779 2701 state_mem.go:75] "Updated machine memory state" Jul 6 23:12:55.971100 kubelet[2701]: I0706 23:12:55.971062 2701 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 6 23:12:55.971283 kubelet[2701]: I0706 23:12:55.971257 2701 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 6 23:12:55.971335 kubelet[2701]: I0706 23:12:55.971277 2701 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 6 23:12:55.971805 kubelet[2701]: I0706 23:12:55.971775 2701 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 6 23:12:55.974586 kubelet[2701]: E0706 23:12:55.974557 2701 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jul 6 23:12:55.992249 kubelet[2701]: I0706 23:12:55.992196 2701 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4230-2-1-3-0a35d13a56" Jul 6 23:12:55.992822 kubelet[2701]: I0706 23:12:55.992738 2701 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4230-2-1-3-0a35d13a56" Jul 6 23:12:55.993054 kubelet[2701]: I0706 23:12:55.993026 2701 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4230-2-1-3-0a35d13a56" Jul 6 23:12:56.082870 kubelet[2701]: I0706 23:12:56.081010 2701 kubelet_node_status.go:75] "Attempting to register node" node="ci-4230-2-1-3-0a35d13a56" Jul 6 23:12:56.090954 kubelet[2701]: I0706 23:12:56.090884 2701 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/87a414a1848cebd50fd2e293d48f0f5f-flexvolume-dir\") pod 
\"kube-controller-manager-ci-4230-2-1-3-0a35d13a56\" (UID: \"87a414a1848cebd50fd2e293d48f0f5f\") " pod="kube-system/kube-controller-manager-ci-4230-2-1-3-0a35d13a56" Jul 6 23:12:56.090954 kubelet[2701]: I0706 23:12:56.090962 2701 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/87a414a1848cebd50fd2e293d48f0f5f-k8s-certs\") pod \"kube-controller-manager-ci-4230-2-1-3-0a35d13a56\" (UID: \"87a414a1848cebd50fd2e293d48f0f5f\") " pod="kube-system/kube-controller-manager-ci-4230-2-1-3-0a35d13a56" Jul 6 23:12:56.091360 kubelet[2701]: I0706 23:12:56.091024 2701 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/87a414a1848cebd50fd2e293d48f0f5f-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4230-2-1-3-0a35d13a56\" (UID: \"87a414a1848cebd50fd2e293d48f0f5f\") " pod="kube-system/kube-controller-manager-ci-4230-2-1-3-0a35d13a56" Jul 6 23:12:56.091360 kubelet[2701]: I0706 23:12:56.091059 2701 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/92f9feb38e2b75a82349814e7923f075-k8s-certs\") pod \"kube-apiserver-ci-4230-2-1-3-0a35d13a56\" (UID: \"92f9feb38e2b75a82349814e7923f075\") " pod="kube-system/kube-apiserver-ci-4230-2-1-3-0a35d13a56" Jul 6 23:12:56.091360 kubelet[2701]: I0706 23:12:56.091090 2701 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/92f9feb38e2b75a82349814e7923f075-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4230-2-1-3-0a35d13a56\" (UID: \"92f9feb38e2b75a82349814e7923f075\") " pod="kube-system/kube-apiserver-ci-4230-2-1-3-0a35d13a56" Jul 6 23:12:56.091360 kubelet[2701]: I0706 23:12:56.091117 2701 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/87a414a1848cebd50fd2e293d48f0f5f-ca-certs\") pod \"kube-controller-manager-ci-4230-2-1-3-0a35d13a56\" (UID: \"87a414a1848cebd50fd2e293d48f0f5f\") " pod="kube-system/kube-controller-manager-ci-4230-2-1-3-0a35d13a56" Jul 6 23:12:56.091360 kubelet[2701]: I0706 23:12:56.091147 2701 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/87a414a1848cebd50fd2e293d48f0f5f-kubeconfig\") pod \"kube-controller-manager-ci-4230-2-1-3-0a35d13a56\" (UID: \"87a414a1848cebd50fd2e293d48f0f5f\") " pod="kube-system/kube-controller-manager-ci-4230-2-1-3-0a35d13a56" Jul 6 23:12:56.091875 kubelet[2701]: I0706 23:12:56.091174 2701 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/399a78aab121c00ba879ae339058e519-kubeconfig\") pod \"kube-scheduler-ci-4230-2-1-3-0a35d13a56\" (UID: \"399a78aab121c00ba879ae339058e519\") " pod="kube-system/kube-scheduler-ci-4230-2-1-3-0a35d13a56" Jul 6 23:12:56.091875 kubelet[2701]: I0706 23:12:56.091202 2701 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/92f9feb38e2b75a82349814e7923f075-ca-certs\") pod \"kube-apiserver-ci-4230-2-1-3-0a35d13a56\" (UID: \"92f9feb38e2b75a82349814e7923f075\") " pod="kube-system/kube-apiserver-ci-4230-2-1-3-0a35d13a56" Jul 6 23:12:56.099253 kubelet[2701]: I0706 23:12:56.099158 2701 kubelet_node_status.go:124] "Node was previously registered" node="ci-4230-2-1-3-0a35d13a56" Jul 6 23:12:56.099253 kubelet[2701]: I0706 23:12:56.099249 2701 kubelet_node_status.go:78] "Successfully registered node" node="ci-4230-2-1-3-0a35d13a56" Jul 6 23:12:56.234490 sudo[2735]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf 
/opt/bin/cilium.tar.gz -C /opt/bin Jul 6 23:12:56.234874 sudo[2735]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Jul 6 23:12:56.776054 sudo[2735]: pam_unix(sudo:session): session closed for user root Jul 6 23:12:56.858327 kubelet[2701]: I0706 23:12:56.857855 2701 apiserver.go:52] "Watching apiserver" Jul 6 23:12:56.889773 kubelet[2701]: I0706 23:12:56.889692 2701 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jul 6 23:12:56.986061 kubelet[2701]: I0706 23:12:56.985109 2701 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4230-2-1-3-0a35d13a56" podStartSLOduration=1.9850853320000001 podStartE2EDuration="1.985085332s" podCreationTimestamp="2025-07-06 23:12:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-06 23:12:56.969038802 +0000 UTC m=+1.200429328" watchObservedRunningTime="2025-07-06 23:12:56.985085332 +0000 UTC m=+1.216475898" Jul 6 23:12:57.000200 kubelet[2701]: I0706 23:12:56.999808 2701 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4230-2-1-3-0a35d13a56" podStartSLOduration=1.999789307 podStartE2EDuration="1.999789307s" podCreationTimestamp="2025-07-06 23:12:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-06 23:12:56.98546397 +0000 UTC m=+1.216854536" watchObservedRunningTime="2025-07-06 23:12:56.999789307 +0000 UTC m=+1.231179833" Jul 6 23:12:57.018845 kubelet[2701]: I0706 23:12:57.018647 2701 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4230-2-1-3-0a35d13a56" podStartSLOduration=2.01862514 podStartE2EDuration="2.01862514s" podCreationTimestamp="2025-07-06 23:12:55 +0000 UTC" firstStartedPulling="0001-01-01 
00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-06 23:12:57.001841964 +0000 UTC m=+1.233232490" watchObservedRunningTime="2025-07-06 23:12:57.01862514 +0000 UTC m=+1.250015706" Jul 6 23:12:59.107043 sudo[1798]: pam_unix(sudo:session): session closed for user root Jul 6 23:12:59.284894 sshd[1797]: Connection closed by 139.178.89.65 port 48802 Jul 6 23:12:59.285777 sshd-session[1795]: pam_unix(sshd:session): session closed for user core Jul 6 23:12:59.291177 systemd[1]: sshd@7-49.13.31.190:22-139.178.89.65:48802.service: Deactivated successfully. Jul 6 23:12:59.294370 systemd[1]: session-7.scope: Deactivated successfully. Jul 6 23:12:59.294832 systemd[1]: session-7.scope: Consumed 7.375s CPU time, 263.8M memory peak. Jul 6 23:12:59.296326 systemd-logind[1483]: Session 7 logged out. Waiting for processes to exit. Jul 6 23:12:59.297404 systemd-logind[1483]: Removed session 7. Jul 6 23:13:00.937180 kubelet[2701]: I0706 23:13:00.937027 2701 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jul 6 23:13:00.938488 kubelet[2701]: I0706 23:13:00.938242 2701 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jul 6 23:13:00.938581 containerd[1504]: time="2025-07-06T23:13:00.937825103Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jul 6 23:13:01.730955 systemd[1]: Created slice kubepods-besteffort-pod9d6077a8_279f_4912_bf57_1b6b2d55519a.slice - libcontainer container kubepods-besteffort-pod9d6077a8_279f_4912_bf57_1b6b2d55519a.slice. Jul 6 23:13:01.764829 systemd[1]: Created slice kubepods-burstable-pod104def78_52ea_4efd_93f4_d3d940ae9b38.slice - libcontainer container kubepods-burstable-pod104def78_52ea_4efd_93f4_d3d940ae9b38.slice. 
Jul 6 23:13:01.831596 kubelet[2701]: I0706 23:13:01.830346 2701 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/104def78-52ea-4efd-93f4-d3d940ae9b38-cilium-cgroup\") pod \"cilium-wb7s4\" (UID: \"104def78-52ea-4efd-93f4-d3d940ae9b38\") " pod="kube-system/cilium-wb7s4" Jul 6 23:13:01.831596 kubelet[2701]: I0706 23:13:01.830414 2701 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/104def78-52ea-4efd-93f4-d3d940ae9b38-cni-path\") pod \"cilium-wb7s4\" (UID: \"104def78-52ea-4efd-93f4-d3d940ae9b38\") " pod="kube-system/cilium-wb7s4" Jul 6 23:13:01.831596 kubelet[2701]: I0706 23:13:01.830475 2701 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/104def78-52ea-4efd-93f4-d3d940ae9b38-etc-cni-netd\") pod \"cilium-wb7s4\" (UID: \"104def78-52ea-4efd-93f4-d3d940ae9b38\") " pod="kube-system/cilium-wb7s4" Jul 6 23:13:01.831596 kubelet[2701]: I0706 23:13:01.830518 2701 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/104def78-52ea-4efd-93f4-d3d940ae9b38-host-proc-sys-kernel\") pod \"cilium-wb7s4\" (UID: \"104def78-52ea-4efd-93f4-d3d940ae9b38\") " pod="kube-system/cilium-wb7s4" Jul 6 23:13:01.831596 kubelet[2701]: I0706 23:13:01.830559 2701 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9d6077a8-279f-4912-bf57-1b6b2d55519a-lib-modules\") pod \"kube-proxy-4qnns\" (UID: \"9d6077a8-279f-4912-bf57-1b6b2d55519a\") " pod="kube-system/kube-proxy-4qnns" Jul 6 23:13:01.831596 kubelet[2701]: I0706 23:13:01.830596 2701 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/104def78-52ea-4efd-93f4-d3d940ae9b38-clustermesh-secrets\") pod \"cilium-wb7s4\" (UID: \"104def78-52ea-4efd-93f4-d3d940ae9b38\") " pod="kube-system/cilium-wb7s4" Jul 6 23:13:01.831957 kubelet[2701]: I0706 23:13:01.830623 2701 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/104def78-52ea-4efd-93f4-d3d940ae9b38-hubble-tls\") pod \"cilium-wb7s4\" (UID: \"104def78-52ea-4efd-93f4-d3d940ae9b38\") " pod="kube-system/cilium-wb7s4" Jul 6 23:13:01.831957 kubelet[2701]: I0706 23:13:01.830652 2701 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9d6077a8-279f-4912-bf57-1b6b2d55519a-xtables-lock\") pod \"kube-proxy-4qnns\" (UID: \"9d6077a8-279f-4912-bf57-1b6b2d55519a\") " pod="kube-system/kube-proxy-4qnns" Jul 6 23:13:01.831957 kubelet[2701]: I0706 23:13:01.830682 2701 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/104def78-52ea-4efd-93f4-d3d940ae9b38-bpf-maps\") pod \"cilium-wb7s4\" (UID: \"104def78-52ea-4efd-93f4-d3d940ae9b38\") " pod="kube-system/cilium-wb7s4" Jul 6 23:13:01.831957 kubelet[2701]: I0706 23:13:01.830718 2701 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/104def78-52ea-4efd-93f4-d3d940ae9b38-xtables-lock\") pod \"cilium-wb7s4\" (UID: \"104def78-52ea-4efd-93f4-d3d940ae9b38\") " pod="kube-system/cilium-wb7s4" Jul 6 23:13:01.831957 kubelet[2701]: I0706 23:13:01.830751 2701 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: 
\"kubernetes.io/configmap/104def78-52ea-4efd-93f4-d3d940ae9b38-cilium-config-path\") pod \"cilium-wb7s4\" (UID: \"104def78-52ea-4efd-93f4-d3d940ae9b38\") " pod="kube-system/cilium-wb7s4" Jul 6 23:13:01.831957 kubelet[2701]: I0706 23:13:01.830778 2701 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/104def78-52ea-4efd-93f4-d3d940ae9b38-host-proc-sys-net\") pod \"cilium-wb7s4\" (UID: \"104def78-52ea-4efd-93f4-d3d940ae9b38\") " pod="kube-system/cilium-wb7s4" Jul 6 23:13:01.832152 kubelet[2701]: I0706 23:13:01.830811 2701 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/104def78-52ea-4efd-93f4-d3d940ae9b38-cilium-run\") pod \"cilium-wb7s4\" (UID: \"104def78-52ea-4efd-93f4-d3d940ae9b38\") " pod="kube-system/cilium-wb7s4" Jul 6 23:13:01.832152 kubelet[2701]: I0706 23:13:01.830930 2701 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fskcw\" (UniqueName: \"kubernetes.io/projected/104def78-52ea-4efd-93f4-d3d940ae9b38-kube-api-access-fskcw\") pod \"cilium-wb7s4\" (UID: \"104def78-52ea-4efd-93f4-d3d940ae9b38\") " pod="kube-system/cilium-wb7s4" Jul 6 23:13:01.832152 kubelet[2701]: I0706 23:13:01.830965 2701 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/9d6077a8-279f-4912-bf57-1b6b2d55519a-kube-proxy\") pod \"kube-proxy-4qnns\" (UID: \"9d6077a8-279f-4912-bf57-1b6b2d55519a\") " pod="kube-system/kube-proxy-4qnns" Jul 6 23:13:01.832152 kubelet[2701]: I0706 23:13:01.830999 2701 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jm998\" (UniqueName: \"kubernetes.io/projected/9d6077a8-279f-4912-bf57-1b6b2d55519a-kube-api-access-jm998\") pod 
\"kube-proxy-4qnns\" (UID: \"9d6077a8-279f-4912-bf57-1b6b2d55519a\") " pod="kube-system/kube-proxy-4qnns" Jul 6 23:13:01.832152 kubelet[2701]: I0706 23:13:01.831028 2701 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/104def78-52ea-4efd-93f4-d3d940ae9b38-hostproc\") pod \"cilium-wb7s4\" (UID: \"104def78-52ea-4efd-93f4-d3d940ae9b38\") " pod="kube-system/cilium-wb7s4" Jul 6 23:13:01.832152 kubelet[2701]: I0706 23:13:01.831052 2701 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/104def78-52ea-4efd-93f4-d3d940ae9b38-lib-modules\") pod \"cilium-wb7s4\" (UID: \"104def78-52ea-4efd-93f4-d3d940ae9b38\") " pod="kube-system/cilium-wb7s4" Jul 6 23:13:02.041689 containerd[1504]: time="2025-07-06T23:13:02.041188415Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-4qnns,Uid:9d6077a8-279f-4912-bf57-1b6b2d55519a,Namespace:kube-system,Attempt:0,}" Jul 6 23:13:02.071086 containerd[1504]: time="2025-07-06T23:13:02.071006007Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-wb7s4,Uid:104def78-52ea-4efd-93f4-d3d940ae9b38,Namespace:kube-system,Attempt:0,}" Jul 6 23:13:02.088685 containerd[1504]: time="2025-07-06T23:13:02.085794823Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 6 23:13:02.088685 containerd[1504]: time="2025-07-06T23:13:02.085879077Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 6 23:13:02.088685 containerd[1504]: time="2025-07-06T23:13:02.085891119Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:13:02.088685 containerd[1504]: time="2025-07-06T23:13:02.086030901Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:13:02.123815 systemd[1]: Started cri-containerd-a0e62697caaf693378c3a18fc897825230ad5edf495d001446295a62219fc28a.scope - libcontainer container a0e62697caaf693378c3a18fc897825230ad5edf495d001446295a62219fc28a. Jul 6 23:13:02.141064 systemd[1]: Created slice kubepods-besteffort-podb7be8a7a_8777_4cd2_8363_3f5e9cd2b0db.slice - libcontainer container kubepods-besteffort-podb7be8a7a_8777_4cd2_8363_3f5e9cd2b0db.slice. Jul 6 23:13:02.148697 containerd[1504]: time="2025-07-06T23:13:02.148538025Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 6 23:13:02.148816 containerd[1504]: time="2025-07-06T23:13:02.148723975Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 6 23:13:02.149785 containerd[1504]: time="2025-07-06T23:13:02.149585954Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:13:02.149895 containerd[1504]: time="2025-07-06T23:13:02.149797028Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:13:02.181771 systemd[1]: Started cri-containerd-e8f146d015dc94751a8bd9bbf1e14aa055896f2a1a8974a8bdb583c12ce57043.scope - libcontainer container e8f146d015dc94751a8bd9bbf1e14aa055896f2a1a8974a8bdb583c12ce57043. 
Jul 6 23:13:02.190854 containerd[1504]: time="2025-07-06T23:13:02.190407073Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-4qnns,Uid:9d6077a8-279f-4912-bf57-1b6b2d55519a,Namespace:kube-system,Attempt:0,} returns sandbox id \"a0e62697caaf693378c3a18fc897825230ad5edf495d001446295a62219fc28a\"" Jul 6 23:13:02.198417 containerd[1504]: time="2025-07-06T23:13:02.198375714Z" level=info msg="CreateContainer within sandbox \"a0e62697caaf693378c3a18fc897825230ad5edf495d001446295a62219fc28a\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jul 6 23:13:02.226638 containerd[1504]: time="2025-07-06T23:13:02.226574445Z" level=info msg="CreateContainer within sandbox \"a0e62697caaf693378c3a18fc897825230ad5edf495d001446295a62219fc28a\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"632578688b1f3871ff0e598b8744bee18c1f987d4ec55dd96be9f750265cb5b3\"" Jul 6 23:13:02.226814 containerd[1504]: time="2025-07-06T23:13:02.226717788Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-wb7s4,Uid:104def78-52ea-4efd-93f4-d3d940ae9b38,Namespace:kube-system,Attempt:0,} returns sandbox id \"e8f146d015dc94751a8bd9bbf1e14aa055896f2a1a8974a8bdb583c12ce57043\"" Jul 6 23:13:02.228358 containerd[1504]: time="2025-07-06T23:13:02.228186984Z" level=info msg="StartContainer for \"632578688b1f3871ff0e598b8744bee18c1f987d4ec55dd96be9f750265cb5b3\"" Jul 6 23:13:02.232773 containerd[1504]: time="2025-07-06T23:13:02.232734154Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jul 6 23:13:02.234399 kubelet[2701]: I0706 23:13:02.234322 2701 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b7be8a7a-8777-4cd2-8363-3f5e9cd2b0db-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-xjcbm\" (UID: \"b7be8a7a-8777-4cd2-8363-3f5e9cd2b0db\") " 
pod="kube-system/cilium-operator-6c4d7847fc-xjcbm" Jul 6 23:13:02.234399 kubelet[2701]: I0706 23:13:02.234397 2701 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vdrq6\" (UniqueName: \"kubernetes.io/projected/b7be8a7a-8777-4cd2-8363-3f5e9cd2b0db-kube-api-access-vdrq6\") pod \"cilium-operator-6c4d7847fc-xjcbm\" (UID: \"b7be8a7a-8777-4cd2-8363-3f5e9cd2b0db\") " pod="kube-system/cilium-operator-6c4d7847fc-xjcbm" Jul 6 23:13:02.268802 systemd[1]: Started cri-containerd-632578688b1f3871ff0e598b8744bee18c1f987d4ec55dd96be9f750265cb5b3.scope - libcontainer container 632578688b1f3871ff0e598b8744bee18c1f987d4ec55dd96be9f750265cb5b3. Jul 6 23:13:02.311386 containerd[1504]: time="2025-07-06T23:13:02.310837505Z" level=info msg="StartContainer for \"632578688b1f3871ff0e598b8744bee18c1f987d4ec55dd96be9f750265cb5b3\" returns successfully" Jul 6 23:13:02.448859 containerd[1504]: time="2025-07-06T23:13:02.448301634Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-xjcbm,Uid:b7be8a7a-8777-4cd2-8363-3f5e9cd2b0db,Namespace:kube-system,Attempt:0,}" Jul 6 23:13:02.477015 containerd[1504]: time="2025-07-06T23:13:02.476892388Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 6 23:13:02.477015 containerd[1504]: time="2025-07-06T23:13:02.476962959Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 6 23:13:02.477015 containerd[1504]: time="2025-07-06T23:13:02.476985923Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:13:02.478885 containerd[1504]: time="2025-07-06T23:13:02.477071336Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:13:02.502280 systemd[1]: Started cri-containerd-eee0137e2f644c279974c4c69541f9ae085c4e810ad500dd5358ab359ed0458f.scope - libcontainer container eee0137e2f644c279974c4c69541f9ae085c4e810ad500dd5358ab359ed0458f. Jul 6 23:13:02.563066 containerd[1504]: time="2025-07-06T23:13:02.562603800Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-xjcbm,Uid:b7be8a7a-8777-4cd2-8363-3f5e9cd2b0db,Namespace:kube-system,Attempt:0,} returns sandbox id \"eee0137e2f644c279974c4c69541f9ae085c4e810ad500dd5358ab359ed0458f\"" Jul 6 23:13:03.005908 kubelet[2701]: I0706 23:13:03.005838 2701 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-4qnns" podStartSLOduration=2.005818429 podStartE2EDuration="2.005818429s" podCreationTimestamp="2025-07-06 23:13:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-06 23:13:02.99225468 +0000 UTC m=+7.223645246" watchObservedRunningTime="2025-07-06 23:13:03.005818429 +0000 UTC m=+7.237208955" Jul 6 23:13:06.438778 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount23857862.mount: Deactivated successfully. 
Jul 6 23:13:08.041427 containerd[1504]: time="2025-07-06T23:13:08.040002720Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:13:08.041427 containerd[1504]: time="2025-07-06T23:13:08.041357417Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710" Jul 6 23:13:08.042097 containerd[1504]: time="2025-07-06T23:13:08.042064390Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:13:08.044008 containerd[1504]: time="2025-07-06T23:13:08.043958557Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 5.811058936s" Jul 6 23:13:08.044008 containerd[1504]: time="2025-07-06T23:13:08.044005003Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Jul 6 23:13:08.046967 containerd[1504]: time="2025-07-06T23:13:08.046928225Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jul 6 23:13:08.048858 containerd[1504]: time="2025-07-06T23:13:08.048820152Z" level=info msg="CreateContainer within sandbox \"e8f146d015dc94751a8bd9bbf1e14aa055896f2a1a8974a8bdb583c12ce57043\" for container 
&ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jul 6 23:13:08.073637 containerd[1504]: time="2025-07-06T23:13:08.073590867Z" level=info msg="CreateContainer within sandbox \"e8f146d015dc94751a8bd9bbf1e14aa055896f2a1a8974a8bdb583c12ce57043\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"1f478a3307cc11954e953a7767be285703fe17ff314b6e2f8772c9bbc99d5a35\"" Jul 6 23:13:08.074814 containerd[1504]: time="2025-07-06T23:13:08.074782622Z" level=info msg="StartContainer for \"1f478a3307cc11954e953a7767be285703fe17ff314b6e2f8772c9bbc99d5a35\"" Jul 6 23:13:08.117745 systemd[1]: Started cri-containerd-1f478a3307cc11954e953a7767be285703fe17ff314b6e2f8772c9bbc99d5a35.scope - libcontainer container 1f478a3307cc11954e953a7767be285703fe17ff314b6e2f8772c9bbc99d5a35. Jul 6 23:13:08.156431 containerd[1504]: time="2025-07-06T23:13:08.156229338Z" level=info msg="StartContainer for \"1f478a3307cc11954e953a7767be285703fe17ff314b6e2f8772c9bbc99d5a35\" returns successfully" Jul 6 23:13:08.174340 systemd[1]: cri-containerd-1f478a3307cc11954e953a7767be285703fe17ff314b6e2f8772c9bbc99d5a35.scope: Deactivated successfully. 
Jul 6 23:13:08.394079 containerd[1504]: time="2025-07-06T23:13:08.393539329Z" level=info msg="shim disconnected" id=1f478a3307cc11954e953a7767be285703fe17ff314b6e2f8772c9bbc99d5a35 namespace=k8s.io Jul 6 23:13:08.394079 containerd[1504]: time="2025-07-06T23:13:08.393625300Z" level=warning msg="cleaning up after shim disconnected" id=1f478a3307cc11954e953a7767be285703fe17ff314b6e2f8772c9bbc99d5a35 namespace=k8s.io Jul 6 23:13:08.394079 containerd[1504]: time="2025-07-06T23:13:08.393711552Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 6 23:13:08.407191 containerd[1504]: time="2025-07-06T23:13:08.407098540Z" level=warning msg="cleanup warnings time=\"2025-07-06T23:13:08Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jul 6 23:13:08.987988 containerd[1504]: time="2025-07-06T23:13:08.987846980Z" level=info msg="CreateContainer within sandbox \"e8f146d015dc94751a8bd9bbf1e14aa055896f2a1a8974a8bdb583c12ce57043\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jul 6 23:13:09.015583 containerd[1504]: time="2025-07-06T23:13:09.015009834Z" level=info msg="CreateContainer within sandbox \"e8f146d015dc94751a8bd9bbf1e14aa055896f2a1a8974a8bdb583c12ce57043\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"6c2b38de0cef84170bf239afb7106304f3e661ef2eef628e396ecbad838f4031\"" Jul 6 23:13:09.018679 containerd[1504]: time="2025-07-06T23:13:09.017736499Z" level=info msg="StartContainer for \"6c2b38de0cef84170bf239afb7106304f3e661ef2eef628e396ecbad838f4031\"" Jul 6 23:13:09.053869 systemd[1]: Started cri-containerd-6c2b38de0cef84170bf239afb7106304f3e661ef2eef628e396ecbad838f4031.scope - libcontainer container 6c2b38de0cef84170bf239afb7106304f3e661ef2eef628e396ecbad838f4031. 
Jul 6 23:13:09.063354 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1f478a3307cc11954e953a7767be285703fe17ff314b6e2f8772c9bbc99d5a35-rootfs.mount: Deactivated successfully. Jul 6 23:13:09.099825 containerd[1504]: time="2025-07-06T23:13:09.099767405Z" level=info msg="StartContainer for \"6c2b38de0cef84170bf239afb7106304f3e661ef2eef628e396ecbad838f4031\" returns successfully" Jul 6 23:13:09.114898 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 6 23:13:09.115133 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jul 6 23:13:09.115759 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jul 6 23:13:09.125571 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 6 23:13:09.129030 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Jul 6 23:13:09.130932 systemd[1]: cri-containerd-6c2b38de0cef84170bf239afb7106304f3e661ef2eef628e396ecbad838f4031.scope: Deactivated successfully. Jul 6 23:13:09.150650 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 6 23:13:09.163832 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6c2b38de0cef84170bf239afb7106304f3e661ef2eef628e396ecbad838f4031-rootfs.mount: Deactivated successfully. 
Jul 6 23:13:09.170501 containerd[1504]: time="2025-07-06T23:13:09.170348622Z" level=info msg="shim disconnected" id=6c2b38de0cef84170bf239afb7106304f3e661ef2eef628e396ecbad838f4031 namespace=k8s.io Jul 6 23:13:09.170501 containerd[1504]: time="2025-07-06T23:13:09.170482479Z" level=warning msg="cleaning up after shim disconnected" id=6c2b38de0cef84170bf239afb7106304f3e661ef2eef628e396ecbad838f4031 namespace=k8s.io Jul 6 23:13:09.170501 containerd[1504]: time="2025-07-06T23:13:09.170494080Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 6 23:13:09.998125 containerd[1504]: time="2025-07-06T23:13:09.996288277Z" level=info msg="CreateContainer within sandbox \"e8f146d015dc94751a8bd9bbf1e14aa055896f2a1a8974a8bdb583c12ce57043\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jul 6 23:13:10.024871 containerd[1504]: time="2025-07-06T23:13:10.024813519Z" level=info msg="CreateContainer within sandbox \"e8f146d015dc94751a8bd9bbf1e14aa055896f2a1a8974a8bdb583c12ce57043\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"77e45151578947b8133d23a376cdaef611a135f1dab6f7c7c63cf757c97eac30\"" Jul 6 23:13:10.026320 containerd[1504]: time="2025-07-06T23:13:10.026158844Z" level=info msg="StartContainer for \"77e45151578947b8133d23a376cdaef611a135f1dab6f7c7c63cf757c97eac30\"" Jul 6 23:13:10.067801 systemd[1]: Started cri-containerd-77e45151578947b8133d23a376cdaef611a135f1dab6f7c7c63cf757c97eac30.scope - libcontainer container 77e45151578947b8133d23a376cdaef611a135f1dab6f7c7c63cf757c97eac30. Jul 6 23:13:10.072464 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount683998220.mount: Deactivated successfully. 
Jul 6 23:13:10.133441 containerd[1504]: time="2025-07-06T23:13:10.133301370Z" level=info msg="StartContainer for \"77e45151578947b8133d23a376cdaef611a135f1dab6f7c7c63cf757c97eac30\" returns successfully" Jul 6 23:13:10.134385 systemd[1]: cri-containerd-77e45151578947b8133d23a376cdaef611a135f1dab6f7c7c63cf757c97eac30.scope: Deactivated successfully. Jul 6 23:13:10.175726 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-77e45151578947b8133d23a376cdaef611a135f1dab6f7c7c63cf757c97eac30-rootfs.mount: Deactivated successfully. Jul 6 23:13:10.203195 containerd[1504]: time="2025-07-06T23:13:10.203046381Z" level=info msg="shim disconnected" id=77e45151578947b8133d23a376cdaef611a135f1dab6f7c7c63cf757c97eac30 namespace=k8s.io Jul 6 23:13:10.203638 containerd[1504]: time="2025-07-06T23:13:10.203420827Z" level=warning msg="cleaning up after shim disconnected" id=77e45151578947b8133d23a376cdaef611a135f1dab6f7c7c63cf757c97eac30 namespace=k8s.io Jul 6 23:13:10.203638 containerd[1504]: time="2025-07-06T23:13:10.203437869Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 6 23:13:10.340697 containerd[1504]: time="2025-07-06T23:13:10.338985845Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:13:10.340697 containerd[1504]: time="2025-07-06T23:13:10.340384937Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306" Jul 6 23:13:10.341926 containerd[1504]: time="2025-07-06T23:13:10.341819914Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:13:10.347564 containerd[1504]: time="2025-07-06T23:13:10.346483887Z" level=info msg="Pulled image 
\"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 2.299340634s" Jul 6 23:13:10.347564 containerd[1504]: time="2025-07-06T23:13:10.346588780Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Jul 6 23:13:10.354720 containerd[1504]: time="2025-07-06T23:13:10.354662532Z" level=info msg="CreateContainer within sandbox \"eee0137e2f644c279974c4c69541f9ae085c4e810ad500dd5358ab359ed0458f\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jul 6 23:13:10.376552 containerd[1504]: time="2025-07-06T23:13:10.376458090Z" level=info msg="CreateContainer within sandbox \"eee0137e2f644c279974c4c69541f9ae085c4e810ad500dd5358ab359ed0458f\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"28414dbd131fc934091e4811a62c317d0d2523c319c6e79b63596cffccac4cf8\"" Jul 6 23:13:10.377889 containerd[1504]: time="2025-07-06T23:13:10.377802495Z" level=info msg="StartContainer for \"28414dbd131fc934091e4811a62c317d0d2523c319c6e79b63596cffccac4cf8\"" Jul 6 23:13:10.415271 systemd[1]: Started cri-containerd-28414dbd131fc934091e4811a62c317d0d2523c319c6e79b63596cffccac4cf8.scope - libcontainer container 28414dbd131fc934091e4811a62c317d0d2523c319c6e79b63596cffccac4cf8. 
Jul 6 23:13:10.452945 containerd[1504]: time="2025-07-06T23:13:10.452482232Z" level=info msg="StartContainer for \"28414dbd131fc934091e4811a62c317d0d2523c319c6e79b63596cffccac4cf8\" returns successfully" Jul 6 23:13:11.004597 containerd[1504]: time="2025-07-06T23:13:11.004242902Z" level=info msg="CreateContainer within sandbox \"e8f146d015dc94751a8bd9bbf1e14aa055896f2a1a8974a8bdb583c12ce57043\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jul 6 23:13:11.027217 containerd[1504]: time="2025-07-06T23:13:11.027144636Z" level=info msg="CreateContainer within sandbox \"e8f146d015dc94751a8bd9bbf1e14aa055896f2a1a8974a8bdb583c12ce57043\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"ac8e6579538cb29a81d27ffa7eba31f1e1b857bb83eb72f542cc1e1a3525c069\"" Jul 6 23:13:11.030090 containerd[1504]: time="2025-07-06T23:13:11.027797794Z" level=info msg="StartContainer for \"ac8e6579538cb29a81d27ffa7eba31f1e1b857bb83eb72f542cc1e1a3525c069\"" Jul 6 23:13:11.091799 systemd[1]: Started cri-containerd-ac8e6579538cb29a81d27ffa7eba31f1e1b857bb83eb72f542cc1e1a3525c069.scope - libcontainer container ac8e6579538cb29a81d27ffa7eba31f1e1b857bb83eb72f542cc1e1a3525c069. Jul 6 23:13:11.119361 kubelet[2701]: I0706 23:13:11.119299 2701 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-xjcbm" podStartSLOduration=1.333744948 podStartE2EDuration="9.119282156s" podCreationTimestamp="2025-07-06 23:13:02 +0000 UTC" firstStartedPulling="2025-07-06 23:13:02.565842841 +0000 UTC m=+6.797233367" lastFinishedPulling="2025-07-06 23:13:10.351380049 +0000 UTC m=+14.582770575" observedRunningTime="2025-07-06 23:13:11.114746335 +0000 UTC m=+15.346136861" watchObservedRunningTime="2025-07-06 23:13:11.119282156 +0000 UTC m=+15.350672682" Jul 6 23:13:11.154307 systemd[1]: cri-containerd-ac8e6579538cb29a81d27ffa7eba31f1e1b857bb83eb72f542cc1e1a3525c069.scope: Deactivated successfully. 
Jul 6 23:13:11.156952 containerd[1504]: time="2025-07-06T23:13:11.156902368Z" level=info msg="StartContainer for \"ac8e6579538cb29a81d27ffa7eba31f1e1b857bb83eb72f542cc1e1a3525c069\" returns successfully" Jul 6 23:13:11.192243 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ac8e6579538cb29a81d27ffa7eba31f1e1b857bb83eb72f542cc1e1a3525c069-rootfs.mount: Deactivated successfully. Jul 6 23:13:11.204857 containerd[1504]: time="2025-07-06T23:13:11.204639427Z" level=info msg="shim disconnected" id=ac8e6579538cb29a81d27ffa7eba31f1e1b857bb83eb72f542cc1e1a3525c069 namespace=k8s.io Jul 6 23:13:11.204857 containerd[1504]: time="2025-07-06T23:13:11.204776403Z" level=warning msg="cleaning up after shim disconnected" id=ac8e6579538cb29a81d27ffa7eba31f1e1b857bb83eb72f542cc1e1a3525c069 namespace=k8s.io Jul 6 23:13:11.204857 containerd[1504]: time="2025-07-06T23:13:11.204786485Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 6 23:13:12.013032 containerd[1504]: time="2025-07-06T23:13:12.012838277Z" level=info msg="CreateContainer within sandbox \"e8f146d015dc94751a8bd9bbf1e14aa055896f2a1a8974a8bdb583c12ce57043\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jul 6 23:13:12.035014 containerd[1504]: time="2025-07-06T23:13:12.034845512Z" level=info msg="CreateContainer within sandbox \"e8f146d015dc94751a8bd9bbf1e14aa055896f2a1a8974a8bdb583c12ce57043\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"3f313c91c29354e025562271c6139165e1bbd460d57c6f6213e0bd81cebc05e7\"" Jul 6 23:13:12.036331 containerd[1504]: time="2025-07-06T23:13:12.035987444Z" level=info msg="StartContainer for \"3f313c91c29354e025562271c6139165e1bbd460d57c6f6213e0bd81cebc05e7\"" Jul 6 23:13:12.069757 systemd[1]: Started cri-containerd-3f313c91c29354e025562271c6139165e1bbd460d57c6f6213e0bd81cebc05e7.scope - libcontainer container 3f313c91c29354e025562271c6139165e1bbd460d57c6f6213e0bd81cebc05e7. 
Jul 6 23:13:12.104294 containerd[1504]: time="2025-07-06T23:13:12.104228368Z" level=info msg="StartContainer for \"3f313c91c29354e025562271c6139165e1bbd460d57c6f6213e0bd81cebc05e7\" returns successfully" Jul 6 23:13:12.282600 kubelet[2701]: I0706 23:13:12.281763 2701 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jul 6 23:13:12.335052 systemd[1]: Created slice kubepods-burstable-pod1e722ce2_7126_45f9_8e79_a6a377ffab9c.slice - libcontainer container kubepods-burstable-pod1e722ce2_7126_45f9_8e79_a6a377ffab9c.slice. Jul 6 23:13:12.344888 systemd[1]: Created slice kubepods-burstable-pod98d0a10f_b83d_4de7_9eea_5e152df14ab5.slice - libcontainer container kubepods-burstable-pod98d0a10f_b83d_4de7_9eea_5e152df14ab5.slice. Jul 6 23:13:12.412088 kubelet[2701]: I0706 23:13:12.411784 2701 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-knqhc\" (UniqueName: \"kubernetes.io/projected/1e722ce2-7126-45f9-8e79-a6a377ffab9c-kube-api-access-knqhc\") pod \"coredns-668d6bf9bc-4rsg6\" (UID: \"1e722ce2-7126-45f9-8e79-a6a377ffab9c\") " pod="kube-system/coredns-668d6bf9bc-4rsg6" Jul 6 23:13:12.412088 kubelet[2701]: I0706 23:13:12.411835 2701 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-98wvb\" (UniqueName: \"kubernetes.io/projected/98d0a10f-b83d-4de7-9eea-5e152df14ab5-kube-api-access-98wvb\") pod \"coredns-668d6bf9bc-rqdmm\" (UID: \"98d0a10f-b83d-4de7-9eea-5e152df14ab5\") " pod="kube-system/coredns-668d6bf9bc-rqdmm" Jul 6 23:13:12.412088 kubelet[2701]: I0706 23:13:12.411870 2701 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1e722ce2-7126-45f9-8e79-a6a377ffab9c-config-volume\") pod \"coredns-668d6bf9bc-4rsg6\" (UID: \"1e722ce2-7126-45f9-8e79-a6a377ffab9c\") " pod="kube-system/coredns-668d6bf9bc-4rsg6" Jul 6 23:13:12.412088 
kubelet[2701]: I0706 23:13:12.411898 2701 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/98d0a10f-b83d-4de7-9eea-5e152df14ab5-config-volume\") pod \"coredns-668d6bf9bc-rqdmm\" (UID: \"98d0a10f-b83d-4de7-9eea-5e152df14ab5\") " pod="kube-system/coredns-668d6bf9bc-rqdmm" Jul 6 23:13:12.640847 containerd[1504]: time="2025-07-06T23:13:12.640686616Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-4rsg6,Uid:1e722ce2-7126-45f9-8e79-a6a377ffab9c,Namespace:kube-system,Attempt:0,}" Jul 6 23:13:12.649928 containerd[1504]: time="2025-07-06T23:13:12.649564487Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-rqdmm,Uid:98d0a10f-b83d-4de7-9eea-5e152df14ab5,Namespace:kube-system,Attempt:0,}" Jul 6 23:13:14.472590 systemd-networkd[1397]: cilium_host: Link UP Jul 6 23:13:14.472760 systemd-networkd[1397]: cilium_net: Link UP Jul 6 23:13:14.472894 systemd-networkd[1397]: cilium_net: Gained carrier Jul 6 23:13:14.473015 systemd-networkd[1397]: cilium_host: Gained carrier Jul 6 23:13:14.596252 systemd-networkd[1397]: cilium_vxlan: Link UP Jul 6 23:13:14.597020 systemd-networkd[1397]: cilium_vxlan: Gained carrier Jul 6 23:13:14.885557 kernel: NET: Registered PF_ALG protocol family Jul 6 23:13:14.951750 systemd-networkd[1397]: cilium_host: Gained IPv6LL Jul 6 23:13:15.295777 systemd-networkd[1397]: cilium_net: Gained IPv6LL Jul 6 23:13:15.681722 systemd-networkd[1397]: lxc_health: Link UP Jul 6 23:13:15.693713 systemd-networkd[1397]: cilium_vxlan: Gained IPv6LL Jul 6 23:13:15.707329 systemd-networkd[1397]: lxc_health: Gained carrier Jul 6 23:13:16.104673 kubelet[2701]: I0706 23:13:16.104253 2701 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-wb7s4" podStartSLOduration=9.28906688 podStartE2EDuration="15.104222096s" podCreationTimestamp="2025-07-06 23:13:01 +0000 UTC" 
firstStartedPulling="2025-07-06 23:13:02.230734153 +0000 UTC m=+6.462124679" lastFinishedPulling="2025-07-06 23:13:08.045889409 +0000 UTC m=+12.277279895" observedRunningTime="2025-07-06 23:13:13.037869822 +0000 UTC m=+17.269260348" watchObservedRunningTime="2025-07-06 23:13:16.104222096 +0000 UTC m=+20.335612662" Jul 6 23:13:16.224389 systemd-networkd[1397]: lxcc64a044f7bf8: Link UP Jul 6 23:13:16.229567 kernel: eth0: renamed from tmp8c7e4 Jul 6 23:13:16.243993 systemd-networkd[1397]: lxce6d4b2815660: Link UP Jul 6 23:13:16.244219 systemd-networkd[1397]: lxcc64a044f7bf8: Gained carrier Jul 6 23:13:16.247709 kernel: eth0: renamed from tmp312db Jul 6 23:13:16.255986 systemd-networkd[1397]: lxce6d4b2815660: Gained carrier Jul 6 23:13:16.767699 systemd-networkd[1397]: lxc_health: Gained IPv6LL Jul 6 23:13:17.855786 systemd-networkd[1397]: lxcc64a044f7bf8: Gained IPv6LL Jul 6 23:13:18.242396 systemd-networkd[1397]: lxce6d4b2815660: Gained IPv6LL Jul 6 23:13:20.425092 containerd[1504]: time="2025-07-06T23:13:20.424948445Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 6 23:13:20.425092 containerd[1504]: time="2025-07-06T23:13:20.425027932Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 6 23:13:20.425092 containerd[1504]: time="2025-07-06T23:13:20.425044214Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:13:20.425538 containerd[1504]: time="2025-07-06T23:13:20.425128542Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:13:20.456000 systemd[1]: Started cri-containerd-8c7e4d29cd456eb0dce273a30fdaaa00e5fb694982684c18c810790dc235a29d.scope - libcontainer container 8c7e4d29cd456eb0dce273a30fdaaa00e5fb694982684c18c810790dc235a29d. Jul 6 23:13:20.499825 containerd[1504]: time="2025-07-06T23:13:20.499731005Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 6 23:13:20.501748 containerd[1504]: time="2025-07-06T23:13:20.501676952Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 6 23:13:20.501941 containerd[1504]: time="2025-07-06T23:13:20.501917536Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:13:20.502200 containerd[1504]: time="2025-07-06T23:13:20.502157599Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:13:20.527941 systemd[1]: run-containerd-runc-k8s.io-312dbc0a92bee9f61dfb47e77fa3026a724625b40c4b230a0f4619b14bb5ffb3-runc.TYXbnl.mount: Deactivated successfully. Jul 6 23:13:20.538889 systemd[1]: Started cri-containerd-312dbc0a92bee9f61dfb47e77fa3026a724625b40c4b230a0f4619b14bb5ffb3.scope - libcontainer container 312dbc0a92bee9f61dfb47e77fa3026a724625b40c4b230a0f4619b14bb5ffb3. 
Jul 6 23:13:20.550102 containerd[1504]: time="2025-07-06T23:13:20.549909876Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-4rsg6,Uid:1e722ce2-7126-45f9-8e79-a6a377ffab9c,Namespace:kube-system,Attempt:0,} returns sandbox id \"8c7e4d29cd456eb0dce273a30fdaaa00e5fb694982684c18c810790dc235a29d\"" Jul 6 23:13:20.555878 containerd[1504]: time="2025-07-06T23:13:20.555833647Z" level=info msg="CreateContainer within sandbox \"8c7e4d29cd456eb0dce273a30fdaaa00e5fb694982684c18c810790dc235a29d\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 6 23:13:20.588784 containerd[1504]: time="2025-07-06T23:13:20.588744616Z" level=info msg="CreateContainer within sandbox \"8c7e4d29cd456eb0dce273a30fdaaa00e5fb694982684c18c810790dc235a29d\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"339799e84a6fb4bb6974ec4ec1194055942413171e9dfa2cac2f30d11193f8f7\"" Jul 6 23:13:20.591187 containerd[1504]: time="2025-07-06T23:13:20.591074840Z" level=info msg="StartContainer for \"339799e84a6fb4bb6974ec4ec1194055942413171e9dfa2cac2f30d11193f8f7\"" Jul 6 23:13:20.615761 containerd[1504]: time="2025-07-06T23:13:20.615722773Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-rqdmm,Uid:98d0a10f-b83d-4de7-9eea-5e152df14ab5,Namespace:kube-system,Attempt:0,} returns sandbox id \"312dbc0a92bee9f61dfb47e77fa3026a724625b40c4b230a0f4619b14bb5ffb3\"" Jul 6 23:13:20.620050 containerd[1504]: time="2025-07-06T23:13:20.620006706Z" level=info msg="CreateContainer within sandbox \"312dbc0a92bee9f61dfb47e77fa3026a724625b40c4b230a0f4619b14bb5ffb3\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 6 23:13:20.639307 systemd[1]: Started cri-containerd-339799e84a6fb4bb6974ec4ec1194055942413171e9dfa2cac2f30d11193f8f7.scope - libcontainer container 339799e84a6fb4bb6974ec4ec1194055942413171e9dfa2cac2f30d11193f8f7. 
Jul 6 23:13:20.645957 containerd[1504]: time="2025-07-06T23:13:20.645830112Z" level=info msg="CreateContainer within sandbox \"312dbc0a92bee9f61dfb47e77fa3026a724625b40c4b230a0f4619b14bb5ffb3\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"cdfae2b40e3e462d0574068ce53710b2e0ce00b4c6dadf2a95cb2ff556f26c65\"" Jul 6 23:13:20.647432 containerd[1504]: time="2025-07-06T23:13:20.647326056Z" level=info msg="StartContainer for \"cdfae2b40e3e462d0574068ce53710b2e0ce00b4c6dadf2a95cb2ff556f26c65\"" Jul 6 23:13:20.682005 systemd[1]: Started cri-containerd-cdfae2b40e3e462d0574068ce53710b2e0ce00b4c6dadf2a95cb2ff556f26c65.scope - libcontainer container cdfae2b40e3e462d0574068ce53710b2e0ce00b4c6dadf2a95cb2ff556f26c65. Jul 6 23:13:20.689106 containerd[1504]: time="2025-07-06T23:13:20.688962825Z" level=info msg="StartContainer for \"339799e84a6fb4bb6974ec4ec1194055942413171e9dfa2cac2f30d11193f8f7\" returns successfully" Jul 6 23:13:20.724802 containerd[1504]: time="2025-07-06T23:13:20.724748030Z" level=info msg="StartContainer for \"cdfae2b40e3e462d0574068ce53710b2e0ce00b4c6dadf2a95cb2ff556f26c65\" returns successfully" Jul 6 23:13:21.065183 kubelet[2701]: I0706 23:13:21.062969 2701 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-rqdmm" podStartSLOduration=19.062946521 podStartE2EDuration="19.062946521s" podCreationTimestamp="2025-07-06 23:13:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-06 23:13:21.062439354 +0000 UTC m=+25.293830000" watchObservedRunningTime="2025-07-06 23:13:21.062946521 +0000 UTC m=+25.294337047" Jul 6 23:13:31.067262 kubelet[2701]: I0706 23:13:31.067161 2701 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-4rsg6" podStartSLOduration=29.067141382 podStartE2EDuration="29.067141382s" podCreationTimestamp="2025-07-06 23:13:02 +0000 
UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-06 23:13:21.112703341 +0000 UTC m=+25.344093907" watchObservedRunningTime="2025-07-06 23:13:31.067141382 +0000 UTC m=+35.298531908" Jul 6 23:14:17.012895 systemd[1]: Started sshd@8-49.13.31.190:22-195.178.110.125:54214.service - OpenSSH per-connection server daemon (195.178.110.125:54214). Jul 6 23:14:17.130910 sshd[4093]: Connection closed by authenticating user root 195.178.110.125 port 54214 [preauth] Jul 6 23:14:17.134425 systemd[1]: sshd@8-49.13.31.190:22-195.178.110.125:54214.service: Deactivated successfully. Jul 6 23:14:17.161959 systemd[1]: Started sshd@9-49.13.31.190:22-195.178.110.125:54230.service - OpenSSH per-connection server daemon (195.178.110.125:54230). Jul 6 23:14:17.269944 sshd[4098]: Connection closed by authenticating user root 195.178.110.125 port 54230 [preauth] Jul 6 23:14:17.276415 systemd[1]: sshd@9-49.13.31.190:22-195.178.110.125:54230.service: Deactivated successfully. Jul 6 23:14:17.299972 systemd[1]: Started sshd@10-49.13.31.190:22-195.178.110.125:54244.service - OpenSSH per-connection server daemon (195.178.110.125:54244). Jul 6 23:14:17.424255 sshd[4103]: Connection closed by authenticating user root 195.178.110.125 port 54244 [preauth] Jul 6 23:14:17.427288 systemd[1]: sshd@10-49.13.31.190:22-195.178.110.125:54244.service: Deactivated successfully. Jul 6 23:14:17.461806 systemd[1]: Started sshd@11-49.13.31.190:22-195.178.110.125:54256.service - OpenSSH per-connection server daemon (195.178.110.125:54256). Jul 6 23:14:17.572674 sshd[4108]: Connection closed by authenticating user root 195.178.110.125 port 54256 [preauth] Jul 6 23:14:17.577035 systemd[1]: sshd@11-49.13.31.190:22-195.178.110.125:54256.service: Deactivated successfully. Jul 6 23:14:17.607994 systemd[1]: Started sshd@12-49.13.31.190:22-195.178.110.125:54272.service - OpenSSH per-connection server daemon (195.178.110.125:54272). 
Jul 6 23:14:17.714594 sshd[4113]: Connection closed by authenticating user root 195.178.110.125 port 54272 [preauth]
Jul 6 23:14:17.718877 systemd[1]: sshd@12-49.13.31.190:22-195.178.110.125:54272.service: Deactivated successfully.
Jul 6 23:16:25.634196 systemd[1]: Started sshd@13-49.13.31.190:22-103.232.81.5:37344.service - OpenSSH per-connection server daemon (103.232.81.5:37344).
Jul 6 23:16:26.059318 sshd[4138]: Connection closed by 103.232.81.5 port 37344 [preauth]
Jul 6 23:16:26.062047 systemd[1]: sshd@13-49.13.31.190:22-103.232.81.5:37344.service: Deactivated successfully.
Jul 6 23:16:44.371144 systemd[1]: Started sshd@14-49.13.31.190:22-80.94.95.115:18086.service - OpenSSH per-connection server daemon (80.94.95.115:18086).
Jul 6 23:16:46.720848 sshd[4149]: Invalid user admin from 80.94.95.115 port 18086
Jul 6 23:16:46.782983 sshd[4149]: Connection closed by invalid user admin 80.94.95.115 port 18086 [preauth]
Jul 6 23:16:46.786836 systemd[1]: sshd@14-49.13.31.190:22-80.94.95.115:18086.service: Deactivated successfully.
Jul 6 23:17:42.928996 systemd[1]: Started sshd@15-49.13.31.190:22-139.178.89.65:58558.service - OpenSSH per-connection server daemon (139.178.89.65:58558).
Jul 6 23:17:44.009041 sshd[4160]: Accepted publickey for core from 139.178.89.65 port 58558 ssh2: RSA SHA256:3q3uKGA7TZmlUpdAn9FattmwR+Ld0dURBV17/HvM010
Jul 6 23:17:44.011373 sshd-session[4160]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:17:44.021154 systemd-logind[1483]: New session 8 of user core.
Jul 6 23:17:44.026852 systemd[1]: Started session-8.scope - Session 8 of User core.
Jul 6 23:17:44.851255 sshd[4162]: Connection closed by 139.178.89.65 port 58558
Jul 6 23:17:44.851810 sshd-session[4160]: pam_unix(sshd:session): session closed for user core
Jul 6 23:17:44.857442 systemd-logind[1483]: Session 8 logged out. Waiting for processes to exit.
Jul 6 23:17:44.857867 systemd[1]: sshd@15-49.13.31.190:22-139.178.89.65:58558.service: Deactivated successfully.
Jul 6 23:17:44.860433 systemd[1]: session-8.scope: Deactivated successfully.
Jul 6 23:17:44.863656 systemd-logind[1483]: Removed session 8.
Jul 6 23:17:50.047092 systemd[1]: Started sshd@16-49.13.31.190:22-139.178.89.65:54218.service - OpenSSH per-connection server daemon (139.178.89.65:54218).
Jul 6 23:17:51.132608 sshd[4175]: Accepted publickey for core from 139.178.89.65 port 54218 ssh2: RSA SHA256:3q3uKGA7TZmlUpdAn9FattmwR+Ld0dURBV17/HvM010
Jul 6 23:17:51.134536 sshd-session[4175]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:17:51.141350 systemd-logind[1483]: New session 9 of user core.
Jul 6 23:17:51.147861 systemd[1]: Started session-9.scope - Session 9 of User core.
Jul 6 23:17:51.961745 sshd[4178]: Connection closed by 139.178.89.65 port 54218
Jul 6 23:17:51.960740 sshd-session[4175]: pam_unix(sshd:session): session closed for user core
Jul 6 23:17:51.967130 systemd-logind[1483]: Session 9 logged out. Waiting for processes to exit.
Jul 6 23:17:51.968328 systemd[1]: sshd@16-49.13.31.190:22-139.178.89.65:54218.service: Deactivated successfully.
Jul 6 23:17:51.972856 systemd[1]: session-9.scope: Deactivated successfully.
Jul 6 23:17:51.975273 systemd-logind[1483]: Removed session 9.
Jul 6 23:17:57.152025 systemd[1]: Started sshd@17-49.13.31.190:22-139.178.89.65:54228.service - OpenSSH per-connection server daemon (139.178.89.65:54228).
Jul 6 23:17:58.233196 sshd[4193]: Accepted publickey for core from 139.178.89.65 port 54228 ssh2: RSA SHA256:3q3uKGA7TZmlUpdAn9FattmwR+Ld0dURBV17/HvM010
Jul 6 23:17:58.235386 sshd-session[4193]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:17:58.243585 systemd-logind[1483]: New session 10 of user core.
Jul 6 23:17:58.249895 systemd[1]: Started session-10.scope - Session 10 of User core.
Jul 6 23:17:59.061667 sshd[4195]: Connection closed by 139.178.89.65 port 54228
Jul 6 23:17:59.062673 sshd-session[4193]: pam_unix(sshd:session): session closed for user core
Jul 6 23:17:59.067675 systemd[1]: sshd@17-49.13.31.190:22-139.178.89.65:54228.service: Deactivated successfully.
Jul 6 23:17:59.070018 systemd[1]: session-10.scope: Deactivated successfully.
Jul 6 23:17:59.071706 systemd-logind[1483]: Session 10 logged out. Waiting for processes to exit.
Jul 6 23:17:59.073428 systemd-logind[1483]: Removed session 10.
Jul 6 23:17:59.258897 systemd[1]: Started sshd@18-49.13.31.190:22-139.178.89.65:54242.service - OpenSSH per-connection server daemon (139.178.89.65:54242).
Jul 6 23:18:00.368177 sshd[4208]: Accepted publickey for core from 139.178.89.65 port 54242 ssh2: RSA SHA256:3q3uKGA7TZmlUpdAn9FattmwR+Ld0dURBV17/HvM010
Jul 6 23:18:00.370271 sshd-session[4208]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:18:00.377475 systemd-logind[1483]: New session 11 of user core.
Jul 6 23:18:00.384954 systemd[1]: Started session-11.scope - Session 11 of User core.
Jul 6 23:18:01.271070 sshd[4210]: Connection closed by 139.178.89.65 port 54242
Jul 6 23:18:01.271947 sshd-session[4208]: pam_unix(sshd:session): session closed for user core
Jul 6 23:18:01.276710 systemd[1]: sshd@18-49.13.31.190:22-139.178.89.65:54242.service: Deactivated successfully.
Jul 6 23:18:01.280867 systemd[1]: session-11.scope: Deactivated successfully.
Jul 6 23:18:01.282764 systemd-logind[1483]: Session 11 logged out. Waiting for processes to exit.
Jul 6 23:18:01.283869 systemd-logind[1483]: Removed session 11.
Jul 6 23:18:01.467018 systemd[1]: Started sshd@19-49.13.31.190:22-139.178.89.65:60810.service - OpenSSH per-connection server daemon (139.178.89.65:60810).
Jul 6 23:18:02.597459 sshd[4220]: Accepted publickey for core from 139.178.89.65 port 60810 ssh2: RSA SHA256:3q3uKGA7TZmlUpdAn9FattmwR+Ld0dURBV17/HvM010
Jul 6 23:18:02.599253 sshd-session[4220]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:18:02.604308 systemd-logind[1483]: New session 12 of user core.
Jul 6 23:18:02.612155 systemd[1]: Started session-12.scope - Session 12 of User core.
Jul 6 23:18:03.428306 sshd[4225]: Connection closed by 139.178.89.65 port 60810
Jul 6 23:18:03.430900 sshd-session[4220]: pam_unix(sshd:session): session closed for user core
Jul 6 23:18:03.437710 systemd[1]: sshd@19-49.13.31.190:22-139.178.89.65:60810.service: Deactivated successfully.
Jul 6 23:18:03.441405 systemd[1]: session-12.scope: Deactivated successfully.
Jul 6 23:18:03.442615 systemd-logind[1483]: Session 12 logged out. Waiting for processes to exit.
Jul 6 23:18:03.443609 systemd-logind[1483]: Removed session 12.
Jul 6 23:18:08.624029 systemd[1]: Started sshd@20-49.13.31.190:22-139.178.89.65:60816.service - OpenSSH per-connection server daemon (139.178.89.65:60816).
Jul 6 23:18:09.699218 sshd[4236]: Accepted publickey for core from 139.178.89.65 port 60816 ssh2: RSA SHA256:3q3uKGA7TZmlUpdAn9FattmwR+Ld0dURBV17/HvM010
Jul 6 23:18:09.701774 sshd-session[4236]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:18:09.707588 systemd-logind[1483]: New session 13 of user core.
Jul 6 23:18:09.714817 systemd[1]: Started session-13.scope - Session 13 of User core.
Jul 6 23:18:10.517110 sshd[4238]: Connection closed by 139.178.89.65 port 60816
Jul 6 23:18:10.516937 sshd-session[4236]: pam_unix(sshd:session): session closed for user core
Jul 6 23:18:10.524075 systemd[1]: sshd@20-49.13.31.190:22-139.178.89.65:60816.service: Deactivated successfully.
Jul 6 23:18:10.526895 systemd[1]: session-13.scope: Deactivated successfully.
Jul 6 23:18:10.528678 systemd-logind[1483]: Session 13 logged out. Waiting for processes to exit.
Jul 6 23:18:10.530242 systemd-logind[1483]: Removed session 13.
Jul 6 23:18:10.708950 systemd[1]: Started sshd@21-49.13.31.190:22-139.178.89.65:58740.service - OpenSSH per-connection server daemon (139.178.89.65:58740).
Jul 6 23:18:11.791383 sshd[4250]: Accepted publickey for core from 139.178.89.65 port 58740 ssh2: RSA SHA256:3q3uKGA7TZmlUpdAn9FattmwR+Ld0dURBV17/HvM010
Jul 6 23:18:11.793310 sshd-session[4250]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:18:11.800100 systemd-logind[1483]: New session 14 of user core.
Jul 6 23:18:11.809833 systemd[1]: Started session-14.scope - Session 14 of User core.
Jul 6 23:18:12.654425 sshd[4252]: Connection closed by 139.178.89.65 port 58740
Jul 6 23:18:12.655218 sshd-session[4250]: pam_unix(sshd:session): session closed for user core
Jul 6 23:18:12.659962 systemd-logind[1483]: Session 14 logged out. Waiting for processes to exit.
Jul 6 23:18:12.660859 systemd[1]: sshd@21-49.13.31.190:22-139.178.89.65:58740.service: Deactivated successfully.
Jul 6 23:18:12.664145 systemd[1]: session-14.scope: Deactivated successfully.
Jul 6 23:18:12.665842 systemd-logind[1483]: Removed session 14.
Jul 6 23:18:12.859037 systemd[1]: Started sshd@22-49.13.31.190:22-139.178.89.65:58752.service - OpenSSH per-connection server daemon (139.178.89.65:58752).
Jul 6 23:18:13.928195 sshd[4262]: Accepted publickey for core from 139.178.89.65 port 58752 ssh2: RSA SHA256:3q3uKGA7TZmlUpdAn9FattmwR+Ld0dURBV17/HvM010
Jul 6 23:18:13.930138 sshd-session[4262]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:18:13.937004 systemd-logind[1483]: New session 15 of user core.
Jul 6 23:18:13.942904 systemd[1]: Started session-15.scope - Session 15 of User core.
Jul 6 23:18:15.731257 sshd[4264]: Connection closed by 139.178.89.65 port 58752
Jul 6 23:18:15.732385 sshd-session[4262]: pam_unix(sshd:session): session closed for user core
Jul 6 23:18:15.738030 systemd[1]: sshd@22-49.13.31.190:22-139.178.89.65:58752.service: Deactivated successfully.
Jul 6 23:18:15.741909 systemd[1]: session-15.scope: Deactivated successfully.
Jul 6 23:18:15.743175 systemd-logind[1483]: Session 15 logged out. Waiting for processes to exit.
Jul 6 23:18:15.744409 systemd-logind[1483]: Removed session 15.
Jul 6 23:18:15.928562 systemd[1]: Started sshd@23-49.13.31.190:22-139.178.89.65:58760.service - OpenSSH per-connection server daemon (139.178.89.65:58760).
Jul 6 23:18:17.011418 sshd[4282]: Accepted publickey for core from 139.178.89.65 port 58760 ssh2: RSA SHA256:3q3uKGA7TZmlUpdAn9FattmwR+Ld0dURBV17/HvM010
Jul 6 23:18:17.013648 sshd-session[4282]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:18:17.019418 systemd-logind[1483]: New session 16 of user core.
Jul 6 23:18:17.027857 systemd[1]: Started session-16.scope - Session 16 of User core.
Jul 6 23:18:17.970935 sshd[4284]: Connection closed by 139.178.89.65 port 58760
Jul 6 23:18:17.970370 sshd-session[4282]: pam_unix(sshd:session): session closed for user core
Jul 6 23:18:17.976208 systemd-logind[1483]: Session 16 logged out. Waiting for processes to exit.
Jul 6 23:18:17.977056 systemd[1]: sshd@23-49.13.31.190:22-139.178.89.65:58760.service: Deactivated successfully.
Jul 6 23:18:17.980151 systemd[1]: session-16.scope: Deactivated successfully.
Jul 6 23:18:17.983353 systemd-logind[1483]: Removed session 16.
Jul 6 23:18:18.169048 systemd[1]: Started sshd@24-49.13.31.190:22-139.178.89.65:58768.service - OpenSSH per-connection server daemon (139.178.89.65:58768).
Jul 6 23:18:19.267443 sshd[4294]: Accepted publickey for core from 139.178.89.65 port 58768 ssh2: RSA SHA256:3q3uKGA7TZmlUpdAn9FattmwR+Ld0dURBV17/HvM010
Jul 6 23:18:19.269411 sshd-session[4294]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:18:19.274707 systemd-logind[1483]: New session 17 of user core.
Jul 6 23:18:19.285984 systemd[1]: Started session-17.scope - Session 17 of User core.
Jul 6 23:18:20.112619 sshd[4296]: Connection closed by 139.178.89.65 port 58768
Jul 6 23:18:20.113368 sshd-session[4294]: pam_unix(sshd:session): session closed for user core
Jul 6 23:18:20.120458 systemd[1]: sshd@24-49.13.31.190:22-139.178.89.65:58768.service: Deactivated successfully.
Jul 6 23:18:20.124115 systemd[1]: session-17.scope: Deactivated successfully.
Jul 6 23:18:20.126458 systemd-logind[1483]: Session 17 logged out. Waiting for processes to exit.
Jul 6 23:18:20.127695 systemd-logind[1483]: Removed session 17.
Jul 6 23:18:21.832896 update_engine[1485]: I20250706 23:18:21.832495 1485 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs
Jul 6 23:18:21.832896 update_engine[1485]: I20250706 23:18:21.832634 1485 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs
Jul 6 23:18:21.832896 update_engine[1485]: I20250706 23:18:21.832868 1485 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs
Jul 6 23:18:21.833374 update_engine[1485]: I20250706 23:18:21.833259 1485 omaha_request_params.cc:62] Current group set to stable
Jul 6 23:18:21.833374 update_engine[1485]: I20250706 23:18:21.833353 1485 update_attempter.cc:499] Already updated boot flags. Skipping.
Jul 6 23:18:21.833374 update_engine[1485]: I20250706 23:18:21.833361 1485 update_attempter.cc:643] Scheduling an action processor start.
Jul 6 23:18:21.833445 update_engine[1485]: I20250706 23:18:21.833377 1485 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Jul 6 23:18:21.833445 update_engine[1485]: I20250706 23:18:21.833413 1485 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs
Jul 6 23:18:21.833494 update_engine[1485]: I20250706 23:18:21.833459 1485 omaha_request_action.cc:271] Posting an Omaha request to disabled
Jul 6 23:18:21.833494 update_engine[1485]: I20250706 23:18:21.833467 1485 omaha_request_action.cc:272] Request:
Jul 6 23:18:21.833494 update_engine[1485]:
Jul 6 23:18:21.833494 update_engine[1485]:
Jul 6 23:18:21.833494 update_engine[1485]:
Jul 6 23:18:21.833494 update_engine[1485]:
Jul 6 23:18:21.833494 update_engine[1485]:
Jul 6 23:18:21.833494 update_engine[1485]:
Jul 6 23:18:21.833494 update_engine[1485]:
Jul 6 23:18:21.833494 update_engine[1485]:
Jul 6 23:18:21.833494 update_engine[1485]: I20250706 23:18:21.833474 1485 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Jul 6 23:18:21.835920 update_engine[1485]: I20250706 23:18:21.835850 1485 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Jul 6 23:18:21.836384 update_engine[1485]: I20250706 23:18:21.836283 1485 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Jul 6 23:18:21.836472 locksmithd[1511]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0
Jul 6 23:18:21.836931 update_engine[1485]: E20250706 23:18:21.836820 1485 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Jul 6 23:18:21.836931 update_engine[1485]: I20250706 23:18:21.836890 1485 libcurl_http_fetcher.cc:283] No HTTP response, retry 1
Jul 6 23:18:25.303258 systemd[1]: Started sshd@25-49.13.31.190:22-139.178.89.65:35402.service - OpenSSH per-connection server daemon (139.178.89.65:35402).
Jul 6 23:18:26.396315 sshd[4310]: Accepted publickey for core from 139.178.89.65 port 35402 ssh2: RSA SHA256:3q3uKGA7TZmlUpdAn9FattmwR+Ld0dURBV17/HvM010
Jul 6 23:18:26.398507 sshd-session[4310]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:18:26.404748 systemd-logind[1483]: New session 18 of user core.
Jul 6 23:18:26.412837 systemd[1]: Started session-18.scope - Session 18 of User core.
Jul 6 23:18:27.218788 sshd[4312]: Connection closed by 139.178.89.65 port 35402
Jul 6 23:18:27.218475 sshd-session[4310]: pam_unix(sshd:session): session closed for user core
Jul 6 23:18:27.224371 systemd[1]: sshd@25-49.13.31.190:22-139.178.89.65:35402.service: Deactivated successfully.
Jul 6 23:18:27.229421 systemd[1]: session-18.scope: Deactivated successfully.
Jul 6 23:18:27.230621 systemd-logind[1483]: Session 18 logged out. Waiting for processes to exit.
Jul 6 23:18:27.231792 systemd-logind[1483]: Removed session 18.
Jul 6 23:18:31.832180 update_engine[1485]: I20250706 23:18:31.832036 1485 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Jul 6 23:18:31.832670 update_engine[1485]: I20250706 23:18:31.832351 1485 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Jul 6 23:18:31.832793 update_engine[1485]: I20250706 23:18:31.832698 1485 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Jul 6 23:18:31.833191 update_engine[1485]: E20250706 23:18:31.833138 1485 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Jul 6 23:18:31.833246 update_engine[1485]: I20250706 23:18:31.833206 1485 libcurl_http_fetcher.cc:283] No HTTP response, retry 2
Jul 6 23:18:32.414095 systemd[1]: Started sshd@26-49.13.31.190:22-139.178.89.65:51224.service - OpenSSH per-connection server daemon (139.178.89.65:51224).
Jul 6 23:18:33.498822 sshd[4325]: Accepted publickey for core from 139.178.89.65 port 51224 ssh2: RSA SHA256:3q3uKGA7TZmlUpdAn9FattmwR+Ld0dURBV17/HvM010
Jul 6 23:18:33.500422 sshd-session[4325]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:18:33.507349 systemd-logind[1483]: New session 19 of user core.
Jul 6 23:18:33.512859 systemd[1]: Started session-19.scope - Session 19 of User core.
Jul 6 23:18:34.328808 sshd[4329]: Connection closed by 139.178.89.65 port 51224
Jul 6 23:18:34.327651 sshd-session[4325]: pam_unix(sshd:session): session closed for user core
Jul 6 23:18:34.333110 systemd[1]: sshd@26-49.13.31.190:22-139.178.89.65:51224.service: Deactivated successfully.
Jul 6 23:18:34.335837 systemd[1]: session-19.scope: Deactivated successfully.
Jul 6 23:18:34.337005 systemd-logind[1483]: Session 19 logged out. Waiting for processes to exit.
Jul 6 23:18:34.340232 systemd-logind[1483]: Removed session 19.
Jul 6 23:18:34.522032 systemd[1]: Started sshd@27-49.13.31.190:22-139.178.89.65:51230.service - OpenSSH per-connection server daemon (139.178.89.65:51230).
Jul 6 23:18:35.604298 sshd[4341]: Accepted publickey for core from 139.178.89.65 port 51230 ssh2: RSA SHA256:3q3uKGA7TZmlUpdAn9FattmwR+Ld0dURBV17/HvM010
Jul 6 23:18:35.606282 sshd-session[4341]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:18:35.612322 systemd-logind[1483]: New session 20 of user core.
Jul 6 23:18:35.618583 systemd[1]: Started session-20.scope - Session 20 of User core.
Jul 6 23:18:38.258949 containerd[1504]: time="2025-07-06T23:18:38.258778842Z" level=info msg="StopContainer for \"28414dbd131fc934091e4811a62c317d0d2523c319c6e79b63596cffccac4cf8\" with timeout 30 (s)"
Jul 6 23:18:38.262555 containerd[1504]: time="2025-07-06T23:18:38.262440308Z" level=info msg="Stop container \"28414dbd131fc934091e4811a62c317d0d2523c319c6e79b63596cffccac4cf8\" with signal terminated"
Jul 6 23:18:38.289021 containerd[1504]: time="2025-07-06T23:18:38.288971870Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jul 6 23:18:38.309078 containerd[1504]: time="2025-07-06T23:18:38.308702669Z" level=info msg="StopContainer for \"3f313c91c29354e025562271c6139165e1bbd460d57c6f6213e0bd81cebc05e7\" with timeout 2 (s)"
Jul 6 23:18:38.309826 containerd[1504]: time="2025-07-06T23:18:38.309801609Z" level=info msg="Stop container \"3f313c91c29354e025562271c6139165e1bbd460d57c6f6213e0bd81cebc05e7\" with signal terminated"
Jul 6 23:18:38.316036 systemd[1]: cri-containerd-28414dbd131fc934091e4811a62c317d0d2523c319c6e79b63596cffccac4cf8.scope: Deactivated successfully.
Jul 6 23:18:38.329054 systemd-networkd[1397]: lxc_health: Link DOWN
Jul 6 23:18:38.329069 systemd-networkd[1397]: lxc_health: Lost carrier
Jul 6 23:18:38.364059 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-28414dbd131fc934091e4811a62c317d0d2523c319c6e79b63596cffccac4cf8-rootfs.mount: Deactivated successfully.
Jul 6 23:18:38.367153 systemd[1]: cri-containerd-3f313c91c29354e025562271c6139165e1bbd460d57c6f6213e0bd81cebc05e7.scope: Deactivated successfully.
Jul 6 23:18:38.368826 systemd[1]: cri-containerd-3f313c91c29354e025562271c6139165e1bbd460d57c6f6213e0bd81cebc05e7.scope: Consumed 8.343s CPU time, 125.3M memory peak, 136K read from disk, 12.9M written to disk.
Jul 6 23:18:38.380603 containerd[1504]: time="2025-07-06T23:18:38.379938483Z" level=info msg="shim disconnected" id=28414dbd131fc934091e4811a62c317d0d2523c319c6e79b63596cffccac4cf8 namespace=k8s.io
Jul 6 23:18:38.380603 containerd[1504]: time="2025-07-06T23:18:38.380300889Z" level=warning msg="cleaning up after shim disconnected" id=28414dbd131fc934091e4811a62c317d0d2523c319c6e79b63596cffccac4cf8 namespace=k8s.io
Jul 6 23:18:38.380603 containerd[1504]: time="2025-07-06T23:18:38.380312490Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 6 23:18:38.403956 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3f313c91c29354e025562271c6139165e1bbd460d57c6f6213e0bd81cebc05e7-rootfs.mount: Deactivated successfully.
Jul 6 23:18:38.409738 containerd[1504]: time="2025-07-06T23:18:38.409608182Z" level=info msg="shim disconnected" id=3f313c91c29354e025562271c6139165e1bbd460d57c6f6213e0bd81cebc05e7 namespace=k8s.io
Jul 6 23:18:38.409738 containerd[1504]: time="2025-07-06T23:18:38.409672783Z" level=warning msg="cleaning up after shim disconnected" id=3f313c91c29354e025562271c6139165e1bbd460d57c6f6213e0bd81cebc05e7 namespace=k8s.io
Jul 6 23:18:38.409738 containerd[1504]: time="2025-07-06T23:18:38.409681063Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 6 23:18:38.414821 containerd[1504]: time="2025-07-06T23:18:38.414673674Z" level=info msg="StopContainer for \"28414dbd131fc934091e4811a62c317d0d2523c319c6e79b63596cffccac4cf8\" returns successfully"
Jul 6 23:18:38.416192 containerd[1504]: time="2025-07-06T23:18:38.415943137Z" level=info msg="StopPodSandbox for \"eee0137e2f644c279974c4c69541f9ae085c4e810ad500dd5358ab359ed0458f\""
Jul 6 23:18:38.416192 containerd[1504]: time="2025-07-06T23:18:38.416020898Z" level=info msg="Container to stop \"28414dbd131fc934091e4811a62c317d0d2523c319c6e79b63596cffccac4cf8\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 6 23:18:38.421207 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-eee0137e2f644c279974c4c69541f9ae085c4e810ad500dd5358ab359ed0458f-shm.mount: Deactivated successfully.
Jul 6 23:18:38.430460 systemd[1]: cri-containerd-eee0137e2f644c279974c4c69541f9ae085c4e810ad500dd5358ab359ed0458f.scope: Deactivated successfully.
Jul 6 23:18:38.446597 containerd[1504]: time="2025-07-06T23:18:38.445757119Z" level=info msg="StopContainer for \"3f313c91c29354e025562271c6139165e1bbd460d57c6f6213e0bd81cebc05e7\" returns successfully"
Jul 6 23:18:38.447102 containerd[1504]: time="2025-07-06T23:18:38.447050622Z" level=info msg="StopPodSandbox for \"e8f146d015dc94751a8bd9bbf1e14aa055896f2a1a8974a8bdb583c12ce57043\""
Jul 6 23:18:38.447463 containerd[1504]: time="2025-07-06T23:18:38.447313667Z" level=info msg="Container to stop \"1f478a3307cc11954e953a7767be285703fe17ff314b6e2f8772c9bbc99d5a35\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 6 23:18:38.447463 containerd[1504]: time="2025-07-06T23:18:38.447340868Z" level=info msg="Container to stop \"3f313c91c29354e025562271c6139165e1bbd460d57c6f6213e0bd81cebc05e7\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 6 23:18:38.448759 containerd[1504]: time="2025-07-06T23:18:38.448460288Z" level=info msg="Container to stop \"6c2b38de0cef84170bf239afb7106304f3e661ef2eef628e396ecbad838f4031\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 6 23:18:38.448759 containerd[1504]: time="2025-07-06T23:18:38.448487968Z" level=info msg="Container to stop \"77e45151578947b8133d23a376cdaef611a135f1dab6f7c7c63cf757c97eac30\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 6 23:18:38.448759 containerd[1504]: time="2025-07-06T23:18:38.448500129Z" level=info msg="Container to stop \"ac8e6579538cb29a81d27ffa7eba31f1e1b857bb83eb72f542cc1e1a3525c069\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 6 23:18:38.450925 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e8f146d015dc94751a8bd9bbf1e14aa055896f2a1a8974a8bdb583c12ce57043-shm.mount: Deactivated successfully.
Jul 6 23:18:38.464263 systemd[1]: cri-containerd-e8f146d015dc94751a8bd9bbf1e14aa055896f2a1a8974a8bdb583c12ce57043.scope: Deactivated successfully.
Jul 6 23:18:38.478366 containerd[1504]: time="2025-07-06T23:18:38.478095266Z" level=info msg="shim disconnected" id=eee0137e2f644c279974c4c69541f9ae085c4e810ad500dd5358ab359ed0458f namespace=k8s.io
Jul 6 23:18:38.478366 containerd[1504]: time="2025-07-06T23:18:38.478176788Z" level=warning msg="cleaning up after shim disconnected" id=eee0137e2f644c279974c4c69541f9ae085c4e810ad500dd5358ab359ed0458f namespace=k8s.io
Jul 6 23:18:38.478366 containerd[1504]: time="2025-07-06T23:18:38.478190108Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 6 23:18:38.501850 containerd[1504]: time="2025-07-06T23:18:38.501794257Z" level=info msg="TearDown network for sandbox \"eee0137e2f644c279974c4c69541f9ae085c4e810ad500dd5358ab359ed0458f\" successfully"
Jul 6 23:18:38.502641 containerd[1504]: time="2025-07-06T23:18:38.502565831Z" level=info msg="StopPodSandbox for \"eee0137e2f644c279974c4c69541f9ae085c4e810ad500dd5358ab359ed0458f\" returns successfully"
Jul 6 23:18:38.520794 containerd[1504]: time="2025-07-06T23:18:38.519781504Z" level=info msg="shim disconnected" id=e8f146d015dc94751a8bd9bbf1e14aa055896f2a1a8974a8bdb583c12ce57043 namespace=k8s.io
Jul 6 23:18:38.520794 containerd[1504]: time="2025-07-06T23:18:38.519860945Z" level=warning msg="cleaning up after shim disconnected" id=e8f146d015dc94751a8bd9bbf1e14aa055896f2a1a8974a8bdb583c12ce57043 namespace=k8s.io
Jul 6 23:18:38.520794 containerd[1504]: time="2025-07-06T23:18:38.519881705Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 6 23:18:38.535248 containerd[1504]: time="2025-07-06T23:18:38.535091902Z" level=info msg="TearDown network for sandbox \"e8f146d015dc94751a8bd9bbf1e14aa055896f2a1a8974a8bdb583c12ce57043\" successfully"
Jul 6 23:18:38.535248 containerd[1504]: time="2025-07-06T23:18:38.535130423Z" level=info msg="StopPodSandbox for \"e8f146d015dc94751a8bd9bbf1e14aa055896f2a1a8974a8bdb583c12ce57043\" returns successfully"
Jul 6 23:18:38.624034 kubelet[2701]: I0706 23:18:38.622244 2701 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/104def78-52ea-4efd-93f4-d3d940ae9b38-cilium-cgroup\") pod \"104def78-52ea-4efd-93f4-d3d940ae9b38\" (UID: \"104def78-52ea-4efd-93f4-d3d940ae9b38\") "
Jul 6 23:18:38.624034 kubelet[2701]: I0706 23:18:38.622309 2701 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/104def78-52ea-4efd-93f4-d3d940ae9b38-cni-path\") pod \"104def78-52ea-4efd-93f4-d3d940ae9b38\" (UID: \"104def78-52ea-4efd-93f4-d3d940ae9b38\") "
Jul 6 23:18:38.624034 kubelet[2701]: I0706 23:18:38.622341 2701 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/104def78-52ea-4efd-93f4-d3d940ae9b38-lib-modules\") pod \"104def78-52ea-4efd-93f4-d3d940ae9b38\" (UID: \"104def78-52ea-4efd-93f4-d3d940ae9b38\") "
Jul 6 23:18:38.624034 kubelet[2701]: I0706 23:18:38.622382 2701 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fskcw\" (UniqueName: \"kubernetes.io/projected/104def78-52ea-4efd-93f4-d3d940ae9b38-kube-api-access-fskcw\") pod \"104def78-52ea-4efd-93f4-d3d940ae9b38\" (UID: \"104def78-52ea-4efd-93f4-d3d940ae9b38\") "
Jul 6 23:18:38.624034 kubelet[2701]: I0706 23:18:38.622415 2701 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/104def78-52ea-4efd-93f4-d3d940ae9b38-host-proc-sys-kernel\") pod \"104def78-52ea-4efd-93f4-d3d940ae9b38\" (UID: \"104def78-52ea-4efd-93f4-d3d940ae9b38\") "
Jul 6 23:18:38.624034 kubelet[2701]: I0706 23:18:38.622452 2701 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/104def78-52ea-4efd-93f4-d3d940ae9b38-clustermesh-secrets\") pod \"104def78-52ea-4efd-93f4-d3d940ae9b38\" (UID: \"104def78-52ea-4efd-93f4-d3d940ae9b38\") "
Jul 6 23:18:38.624979 kubelet[2701]: I0706 23:18:38.622479 2701 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/104def78-52ea-4efd-93f4-d3d940ae9b38-bpf-maps\") pod \"104def78-52ea-4efd-93f4-d3d940ae9b38\" (UID: \"104def78-52ea-4efd-93f4-d3d940ae9b38\") "
Jul 6 23:18:38.624979 kubelet[2701]: I0706 23:18:38.622550 2701 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/104def78-52ea-4efd-93f4-d3d940ae9b38-cilium-config-path\") pod \"104def78-52ea-4efd-93f4-d3d940ae9b38\" (UID: \"104def78-52ea-4efd-93f4-d3d940ae9b38\") "
Jul 6 23:18:38.624979 kubelet[2701]: I0706 23:18:38.622580 2701 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/104def78-52ea-4efd-93f4-d3d940ae9b38-hostproc\") pod \"104def78-52ea-4efd-93f4-d3d940ae9b38\" (UID: \"104def78-52ea-4efd-93f4-d3d940ae9b38\") "
Jul 6 23:18:38.624979 kubelet[2701]: I0706 23:18:38.622612 2701 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vdrq6\" (UniqueName: \"kubernetes.io/projected/b7be8a7a-8777-4cd2-8363-3f5e9cd2b0db-kube-api-access-vdrq6\") pod \"b7be8a7a-8777-4cd2-8363-3f5e9cd2b0db\" (UID: \"b7be8a7a-8777-4cd2-8363-3f5e9cd2b0db\") "
Jul 6 23:18:38.624979 kubelet[2701]: I0706 23:18:38.622647 2701 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/104def78-52ea-4efd-93f4-d3d940ae9b38-host-proc-sys-net\") pod \"104def78-52ea-4efd-93f4-d3d940ae9b38\" (UID: \"104def78-52ea-4efd-93f4-d3d940ae9b38\") "
Jul 6 23:18:38.624979 kubelet[2701]: I0706 23:18:38.622678 2701 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b7be8a7a-8777-4cd2-8363-3f5e9cd2b0db-cilium-config-path\") pod \"b7be8a7a-8777-4cd2-8363-3f5e9cd2b0db\" (UID: \"b7be8a7a-8777-4cd2-8363-3f5e9cd2b0db\") "
Jul 6 23:18:38.625260 kubelet[2701]: I0706 23:18:38.622707 2701 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/104def78-52ea-4efd-93f4-d3d940ae9b38-etc-cni-netd\") pod \"104def78-52ea-4efd-93f4-d3d940ae9b38\" (UID: \"104def78-52ea-4efd-93f4-d3d940ae9b38\") "
Jul 6 23:18:38.625260 kubelet[2701]: I0706 23:18:38.622745 2701 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/104def78-52ea-4efd-93f4-d3d940ae9b38-hubble-tls\") pod \"104def78-52ea-4efd-93f4-d3d940ae9b38\" (UID: \"104def78-52ea-4efd-93f4-d3d940ae9b38\") "
Jul 6 23:18:38.625260 kubelet[2701]: I0706 23:18:38.622795 2701 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/104def78-52ea-4efd-93f4-d3d940ae9b38-xtables-lock\") pod \"104def78-52ea-4efd-93f4-d3d940ae9b38\" (UID: \"104def78-52ea-4efd-93f4-d3d940ae9b38\") "
Jul 6 23:18:38.625260 kubelet[2701]: I0706 23:18:38.622831 2701 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/104def78-52ea-4efd-93f4-d3d940ae9b38-cilium-run\") pod \"104def78-52ea-4efd-93f4-d3d940ae9b38\" (UID: \"104def78-52ea-4efd-93f4-d3d940ae9b38\") "
Jul 6 23:18:38.625260 kubelet[2701]: I0706 23:18:38.622943 2701 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/104def78-52ea-4efd-93f4-d3d940ae9b38-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "104def78-52ea-4efd-93f4-d3d940ae9b38" (UID: "104def78-52ea-4efd-93f4-d3d940ae9b38"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jul 6 23:18:38.625260 kubelet[2701]: I0706 23:18:38.622999 2701 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/104def78-52ea-4efd-93f4-d3d940ae9b38-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "104def78-52ea-4efd-93f4-d3d940ae9b38" (UID: "104def78-52ea-4efd-93f4-d3d940ae9b38"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jul 6 23:18:38.625536 kubelet[2701]: I0706 23:18:38.623030 2701 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/104def78-52ea-4efd-93f4-d3d940ae9b38-cni-path" (OuterVolumeSpecName: "cni-path") pod "104def78-52ea-4efd-93f4-d3d940ae9b38" (UID: "104def78-52ea-4efd-93f4-d3d940ae9b38"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jul 6 23:18:38.625536 kubelet[2701]: I0706 23:18:38.623053 2701 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/104def78-52ea-4efd-93f4-d3d940ae9b38-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "104def78-52ea-4efd-93f4-d3d940ae9b38" (UID: "104def78-52ea-4efd-93f4-d3d940ae9b38"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jul 6 23:18:38.627551 kubelet[2701]: I0706 23:18:38.626483 2701 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/104def78-52ea-4efd-93f4-d3d940ae9b38-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "104def78-52ea-4efd-93f4-d3d940ae9b38" (UID: "104def78-52ea-4efd-93f4-d3d940ae9b38"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jul 6 23:18:38.628172 kubelet[2701]: I0706 23:18:38.628123 2701 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/104def78-52ea-4efd-93f4-d3d940ae9b38-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "104def78-52ea-4efd-93f4-d3d940ae9b38" (UID: "104def78-52ea-4efd-93f4-d3d940ae9b38"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jul 6 23:18:38.628499 kubelet[2701]: I0706 23:18:38.628470 2701 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/104def78-52ea-4efd-93f4-d3d940ae9b38-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "104def78-52ea-4efd-93f4-d3d940ae9b38" (UID: "104def78-52ea-4efd-93f4-d3d940ae9b38"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jul 6 23:18:38.628862 kubelet[2701]: I0706 23:18:38.628838 2701 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/104def78-52ea-4efd-93f4-d3d940ae9b38-hostproc" (OuterVolumeSpecName: "hostproc") pod "104def78-52ea-4efd-93f4-d3d940ae9b38" (UID: "104def78-52ea-4efd-93f4-d3d940ae9b38"). InnerVolumeSpecName "hostproc".
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 6 23:18:38.629008 kubelet[2701]: I0706 23:18:38.628972 2701 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/104def78-52ea-4efd-93f4-d3d940ae9b38-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "104def78-52ea-4efd-93f4-d3d940ae9b38" (UID: "104def78-52ea-4efd-93f4-d3d940ae9b38"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 6 23:18:38.629230 kubelet[2701]: I0706 23:18:38.629199 2701 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/104def78-52ea-4efd-93f4-d3d940ae9b38-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "104def78-52ea-4efd-93f4-d3d940ae9b38" (UID: "104def78-52ea-4efd-93f4-d3d940ae9b38"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 6 23:18:38.632231 kubelet[2701]: I0706 23:18:38.632192 2701 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b7be8a7a-8777-4cd2-8363-3f5e9cd2b0db-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "b7be8a7a-8777-4cd2-8363-3f5e9cd2b0db" (UID: "b7be8a7a-8777-4cd2-8363-3f5e9cd2b0db"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jul 6 23:18:38.634230 kubelet[2701]: I0706 23:18:38.633943 2701 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b7be8a7a-8777-4cd2-8363-3f5e9cd2b0db-kube-api-access-vdrq6" (OuterVolumeSpecName: "kube-api-access-vdrq6") pod "b7be8a7a-8777-4cd2-8363-3f5e9cd2b0db" (UID: "b7be8a7a-8777-4cd2-8363-3f5e9cd2b0db"). InnerVolumeSpecName "kube-api-access-vdrq6". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 6 23:18:38.634452 kubelet[2701]: I0706 23:18:38.634419 2701 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/104def78-52ea-4efd-93f4-d3d940ae9b38-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "104def78-52ea-4efd-93f4-d3d940ae9b38" (UID: "104def78-52ea-4efd-93f4-d3d940ae9b38"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jul 6 23:18:38.634679 kubelet[2701]: I0706 23:18:38.634024 2701 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/104def78-52ea-4efd-93f4-d3d940ae9b38-kube-api-access-fskcw" (OuterVolumeSpecName: "kube-api-access-fskcw") pod "104def78-52ea-4efd-93f4-d3d940ae9b38" (UID: "104def78-52ea-4efd-93f4-d3d940ae9b38"). InnerVolumeSpecName "kube-api-access-fskcw". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 6 23:18:38.635139 kubelet[2701]: I0706 23:18:38.635114 2701 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/104def78-52ea-4efd-93f4-d3d940ae9b38-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "104def78-52ea-4efd-93f4-d3d940ae9b38" (UID: "104def78-52ea-4efd-93f4-d3d940ae9b38"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jul 6 23:18:38.636171 kubelet[2701]: I0706 23:18:38.636126 2701 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/104def78-52ea-4efd-93f4-d3d940ae9b38-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "104def78-52ea-4efd-93f4-d3d940ae9b38" (UID: "104def78-52ea-4efd-93f4-d3d940ae9b38"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 6 23:18:38.723713 kubelet[2701]: I0706 23:18:38.723646 2701 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/104def78-52ea-4efd-93f4-d3d940ae9b38-etc-cni-netd\") on node \"ci-4230-2-1-3-0a35d13a56\" DevicePath \"\"" Jul 6 23:18:38.723713 kubelet[2701]: I0706 23:18:38.723691 2701 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/104def78-52ea-4efd-93f4-d3d940ae9b38-host-proc-sys-net\") on node \"ci-4230-2-1-3-0a35d13a56\" DevicePath \"\"" Jul 6 23:18:38.723713 kubelet[2701]: I0706 23:18:38.723706 2701 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b7be8a7a-8777-4cd2-8363-3f5e9cd2b0db-cilium-config-path\") on node \"ci-4230-2-1-3-0a35d13a56\" DevicePath \"\"" Jul 6 23:18:38.723713 kubelet[2701]: I0706 23:18:38.723721 2701 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/104def78-52ea-4efd-93f4-d3d940ae9b38-hubble-tls\") on node \"ci-4230-2-1-3-0a35d13a56\" DevicePath \"\"" Jul 6 23:18:38.723713 kubelet[2701]: I0706 23:18:38.723733 2701 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/104def78-52ea-4efd-93f4-d3d940ae9b38-xtables-lock\") on node \"ci-4230-2-1-3-0a35d13a56\" DevicePath \"\"" Jul 6 23:18:38.724197 kubelet[2701]: I0706 23:18:38.723745 2701 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/104def78-52ea-4efd-93f4-d3d940ae9b38-cilium-run\") on node \"ci-4230-2-1-3-0a35d13a56\" DevicePath \"\"" Jul 6 23:18:38.724197 kubelet[2701]: I0706 23:18:38.723756 2701 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/104def78-52ea-4efd-93f4-d3d940ae9b38-cilium-cgroup\") on node 
\"ci-4230-2-1-3-0a35d13a56\" DevicePath \"\"" Jul 6 23:18:38.724197 kubelet[2701]: I0706 23:18:38.723765 2701 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/104def78-52ea-4efd-93f4-d3d940ae9b38-cni-path\") on node \"ci-4230-2-1-3-0a35d13a56\" DevicePath \"\"" Jul 6 23:18:38.724197 kubelet[2701]: I0706 23:18:38.723799 2701 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/104def78-52ea-4efd-93f4-d3d940ae9b38-lib-modules\") on node \"ci-4230-2-1-3-0a35d13a56\" DevicePath \"\"" Jul 6 23:18:38.724197 kubelet[2701]: I0706 23:18:38.723810 2701 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-fskcw\" (UniqueName: \"kubernetes.io/projected/104def78-52ea-4efd-93f4-d3d940ae9b38-kube-api-access-fskcw\") on node \"ci-4230-2-1-3-0a35d13a56\" DevicePath \"\"" Jul 6 23:18:38.724197 kubelet[2701]: I0706 23:18:38.723822 2701 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-vdrq6\" (UniqueName: \"kubernetes.io/projected/b7be8a7a-8777-4cd2-8363-3f5e9cd2b0db-kube-api-access-vdrq6\") on node \"ci-4230-2-1-3-0a35d13a56\" DevicePath \"\"" Jul 6 23:18:38.724197 kubelet[2701]: I0706 23:18:38.723833 2701 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/104def78-52ea-4efd-93f4-d3d940ae9b38-host-proc-sys-kernel\") on node \"ci-4230-2-1-3-0a35d13a56\" DevicePath \"\"" Jul 6 23:18:38.724197 kubelet[2701]: I0706 23:18:38.723848 2701 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/104def78-52ea-4efd-93f4-d3d940ae9b38-clustermesh-secrets\") on node \"ci-4230-2-1-3-0a35d13a56\" DevicePath \"\"" Jul 6 23:18:38.724650 kubelet[2701]: I0706 23:18:38.723858 2701 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: 
\"kubernetes.io/host-path/104def78-52ea-4efd-93f4-d3d940ae9b38-bpf-maps\") on node \"ci-4230-2-1-3-0a35d13a56\" DevicePath \"\"" Jul 6 23:18:38.724650 kubelet[2701]: I0706 23:18:38.723870 2701 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/104def78-52ea-4efd-93f4-d3d940ae9b38-cilium-config-path\") on node \"ci-4230-2-1-3-0a35d13a56\" DevicePath \"\"" Jul 6 23:18:38.724650 kubelet[2701]: I0706 23:18:38.723883 2701 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/104def78-52ea-4efd-93f4-d3d940ae9b38-hostproc\") on node \"ci-4230-2-1-3-0a35d13a56\" DevicePath \"\"" Jul 6 23:18:38.876483 kubelet[2701]: I0706 23:18:38.876350 2701 scope.go:117] "RemoveContainer" containerID="3f313c91c29354e025562271c6139165e1bbd460d57c6f6213e0bd81cebc05e7" Jul 6 23:18:38.880735 containerd[1504]: time="2025-07-06T23:18:38.880493017Z" level=info msg="RemoveContainer for \"3f313c91c29354e025562271c6139165e1bbd460d57c6f6213e0bd81cebc05e7\"" Jul 6 23:18:38.887052 containerd[1504]: time="2025-07-06T23:18:38.886374604Z" level=info msg="RemoveContainer for \"3f313c91c29354e025562271c6139165e1bbd460d57c6f6213e0bd81cebc05e7\" returns successfully" Jul 6 23:18:38.887629 systemd[1]: Removed slice kubepods-burstable-pod104def78_52ea_4efd_93f4_d3d940ae9b38.slice - libcontainer container kubepods-burstable-pod104def78_52ea_4efd_93f4_d3d940ae9b38.slice. Jul 6 23:18:38.887912 systemd[1]: kubepods-burstable-pod104def78_52ea_4efd_93f4_d3d940ae9b38.slice: Consumed 8.439s CPU time, 125.7M memory peak, 136K read from disk, 12.9M written to disk. 
Jul 6 23:18:38.888969 kubelet[2701]: I0706 23:18:38.888311 2701 scope.go:117] "RemoveContainer" containerID="ac8e6579538cb29a81d27ffa7eba31f1e1b857bb83eb72f542cc1e1a3525c069" Jul 6 23:18:38.893692 systemd[1]: Removed slice kubepods-besteffort-podb7be8a7a_8777_4cd2_8363_3f5e9cd2b0db.slice - libcontainer container kubepods-besteffort-podb7be8a7a_8777_4cd2_8363_3f5e9cd2b0db.slice. Jul 6 23:18:38.896189 containerd[1504]: time="2025-07-06T23:18:38.896097301Z" level=info msg="RemoveContainer for \"ac8e6579538cb29a81d27ffa7eba31f1e1b857bb83eb72f542cc1e1a3525c069\"" Jul 6 23:18:38.904456 containerd[1504]: time="2025-07-06T23:18:38.904400972Z" level=info msg="RemoveContainer for \"ac8e6579538cb29a81d27ffa7eba31f1e1b857bb83eb72f542cc1e1a3525c069\" returns successfully" Jul 6 23:18:38.906097 kubelet[2701]: I0706 23:18:38.905679 2701 scope.go:117] "RemoveContainer" containerID="77e45151578947b8133d23a376cdaef611a135f1dab6f7c7c63cf757c97eac30" Jul 6 23:18:38.907566 containerd[1504]: time="2025-07-06T23:18:38.907450547Z" level=info msg="RemoveContainer for \"77e45151578947b8133d23a376cdaef611a135f1dab6f7c7c63cf757c97eac30\"" Jul 6 23:18:38.912460 containerd[1504]: time="2025-07-06T23:18:38.912396797Z" level=info msg="RemoveContainer for \"77e45151578947b8133d23a376cdaef611a135f1dab6f7c7c63cf757c97eac30\" returns successfully" Jul 6 23:18:38.914326 kubelet[2701]: I0706 23:18:38.914155 2701 scope.go:117] "RemoveContainer" containerID="6c2b38de0cef84170bf239afb7106304f3e661ef2eef628e396ecbad838f4031" Jul 6 23:18:38.916085 containerd[1504]: time="2025-07-06T23:18:38.915892421Z" level=info msg="RemoveContainer for \"6c2b38de0cef84170bf239afb7106304f3e661ef2eef628e396ecbad838f4031\"" Jul 6 23:18:38.926833 containerd[1504]: time="2025-07-06T23:18:38.925967444Z" level=info msg="RemoveContainer for \"6c2b38de0cef84170bf239afb7106304f3e661ef2eef628e396ecbad838f4031\" returns successfully" Jul 6 23:18:38.927951 kubelet[2701]: I0706 23:18:38.927923 2701 scope.go:117] "RemoveContainer" 
containerID="1f478a3307cc11954e953a7767be285703fe17ff314b6e2f8772c9bbc99d5a35" Jul 6 23:18:38.933342 containerd[1504]: time="2025-07-06T23:18:38.933299737Z" level=info msg="RemoveContainer for \"1f478a3307cc11954e953a7767be285703fe17ff314b6e2f8772c9bbc99d5a35\"" Jul 6 23:18:38.938694 containerd[1504]: time="2025-07-06T23:18:38.938644434Z" level=info msg="RemoveContainer for \"1f478a3307cc11954e953a7767be285703fe17ff314b6e2f8772c9bbc99d5a35\" returns successfully" Jul 6 23:18:38.939717 kubelet[2701]: I0706 23:18:38.939680 2701 scope.go:117] "RemoveContainer" containerID="3f313c91c29354e025562271c6139165e1bbd460d57c6f6213e0bd81cebc05e7" Jul 6 23:18:38.940719 containerd[1504]: time="2025-07-06T23:18:38.940241583Z" level=error msg="ContainerStatus for \"3f313c91c29354e025562271c6139165e1bbd460d57c6f6213e0bd81cebc05e7\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"3f313c91c29354e025562271c6139165e1bbd460d57c6f6213e0bd81cebc05e7\": not found" Jul 6 23:18:38.940931 kubelet[2701]: E0706 23:18:38.940551 2701 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"3f313c91c29354e025562271c6139165e1bbd460d57c6f6213e0bd81cebc05e7\": not found" containerID="3f313c91c29354e025562271c6139165e1bbd460d57c6f6213e0bd81cebc05e7" Jul 6 23:18:38.940931 kubelet[2701]: I0706 23:18:38.940584 2701 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"3f313c91c29354e025562271c6139165e1bbd460d57c6f6213e0bd81cebc05e7"} err="failed to get container status \"3f313c91c29354e025562271c6139165e1bbd460d57c6f6213e0bd81cebc05e7\": rpc error: code = NotFound desc = an error occurred when try to find container \"3f313c91c29354e025562271c6139165e1bbd460d57c6f6213e0bd81cebc05e7\": not found" Jul 6 23:18:38.940931 kubelet[2701]: I0706 23:18:38.940669 2701 scope.go:117] "RemoveContainer" 
containerID="ac8e6579538cb29a81d27ffa7eba31f1e1b857bb83eb72f542cc1e1a3525c069" Jul 6 23:18:38.942107 containerd[1504]: time="2025-07-06T23:18:38.941683169Z" level=error msg="ContainerStatus for \"ac8e6579538cb29a81d27ffa7eba31f1e1b857bb83eb72f542cc1e1a3525c069\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ac8e6579538cb29a81d27ffa7eba31f1e1b857bb83eb72f542cc1e1a3525c069\": not found" Jul 6 23:18:38.942196 kubelet[2701]: E0706 23:18:38.941859 2701 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ac8e6579538cb29a81d27ffa7eba31f1e1b857bb83eb72f542cc1e1a3525c069\": not found" containerID="ac8e6579538cb29a81d27ffa7eba31f1e1b857bb83eb72f542cc1e1a3525c069" Jul 6 23:18:38.942196 kubelet[2701]: I0706 23:18:38.941902 2701 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ac8e6579538cb29a81d27ffa7eba31f1e1b857bb83eb72f542cc1e1a3525c069"} err="failed to get container status \"ac8e6579538cb29a81d27ffa7eba31f1e1b857bb83eb72f542cc1e1a3525c069\": rpc error: code = NotFound desc = an error occurred when try to find container \"ac8e6579538cb29a81d27ffa7eba31f1e1b857bb83eb72f542cc1e1a3525c069\": not found" Jul 6 23:18:38.942196 kubelet[2701]: I0706 23:18:38.941920 2701 scope.go:117] "RemoveContainer" containerID="77e45151578947b8133d23a376cdaef611a135f1dab6f7c7c63cf757c97eac30" Jul 6 23:18:38.942285 containerd[1504]: time="2025-07-06T23:18:38.942166658Z" level=error msg="ContainerStatus for \"77e45151578947b8133d23a376cdaef611a135f1dab6f7c7c63cf757c97eac30\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"77e45151578947b8133d23a376cdaef611a135f1dab6f7c7c63cf757c97eac30\": not found" Jul 6 23:18:38.942414 kubelet[2701]: E0706 23:18:38.942327 2701 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an 
error occurred when try to find container \"77e45151578947b8133d23a376cdaef611a135f1dab6f7c7c63cf757c97eac30\": not found" containerID="77e45151578947b8133d23a376cdaef611a135f1dab6f7c7c63cf757c97eac30" Jul 6 23:18:38.942414 kubelet[2701]: I0706 23:18:38.942354 2701 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"77e45151578947b8133d23a376cdaef611a135f1dab6f7c7c63cf757c97eac30"} err="failed to get container status \"77e45151578947b8133d23a376cdaef611a135f1dab6f7c7c63cf757c97eac30\": rpc error: code = NotFound desc = an error occurred when try to find container \"77e45151578947b8133d23a376cdaef611a135f1dab6f7c7c63cf757c97eac30\": not found" Jul 6 23:18:38.942414 kubelet[2701]: I0706 23:18:38.942371 2701 scope.go:117] "RemoveContainer" containerID="6c2b38de0cef84170bf239afb7106304f3e661ef2eef628e396ecbad838f4031" Jul 6 23:18:38.943214 containerd[1504]: time="2025-07-06T23:18:38.942580265Z" level=error msg="ContainerStatus for \"6c2b38de0cef84170bf239afb7106304f3e661ef2eef628e396ecbad838f4031\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"6c2b38de0cef84170bf239afb7106304f3e661ef2eef628e396ecbad838f4031\": not found" Jul 6 23:18:38.943294 kubelet[2701]: E0706 23:18:38.942947 2701 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"6c2b38de0cef84170bf239afb7106304f3e661ef2eef628e396ecbad838f4031\": not found" containerID="6c2b38de0cef84170bf239afb7106304f3e661ef2eef628e396ecbad838f4031" Jul 6 23:18:38.943294 kubelet[2701]: I0706 23:18:38.942978 2701 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"6c2b38de0cef84170bf239afb7106304f3e661ef2eef628e396ecbad838f4031"} err="failed to get container status \"6c2b38de0cef84170bf239afb7106304f3e661ef2eef628e396ecbad838f4031\": rpc error: code = NotFound desc = an error occurred when try 
to find container \"6c2b38de0cef84170bf239afb7106304f3e661ef2eef628e396ecbad838f4031\": not found" Jul 6 23:18:38.943294 kubelet[2701]: I0706 23:18:38.943000 2701 scope.go:117] "RemoveContainer" containerID="1f478a3307cc11954e953a7767be285703fe17ff314b6e2f8772c9bbc99d5a35" Jul 6 23:18:38.943522 containerd[1504]: time="2025-07-06T23:18:38.943212117Z" level=error msg="ContainerStatus for \"1f478a3307cc11954e953a7767be285703fe17ff314b6e2f8772c9bbc99d5a35\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"1f478a3307cc11954e953a7767be285703fe17ff314b6e2f8772c9bbc99d5a35\": not found" Jul 6 23:18:38.944958 kubelet[2701]: E0706 23:18:38.944325 2701 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"1f478a3307cc11954e953a7767be285703fe17ff314b6e2f8772c9bbc99d5a35\": not found" containerID="1f478a3307cc11954e953a7767be285703fe17ff314b6e2f8772c9bbc99d5a35" Jul 6 23:18:38.944958 kubelet[2701]: I0706 23:18:38.944357 2701 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"1f478a3307cc11954e953a7767be285703fe17ff314b6e2f8772c9bbc99d5a35"} err="failed to get container status \"1f478a3307cc11954e953a7767be285703fe17ff314b6e2f8772c9bbc99d5a35\": rpc error: code = NotFound desc = an error occurred when try to find container \"1f478a3307cc11954e953a7767be285703fe17ff314b6e2f8772c9bbc99d5a35\": not found" Jul 6 23:18:38.944958 kubelet[2701]: I0706 23:18:38.944485 2701 scope.go:117] "RemoveContainer" containerID="28414dbd131fc934091e4811a62c317d0d2523c319c6e79b63596cffccac4cf8" Jul 6 23:18:38.946299 containerd[1504]: time="2025-07-06T23:18:38.946255572Z" level=info msg="RemoveContainer for \"28414dbd131fc934091e4811a62c317d0d2523c319c6e79b63596cffccac4cf8\"" Jul 6 23:18:38.949666 containerd[1504]: time="2025-07-06T23:18:38.949626393Z" level=info msg="RemoveContainer for 
\"28414dbd131fc934091e4811a62c317d0d2523c319c6e79b63596cffccac4cf8\" returns successfully" Jul 6 23:18:38.950198 kubelet[2701]: I0706 23:18:38.950075 2701 scope.go:117] "RemoveContainer" containerID="28414dbd131fc934091e4811a62c317d0d2523c319c6e79b63596cffccac4cf8" Jul 6 23:18:38.950811 containerd[1504]: time="2025-07-06T23:18:38.950491929Z" level=error msg="ContainerStatus for \"28414dbd131fc934091e4811a62c317d0d2523c319c6e79b63596cffccac4cf8\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"28414dbd131fc934091e4811a62c317d0d2523c319c6e79b63596cffccac4cf8\": not found" Jul 6 23:18:38.950889 kubelet[2701]: E0706 23:18:38.950732 2701 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"28414dbd131fc934091e4811a62c317d0d2523c319c6e79b63596cffccac4cf8\": not found" containerID="28414dbd131fc934091e4811a62c317d0d2523c319c6e79b63596cffccac4cf8" Jul 6 23:18:38.950889 kubelet[2701]: I0706 23:18:38.950757 2701 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"28414dbd131fc934091e4811a62c317d0d2523c319c6e79b63596cffccac4cf8"} err="failed to get container status \"28414dbd131fc934091e4811a62c317d0d2523c319c6e79b63596cffccac4cf8\": rpc error: code = NotFound desc = an error occurred when try to find container \"28414dbd131fc934091e4811a62c317d0d2523c319c6e79b63596cffccac4cf8\": not found" Jul 6 23:18:39.249078 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-eee0137e2f644c279974c4c69541f9ae085c4e810ad500dd5358ab359ed0458f-rootfs.mount: Deactivated successfully. Jul 6 23:18:39.249253 systemd[1]: var-lib-kubelet-pods-b7be8a7a\x2d8777\x2d4cd2\x2d8363\x2d3f5e9cd2b0db-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dvdrq6.mount: Deactivated successfully. 
Jul 6 23:18:39.249364 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e8f146d015dc94751a8bd9bbf1e14aa055896f2a1a8974a8bdb583c12ce57043-rootfs.mount: Deactivated successfully. Jul 6 23:18:39.249457 systemd[1]: var-lib-kubelet-pods-104def78\x2d52ea\x2d4efd\x2d93f4\x2dd3d940ae9b38-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dfskcw.mount: Deactivated successfully. Jul 6 23:18:39.249567 systemd[1]: var-lib-kubelet-pods-104def78\x2d52ea\x2d4efd\x2d93f4\x2dd3d940ae9b38-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jul 6 23:18:39.249785 systemd[1]: var-lib-kubelet-pods-104def78\x2d52ea\x2d4efd\x2d93f4\x2dd3d940ae9b38-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jul 6 23:18:39.904141 kubelet[2701]: I0706 23:18:39.903755 2701 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="104def78-52ea-4efd-93f4-d3d940ae9b38" path="/var/lib/kubelet/pods/104def78-52ea-4efd-93f4-d3d940ae9b38/volumes" Jul 6 23:18:39.904537 kubelet[2701]: I0706 23:18:39.904312 2701 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b7be8a7a-8777-4cd2-8363-3f5e9cd2b0db" path="/var/lib/kubelet/pods/b7be8a7a-8777-4cd2-8363-3f5e9cd2b0db/volumes" Jul 6 23:18:40.326660 sshd[4343]: Connection closed by 139.178.89.65 port 51230 Jul 6 23:18:40.327616 sshd-session[4341]: pam_unix(sshd:session): session closed for user core Jul 6 23:18:40.332654 systemd-logind[1483]: Session 20 logged out. Waiting for processes to exit. Jul 6 23:18:40.334026 systemd[1]: sshd@27-49.13.31.190:22-139.178.89.65:51230.service: Deactivated successfully. Jul 6 23:18:40.338125 systemd[1]: session-20.scope: Deactivated successfully. Jul 6 23:18:40.338602 systemd[1]: session-20.scope: Consumed 1.396s CPU time, 23.5M memory peak. Jul 6 23:18:40.340819 systemd-logind[1483]: Removed session 20. 
Jul 6 23:18:40.521916 systemd[1]: Started sshd@28-49.13.31.190:22-139.178.89.65:59414.service - OpenSSH per-connection server daemon (139.178.89.65:59414). Jul 6 23:18:41.083556 kubelet[2701]: E0706 23:18:41.083387 2701 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jul 6 23:18:41.634262 sshd[4509]: Accepted publickey for core from 139.178.89.65 port 59414 ssh2: RSA SHA256:3q3uKGA7TZmlUpdAn9FattmwR+Ld0dURBV17/HvM010 Jul 6 23:18:41.635422 sshd-session[4509]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:18:41.642002 systemd-logind[1483]: New session 21 of user core. Jul 6 23:18:41.648849 systemd[1]: Started session-21.scope - Session 21 of User core. Jul 6 23:18:41.832490 update_engine[1485]: I20250706 23:18:41.832371 1485 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jul 6 23:18:41.832960 update_engine[1485]: I20250706 23:18:41.832722 1485 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jul 6 23:18:41.833072 update_engine[1485]: I20250706 23:18:41.833014 1485 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Jul 6 23:18:41.833784 update_engine[1485]: E20250706 23:18:41.833684 1485 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jul 6 23:18:41.833784 update_engine[1485]: I20250706 23:18:41.833750 1485 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Jul 6 23:18:42.643569 kubelet[2701]: I0706 23:18:42.643469 2701 setters.go:602] "Node became not ready" node="ci-4230-2-1-3-0a35d13a56" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-07-06T23:18:42Z","lastTransitionTime":"2025-07-06T23:18:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Jul 6 23:18:43.106640 kubelet[2701]: I0706 23:18:43.105900 2701 memory_manager.go:355] "RemoveStaleState removing state" podUID="104def78-52ea-4efd-93f4-d3d940ae9b38" containerName="cilium-agent" Jul 6 23:18:43.106640 kubelet[2701]: I0706 23:18:43.105940 2701 memory_manager.go:355] "RemoveStaleState removing state" podUID="b7be8a7a-8777-4cd2-8363-3f5e9cd2b0db" containerName="cilium-operator" Jul 6 23:18:43.113903 systemd[1]: Created slice kubepods-burstable-poda4137bfe_3117_4cde_8cb0_4befa10555c4.slice - libcontainer container kubepods-burstable-poda4137bfe_3117_4cde_8cb0_4befa10555c4.slice. 
Jul 6 23:18:43.163251 kubelet[2701]: I0706 23:18:43.162788 2701 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/a4137bfe-3117-4cde-8cb0-4befa10555c4-cilium-run\") pod \"cilium-s9mkp\" (UID: \"a4137bfe-3117-4cde-8cb0-4befa10555c4\") " pod="kube-system/cilium-s9mkp"
Jul 6 23:18:43.163251 kubelet[2701]: I0706 23:18:43.162857 2701 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/a4137bfe-3117-4cde-8cb0-4befa10555c4-bpf-maps\") pod \"cilium-s9mkp\" (UID: \"a4137bfe-3117-4cde-8cb0-4befa10555c4\") " pod="kube-system/cilium-s9mkp"
Jul 6 23:18:43.163251 kubelet[2701]: I0706 23:18:43.162894 2701 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a4137bfe-3117-4cde-8cb0-4befa10555c4-cilium-config-path\") pod \"cilium-s9mkp\" (UID: \"a4137bfe-3117-4cde-8cb0-4befa10555c4\") " pod="kube-system/cilium-s9mkp"
Jul 6 23:18:43.163251 kubelet[2701]: I0706 23:18:43.162915 2701 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/a4137bfe-3117-4cde-8cb0-4befa10555c4-cilium-ipsec-secrets\") pod \"cilium-s9mkp\" (UID: \"a4137bfe-3117-4cde-8cb0-4befa10555c4\") " pod="kube-system/cilium-s9mkp"
Jul 6 23:18:43.163251 kubelet[2701]: I0706 23:18:43.162939 2701 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/a4137bfe-3117-4cde-8cb0-4befa10555c4-cilium-cgroup\") pod \"cilium-s9mkp\" (UID: \"a4137bfe-3117-4cde-8cb0-4befa10555c4\") " pod="kube-system/cilium-s9mkp"
Jul 6 23:18:43.163251 kubelet[2701]: I0706 23:18:43.162957 2701 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/a4137bfe-3117-4cde-8cb0-4befa10555c4-host-proc-sys-kernel\") pod \"cilium-s9mkp\" (UID: \"a4137bfe-3117-4cde-8cb0-4befa10555c4\") " pod="kube-system/cilium-s9mkp"
Jul 6 23:18:43.163585 kubelet[2701]: I0706 23:18:43.162980 2701 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/a4137bfe-3117-4cde-8cb0-4befa10555c4-hubble-tls\") pod \"cilium-s9mkp\" (UID: \"a4137bfe-3117-4cde-8cb0-4befa10555c4\") " pod="kube-system/cilium-s9mkp"
Jul 6 23:18:43.163585 kubelet[2701]: I0706 23:18:43.162999 2701 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a4137bfe-3117-4cde-8cb0-4befa10555c4-xtables-lock\") pod \"cilium-s9mkp\" (UID: \"a4137bfe-3117-4cde-8cb0-4befa10555c4\") " pod="kube-system/cilium-s9mkp"
Jul 6 23:18:43.163585 kubelet[2701]: I0706 23:18:43.163021 2701 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v7tvn\" (UniqueName: \"kubernetes.io/projected/a4137bfe-3117-4cde-8cb0-4befa10555c4-kube-api-access-v7tvn\") pod \"cilium-s9mkp\" (UID: \"a4137bfe-3117-4cde-8cb0-4befa10555c4\") " pod="kube-system/cilium-s9mkp"
Jul 6 23:18:43.163585 kubelet[2701]: I0706 23:18:43.163044 2701 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/a4137bfe-3117-4cde-8cb0-4befa10555c4-hostproc\") pod \"cilium-s9mkp\" (UID: \"a4137bfe-3117-4cde-8cb0-4befa10555c4\") " pod="kube-system/cilium-s9mkp"
Jul 6 23:18:43.163585 kubelet[2701]: I0706 23:18:43.163067 2701 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a4137bfe-3117-4cde-8cb0-4befa10555c4-etc-cni-netd\") pod \"cilium-s9mkp\" (UID: \"a4137bfe-3117-4cde-8cb0-4befa10555c4\") " pod="kube-system/cilium-s9mkp"
Jul 6 23:18:43.163585 kubelet[2701]: I0706 23:18:43.163085 2701 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a4137bfe-3117-4cde-8cb0-4befa10555c4-lib-modules\") pod \"cilium-s9mkp\" (UID: \"a4137bfe-3117-4cde-8cb0-4befa10555c4\") " pod="kube-system/cilium-s9mkp"
Jul 6 23:18:43.163739 kubelet[2701]: I0706 23:18:43.163105 2701 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/a4137bfe-3117-4cde-8cb0-4befa10555c4-cni-path\") pod \"cilium-s9mkp\" (UID: \"a4137bfe-3117-4cde-8cb0-4befa10555c4\") " pod="kube-system/cilium-s9mkp"
Jul 6 23:18:43.163739 kubelet[2701]: I0706 23:18:43.163128 2701 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/a4137bfe-3117-4cde-8cb0-4befa10555c4-clustermesh-secrets\") pod \"cilium-s9mkp\" (UID: \"a4137bfe-3117-4cde-8cb0-4befa10555c4\") " pod="kube-system/cilium-s9mkp"
Jul 6 23:18:43.163739 kubelet[2701]: I0706 23:18:43.163148 2701 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/a4137bfe-3117-4cde-8cb0-4befa10555c4-host-proc-sys-net\") pod \"cilium-s9mkp\" (UID: \"a4137bfe-3117-4cde-8cb0-4befa10555c4\") " pod="kube-system/cilium-s9mkp"
Jul 6 23:18:43.292944 sshd[4511]: Connection closed by 139.178.89.65 port 59414
Jul 6 23:18:43.296739 sshd-session[4509]: pam_unix(sshd:session): session closed for user core
Jul 6 23:18:43.303855 systemd[1]: sshd@28-49.13.31.190:22-139.178.89.65:59414.service: Deactivated successfully.
Jul 6 23:18:43.306859 systemd[1]: session-21.scope: Deactivated successfully.
Jul 6 23:18:43.308253 systemd-logind[1483]: Session 21 logged out. Waiting for processes to exit.
Jul 6 23:18:43.311113 systemd-logind[1483]: Removed session 21.
Jul 6 23:18:43.422616 containerd[1504]: time="2025-07-06T23:18:43.422405234Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-s9mkp,Uid:a4137bfe-3117-4cde-8cb0-4befa10555c4,Namespace:kube-system,Attempt:0,}"
Jul 6 23:18:43.465343 containerd[1504]: time="2025-07-06T23:18:43.464891096Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 6 23:18:43.465343 containerd[1504]: time="2025-07-06T23:18:43.464997418Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 6 23:18:43.465343 containerd[1504]: time="2025-07-06T23:18:43.465014939Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 6 23:18:43.466012 containerd[1504]: time="2025-07-06T23:18:43.465974596Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 6 23:18:43.483874 systemd[1]: Started sshd@29-49.13.31.190:22-139.178.89.65:59416.service - OpenSSH per-connection server daemon (139.178.89.65:59416).
Jul 6 23:18:43.487625 systemd[1]: Started cri-containerd-b12050d5025014216698641faace2729d8c8265ae55ebb0233931d3d22020b70.scope - libcontainer container b12050d5025014216698641faace2729d8c8265ae55ebb0233931d3d22020b70.
Jul 6 23:18:43.520999 containerd[1504]: time="2025-07-06T23:18:43.520630722Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-s9mkp,Uid:a4137bfe-3117-4cde-8cb0-4befa10555c4,Namespace:kube-system,Attempt:0,} returns sandbox id \"b12050d5025014216698641faace2729d8c8265ae55ebb0233931d3d22020b70\""
Jul 6 23:18:43.525541 containerd[1504]: time="2025-07-06T23:18:43.525387810Z" level=info msg="CreateContainer within sandbox \"b12050d5025014216698641faace2729d8c8265ae55ebb0233931d3d22020b70\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Jul 6 23:18:43.538912 containerd[1504]: time="2025-07-06T23:18:43.538813977Z" level=info msg="CreateContainer within sandbox \"b12050d5025014216698641faace2729d8c8265ae55ebb0233931d3d22020b70\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"8e80db26b657a3c306d9e4f640118c865db56b73217a27a3d8a60d62d9dc96b4\""
Jul 6 23:18:43.540544 containerd[1504]: time="2025-07-06T23:18:43.540417486Z" level=info msg="StartContainer for \"8e80db26b657a3c306d9e4f640118c865db56b73217a27a3d8a60d62d9dc96b4\""
Jul 6 23:18:43.580005 systemd[1]: Started cri-containerd-8e80db26b657a3c306d9e4f640118c865db56b73217a27a3d8a60d62d9dc96b4.scope - libcontainer container 8e80db26b657a3c306d9e4f640118c865db56b73217a27a3d8a60d62d9dc96b4.
Jul 6 23:18:43.618282 containerd[1504]: time="2025-07-06T23:18:43.618199078Z" level=info msg="StartContainer for \"8e80db26b657a3c306d9e4f640118c865db56b73217a27a3d8a60d62d9dc96b4\" returns successfully"
Jul 6 23:18:43.641261 systemd[1]: cri-containerd-8e80db26b657a3c306d9e4f640118c865db56b73217a27a3d8a60d62d9dc96b4.scope: Deactivated successfully.
Jul 6 23:18:43.682873 containerd[1504]: time="2025-07-06T23:18:43.682422460Z" level=info msg="shim disconnected" id=8e80db26b657a3c306d9e4f640118c865db56b73217a27a3d8a60d62d9dc96b4 namespace=k8s.io
Jul 6 23:18:43.682873 containerd[1504]: time="2025-07-06T23:18:43.682504301Z" level=warning msg="cleaning up after shim disconnected" id=8e80db26b657a3c306d9e4f640118c865db56b73217a27a3d8a60d62d9dc96b4 namespace=k8s.io
Jul 6 23:18:43.682873 containerd[1504]: time="2025-07-06T23:18:43.682545062Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 6 23:18:43.908273 containerd[1504]: time="2025-07-06T23:18:43.908065133Z" level=info msg="CreateContainer within sandbox \"b12050d5025014216698641faace2729d8c8265ae55ebb0233931d3d22020b70\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Jul 6 23:18:43.924982 containerd[1504]: time="2025-07-06T23:18:43.924921683Z" level=info msg="CreateContainer within sandbox \"b12050d5025014216698641faace2729d8c8265ae55ebb0233931d3d22020b70\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"7a534bb7659d05213671e00ed3391befa534b406098ad401614e5da523a66dce\""
Jul 6 23:18:43.925912 containerd[1504]: time="2025-07-06T23:18:43.925788019Z" level=info msg="StartContainer for \"7a534bb7659d05213671e00ed3391befa534b406098ad401614e5da523a66dce\""
Jul 6 23:18:43.963863 systemd[1]: Started cri-containerd-7a534bb7659d05213671e00ed3391befa534b406098ad401614e5da523a66dce.scope - libcontainer container 7a534bb7659d05213671e00ed3391befa534b406098ad401614e5da523a66dce.
Jul 6 23:18:44.007794 containerd[1504]: time="2025-07-06T23:18:44.006897952Z" level=info msg="StartContainer for \"7a534bb7659d05213671e00ed3391befa534b406098ad401614e5da523a66dce\" returns successfully"
Jul 6 23:18:44.016719 systemd[1]: cri-containerd-7a534bb7659d05213671e00ed3391befa534b406098ad401614e5da523a66dce.scope: Deactivated successfully.
Jul 6 23:18:44.047071 containerd[1504]: time="2025-07-06T23:18:44.046978291Z" level=info msg="shim disconnected" id=7a534bb7659d05213671e00ed3391befa534b406098ad401614e5da523a66dce namespace=k8s.io
Jul 6 23:18:44.047071 containerd[1504]: time="2025-07-06T23:18:44.047068333Z" level=warning msg="cleaning up after shim disconnected" id=7a534bb7659d05213671e00ed3391befa534b406098ad401614e5da523a66dce namespace=k8s.io
Jul 6 23:18:44.047442 containerd[1504]: time="2025-07-06T23:18:44.047088773Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 6 23:18:44.587424 sshd[4553]: Accepted publickey for core from 139.178.89.65 port 59416 ssh2: RSA SHA256:3q3uKGA7TZmlUpdAn9FattmwR+Ld0dURBV17/HvM010
Jul 6 23:18:44.589741 sshd-session[4553]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:18:44.595554 systemd-logind[1483]: New session 22 of user core.
Jul 6 23:18:44.602929 systemd[1]: Started session-22.scope - Session 22 of User core.
Jul 6 23:18:44.912322 containerd[1504]: time="2025-07-06T23:18:44.912082811Z" level=info msg="CreateContainer within sandbox \"b12050d5025014216698641faace2729d8c8265ae55ebb0233931d3d22020b70\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Jul 6 23:18:44.945474 containerd[1504]: time="2025-07-06T23:18:44.945403426Z" level=info msg="CreateContainer within sandbox \"b12050d5025014216698641faace2729d8c8265ae55ebb0233931d3d22020b70\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"6be907e5ca90ac53fd8f360658e56c775d6b6491c132999b0e73c2a96fc31e8c\""
Jul 6 23:18:44.947316 containerd[1504]: time="2025-07-06T23:18:44.947017296Z" level=info msg="StartContainer for \"6be907e5ca90ac53fd8f360658e56c775d6b6491c132999b0e73c2a96fc31e8c\""
Jul 6 23:18:44.985728 systemd[1]: Started cri-containerd-6be907e5ca90ac53fd8f360658e56c775d6b6491c132999b0e73c2a96fc31e8c.scope - libcontainer container 6be907e5ca90ac53fd8f360658e56c775d6b6491c132999b0e73c2a96fc31e8c.
Jul 6 23:18:45.026724 containerd[1504]: time="2025-07-06T23:18:45.026348001Z" level=info msg="StartContainer for \"6be907e5ca90ac53fd8f360658e56c775d6b6491c132999b0e73c2a96fc31e8c\" returns successfully"
Jul 6 23:18:45.032151 systemd[1]: cri-containerd-6be907e5ca90ac53fd8f360658e56c775d6b6491c132999b0e73c2a96fc31e8c.scope: Deactivated successfully.
Jul 6 23:18:45.072834 containerd[1504]: time="2025-07-06T23:18:45.072721178Z" level=info msg="shim disconnected" id=6be907e5ca90ac53fd8f360658e56c775d6b6491c132999b0e73c2a96fc31e8c namespace=k8s.io
Jul 6 23:18:45.072834 containerd[1504]: time="2025-07-06T23:18:45.072814660Z" level=warning msg="cleaning up after shim disconnected" id=6be907e5ca90ac53fd8f360658e56c775d6b6491c132999b0e73c2a96fc31e8c namespace=k8s.io
Jul 6 23:18:45.072834 containerd[1504]: time="2025-07-06T23:18:45.072832220Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 6 23:18:45.275117 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6be907e5ca90ac53fd8f360658e56c775d6b6491c132999b0e73c2a96fc31e8c-rootfs.mount: Deactivated successfully.
Jul 6 23:18:45.340393 sshd[4698]: Connection closed by 139.178.89.65 port 59416
Jul 6 23:18:45.341004 sshd-session[4553]: pam_unix(sshd:session): session closed for user core
Jul 6 23:18:45.344832 systemd[1]: sshd@29-49.13.31.190:22-139.178.89.65:59416.service: Deactivated successfully.
Jul 6 23:18:45.347241 systemd[1]: session-22.scope: Deactivated successfully.
Jul 6 23:18:45.351043 systemd-logind[1483]: Session 22 logged out. Waiting for processes to exit.
Jul 6 23:18:45.353107 systemd-logind[1483]: Removed session 22.
Jul 6 23:18:45.542040 systemd[1]: Started sshd@30-49.13.31.190:22-139.178.89.65:59420.service - OpenSSH per-connection server daemon (139.178.89.65:59420).
Jul 6 23:18:45.918010 containerd[1504]: time="2025-07-06T23:18:45.917877248Z" level=info msg="CreateContainer within sandbox \"b12050d5025014216698641faace2729d8c8265ae55ebb0233931d3d22020b70\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jul 6 23:18:45.944910 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1828196392.mount: Deactivated successfully.
Jul 6 23:18:45.947104 containerd[1504]: time="2025-07-06T23:18:45.946790102Z" level=info msg="CreateContainer within sandbox \"b12050d5025014216698641faace2729d8c8265ae55ebb0233931d3d22020b70\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"bbe800ae0c7b863c4e43dbe3b60a6bbf9a0d1b6164100d601b233fe997d3ad36\""
Jul 6 23:18:45.948739 containerd[1504]: time="2025-07-06T23:18:45.947785481Z" level=info msg="StartContainer for \"bbe800ae0c7b863c4e43dbe3b60a6bbf9a0d1b6164100d601b233fe997d3ad36\""
Jul 6 23:18:45.995222 systemd[1]: Started cri-containerd-bbe800ae0c7b863c4e43dbe3b60a6bbf9a0d1b6164100d601b233fe997d3ad36.scope - libcontainer container bbe800ae0c7b863c4e43dbe3b60a6bbf9a0d1b6164100d601b233fe997d3ad36.
Jul 6 23:18:46.028346 systemd[1]: cri-containerd-bbe800ae0c7b863c4e43dbe3b60a6bbf9a0d1b6164100d601b233fe997d3ad36.scope: Deactivated successfully.
Jul 6 23:18:46.032571 containerd[1504]: time="2025-07-06T23:18:46.032390807Z" level=info msg="StartContainer for \"bbe800ae0c7b863c4e43dbe3b60a6bbf9a0d1b6164100d601b233fe997d3ad36\" returns successfully"
Jul 6 23:18:46.066986 containerd[1504]: time="2025-07-06T23:18:46.066679723Z" level=info msg="shim disconnected" id=bbe800ae0c7b863c4e43dbe3b60a6bbf9a0d1b6164100d601b233fe997d3ad36 namespace=k8s.io
Jul 6 23:18:46.066986 containerd[1504]: time="2025-07-06T23:18:46.066756164Z" level=warning msg="cleaning up after shim disconnected" id=bbe800ae0c7b863c4e43dbe3b60a6bbf9a0d1b6164100d601b233fe997d3ad36 namespace=k8s.io
Jul 6 23:18:46.066986 containerd[1504]: time="2025-07-06T23:18:46.066771804Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 6 23:18:46.084962 kubelet[2701]: E0706 23:18:46.084894 2701 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jul 6 23:18:46.273670 systemd[1]: run-containerd-runc-k8s.io-bbe800ae0c7b863c4e43dbe3b60a6bbf9a0d1b6164100d601b233fe997d3ad36-runc.3yQpO9.mount: Deactivated successfully.
Jul 6 23:18:46.273801 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bbe800ae0c7b863c4e43dbe3b60a6bbf9a0d1b6164100d601b233fe997d3ad36-rootfs.mount: Deactivated successfully.
Jul 6 23:18:46.637175 sshd[4763]: Accepted publickey for core from 139.178.89.65 port 59420 ssh2: RSA SHA256:3q3uKGA7TZmlUpdAn9FattmwR+Ld0dURBV17/HvM010
Jul 6 23:18:46.639364 sshd-session[4763]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:18:46.646222 systemd-logind[1483]: New session 23 of user core.
Jul 6 23:18:46.650814 systemd[1]: Started session-23.scope - Session 23 of User core.
Jul 6 23:18:46.926457 containerd[1504]: time="2025-07-06T23:18:46.926073453Z" level=info msg="CreateContainer within sandbox \"b12050d5025014216698641faace2729d8c8265ae55ebb0233931d3d22020b70\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jul 6 23:18:46.949464 containerd[1504]: time="2025-07-06T23:18:46.949240482Z" level=info msg="CreateContainer within sandbox \"b12050d5025014216698641faace2729d8c8265ae55ebb0233931d3d22020b70\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"1bb9bb2d8ff742759e8a41ce829ace21f65bbef471b600cf322f7f4c9751e8d5\""
Jul 6 23:18:46.950865 containerd[1504]: time="2025-07-06T23:18:46.950647868Z" level=info msg="StartContainer for \"1bb9bb2d8ff742759e8a41ce829ace21f65bbef471b600cf322f7f4c9751e8d5\""
Jul 6 23:18:46.991850 systemd[1]: Started cri-containerd-1bb9bb2d8ff742759e8a41ce829ace21f65bbef471b600cf322f7f4c9751e8d5.scope - libcontainer container 1bb9bb2d8ff742759e8a41ce829ace21f65bbef471b600cf322f7f4c9751e8d5.
Jul 6 23:18:47.028117 containerd[1504]: time="2025-07-06T23:18:47.028056744Z" level=info msg="StartContainer for \"1bb9bb2d8ff742759e8a41ce829ace21f65bbef471b600cf322f7f4c9751e8d5\" returns successfully"
Jul 6 23:18:47.489009 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce))
Jul 6 23:18:47.952606 kubelet[2701]: I0706 23:18:47.952213 2701 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-s9mkp" podStartSLOduration=4.952190074 podStartE2EDuration="4.952190074s" podCreationTimestamp="2025-07-06 23:18:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-06 23:18:47.950209157 +0000 UTC m=+352.181599723" watchObservedRunningTime="2025-07-06 23:18:47.952190074 +0000 UTC m=+352.183580600"
Jul 6 23:18:50.577887 systemd-networkd[1397]: lxc_health: Link UP
Jul 6 23:18:50.604588 systemd-networkd[1397]: lxc_health: Gained carrier
Jul 6 23:18:51.837500 update_engine[1485]: I20250706 23:18:51.836598 1485 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Jul 6 23:18:51.837500 update_engine[1485]: I20250706 23:18:51.836864 1485 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Jul 6 23:18:51.837500 update_engine[1485]: I20250706 23:18:51.837127 1485 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Jul 6 23:18:51.839526 update_engine[1485]: E20250706 23:18:51.838312 1485 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Jul 6 23:18:51.839526 update_engine[1485]: I20250706 23:18:51.838373 1485 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Jul 6 23:18:51.839526 update_engine[1485]: I20250706 23:18:51.838382 1485 omaha_request_action.cc:617] Omaha request response:
Jul 6 23:18:51.839526 update_engine[1485]: E20250706 23:18:51.838465 1485 omaha_request_action.cc:636] Omaha request network transfer failed.
Jul 6 23:18:51.839526 update_engine[1485]: I20250706 23:18:51.838485 1485 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing.
Jul 6 23:18:51.839526 update_engine[1485]: I20250706 23:18:51.838491 1485 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Jul 6 23:18:51.839526 update_engine[1485]: I20250706 23:18:51.838495 1485 update_attempter.cc:306] Processing Done.
Jul 6 23:18:51.839526 update_engine[1485]: E20250706 23:18:51.838970 1485 update_attempter.cc:619] Update failed.
Jul 6 23:18:51.839526 update_engine[1485]: I20250706 23:18:51.838990 1485 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse
Jul 6 23:18:51.839526 update_engine[1485]: I20250706 23:18:51.838995 1485 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse)
Jul 6 23:18:51.839526 update_engine[1485]: I20250706 23:18:51.839001 1485 payload_state.cc:103] Ignoring failures until we get a valid Omaha response.
Jul 6 23:18:51.839526 update_engine[1485]: I20250706 23:18:51.839073 1485 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Jul 6 23:18:51.839526 update_engine[1485]: I20250706 23:18:51.839097 1485 omaha_request_action.cc:271] Posting an Omaha request to disabled
Jul 6 23:18:51.839526 update_engine[1485]: I20250706 23:18:51.839103 1485 omaha_request_action.cc:272] Request:
Jul 6 23:18:51.839526 update_engine[1485]:
Jul 6 23:18:51.839526 update_engine[1485]:
Jul 6 23:18:51.839526 update_engine[1485]:
Jul 6 23:18:51.839976 update_engine[1485]:
Jul 6 23:18:51.839976 update_engine[1485]:
Jul 6 23:18:51.839976 update_engine[1485]:
Jul 6 23:18:51.839976 update_engine[1485]: I20250706 23:18:51.839109 1485 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Jul 6 23:18:51.839976 update_engine[1485]: I20250706 23:18:51.839259 1485 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Jul 6 23:18:51.839976 update_engine[1485]: I20250706 23:18:51.839478 1485 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Jul 6 23:18:51.840596 update_engine[1485]: E20250706 23:18:51.840369 1485 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Jul 6 23:18:51.840596 update_engine[1485]: I20250706 23:18:51.840415 1485 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Jul 6 23:18:51.840596 update_engine[1485]: I20250706 23:18:51.840422 1485 omaha_request_action.cc:617] Omaha request response:
Jul 6 23:18:51.840596 update_engine[1485]: I20250706 23:18:51.840430 1485 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Jul 6 23:18:51.840596 update_engine[1485]: I20250706 23:18:51.840435 1485 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Jul 6 23:18:51.840596 update_engine[1485]: I20250706 23:18:51.840438 1485 update_attempter.cc:306] Processing Done.
Jul 6 23:18:51.840596 update_engine[1485]: I20250706 23:18:51.840444 1485 update_attempter.cc:310] Error event sent.
Jul 6 23:18:51.840596 update_engine[1485]: I20250706 23:18:51.840453 1485 update_check_scheduler.cc:74] Next update check in 41m49s
Jul 6 23:18:51.841036 locksmithd[1511]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0
Jul 6 23:18:51.841036 locksmithd[1511]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0
Jul 6 23:18:52.192115 systemd-networkd[1397]: lxc_health: Gained IPv6LL
Jul 6 23:18:53.776674 systemd[1]: run-containerd-runc-k8s.io-1bb9bb2d8ff742759e8a41ce829ace21f65bbef471b600cf322f7f4c9751e8d5-runc.iaD10z.mount: Deactivated successfully.
Jul 6 23:18:54.849871 systemd[1]: Started sshd@31-49.13.31.190:22-82.146.42.154:48550.service - OpenSSH per-connection server daemon (82.146.42.154:48550).
Jul 6 23:18:55.087414 sshd[5452]: Connection closed by authenticating user root 82.146.42.154 port 48550 [preauth]
Jul 6 23:18:55.093461 systemd[1]: sshd@31-49.13.31.190:22-82.146.42.154:48550.service: Deactivated successfully.
Jul 6 23:18:55.920067 containerd[1504]: time="2025-07-06T23:18:55.919921307Z" level=info msg="StopPodSandbox for \"eee0137e2f644c279974c4c69541f9ae085c4e810ad500dd5358ab359ed0458f\""
Jul 6 23:18:55.920067 containerd[1504]: time="2025-07-06T23:18:55.920031069Z" level=info msg="TearDown network for sandbox \"eee0137e2f644c279974c4c69541f9ae085c4e810ad500dd5358ab359ed0458f\" successfully"
Jul 6 23:18:55.920067 containerd[1504]: time="2025-07-06T23:18:55.920044829Z" level=info msg="StopPodSandbox for \"eee0137e2f644c279974c4c69541f9ae085c4e810ad500dd5358ab359ed0458f\" returns successfully"
Jul 6 23:18:55.923914 containerd[1504]: time="2025-07-06T23:18:55.923028685Z" level=info msg="RemovePodSandbox for \"eee0137e2f644c279974c4c69541f9ae085c4e810ad500dd5358ab359ed0458f\""
Jul 6 23:18:55.923914 containerd[1504]: time="2025-07-06T23:18:55.923069486Z" level=info msg="Forcibly stopping sandbox \"eee0137e2f644c279974c4c69541f9ae085c4e810ad500dd5358ab359ed0458f\""
Jul 6 23:18:55.923914 containerd[1504]: time="2025-07-06T23:18:55.923135967Z" level=info msg="TearDown network for sandbox \"eee0137e2f644c279974c4c69541f9ae085c4e810ad500dd5358ab359ed0458f\" successfully"
Jul 6 23:18:55.928500 containerd[1504]: time="2025-07-06T23:18:55.928453988Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"eee0137e2f644c279974c4c69541f9ae085c4e810ad500dd5358ab359ed0458f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jul 6 23:18:55.930305 containerd[1504]: time="2025-07-06T23:18:55.928948237Z" level=info msg="RemovePodSandbox \"eee0137e2f644c279974c4c69541f9ae085c4e810ad500dd5358ab359ed0458f\" returns successfully"
Jul 6 23:18:55.931157 containerd[1504]: time="2025-07-06T23:18:55.930981956Z" level=info msg="StopPodSandbox for \"e8f146d015dc94751a8bd9bbf1e14aa055896f2a1a8974a8bdb583c12ce57043\""
Jul 6 23:18:55.931157 containerd[1504]: time="2025-07-06T23:18:55.931081518Z" level=info msg="TearDown network for sandbox \"e8f146d015dc94751a8bd9bbf1e14aa055896f2a1a8974a8bdb583c12ce57043\" successfully"
Jul 6 23:18:55.931157 containerd[1504]: time="2025-07-06T23:18:55.931095918Z" level=info msg="StopPodSandbox for \"e8f146d015dc94751a8bd9bbf1e14aa055896f2a1a8974a8bdb583c12ce57043\" returns successfully"
Jul 6 23:18:55.934579 containerd[1504]: time="2025-07-06T23:18:55.933891771Z" level=info msg="RemovePodSandbox for \"e8f146d015dc94751a8bd9bbf1e14aa055896f2a1a8974a8bdb583c12ce57043\""
Jul 6 23:18:55.934579 containerd[1504]: time="2025-07-06T23:18:55.933931012Z" level=info msg="Forcibly stopping sandbox \"e8f146d015dc94751a8bd9bbf1e14aa055896f2a1a8974a8bdb583c12ce57043\""
Jul 6 23:18:55.934579 containerd[1504]: time="2025-07-06T23:18:55.933991573Z" level=info msg="TearDown network for sandbox \"e8f146d015dc94751a8bd9bbf1e14aa055896f2a1a8974a8bdb583c12ce57043\" successfully"
Jul 6 23:18:55.941310 containerd[1504]: time="2025-07-06T23:18:55.941151028Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e8f146d015dc94751a8bd9bbf1e14aa055896f2a1a8974a8bdb583c12ce57043\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jul 6 23:18:55.942193 containerd[1504]: time="2025-07-06T23:18:55.942065525Z" level=info msg="RemovePodSandbox \"e8f146d015dc94751a8bd9bbf1e14aa055896f2a1a8974a8bdb583c12ce57043\" returns successfully"
Jul 6 23:18:55.969464 systemd[1]: run-containerd-runc-k8s.io-1bb9bb2d8ff742759e8a41ce829ace21f65bbef471b600cf322f7f4c9751e8d5-runc.17FXKv.mount: Deactivated successfully.
Jul 6 23:18:58.127292 systemd[1]: run-containerd-runc-k8s.io-1bb9bb2d8ff742759e8a41ce829ace21f65bbef471b600cf322f7f4c9751e8d5-runc.ybSJkb.mount: Deactivated successfully.
Jul 6 23:18:58.376342 sshd[4822]: Connection closed by 139.178.89.65 port 59420
Jul 6 23:18:58.376865 sshd-session[4763]: pam_unix(sshd:session): session closed for user core
Jul 6 23:18:58.382821 systemd[1]: sshd@30-49.13.31.190:22-139.178.89.65:59420.service: Deactivated successfully.
Jul 6 23:18:58.387596 systemd[1]: session-23.scope: Deactivated successfully.
Jul 6 23:18:58.392778 systemd-logind[1483]: Session 23 logged out. Waiting for processes to exit.
Jul 6 23:18:58.395093 systemd-logind[1483]: Removed session 23.
Jul 6 23:19:13.679545 systemd[1]: cri-containerd-24d17caf70e162b35f9acefc9dd8871f2c86729c5141626999443213237d0b21.scope: Deactivated successfully.
Jul 6 23:19:13.679983 systemd[1]: cri-containerd-24d17caf70e162b35f9acefc9dd8871f2c86729c5141626999443213237d0b21.scope: Consumed 5.756s CPU time, 55M memory peak.
Jul 6 23:19:13.711095 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-24d17caf70e162b35f9acefc9dd8871f2c86729c5141626999443213237d0b21-rootfs.mount: Deactivated successfully.
Jul 6 23:19:13.718622 containerd[1504]: time="2025-07-06T23:19:13.718505171Z" level=info msg="shim disconnected" id=24d17caf70e162b35f9acefc9dd8871f2c86729c5141626999443213237d0b21 namespace=k8s.io
Jul 6 23:19:13.718622 containerd[1504]: time="2025-07-06T23:19:13.718625214Z" level=warning msg="cleaning up after shim disconnected" id=24d17caf70e162b35f9acefc9dd8871f2c86729c5141626999443213237d0b21 namespace=k8s.io
Jul 6 23:19:13.719313 containerd[1504]: time="2025-07-06T23:19:13.718639654Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 6 23:19:13.891306 kubelet[2701]: E0706 23:19:13.891059 2701 controller.go:195] "Failed to update lease" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:40636->10.0.0.2:2379: read: connection timed out"
Jul 6 23:19:13.898814 systemd[1]: cri-containerd-a7c6789518fdb3ca9479a5264eef02ef9cebba6e9869b5357f481334303a95ac.scope: Deactivated successfully.
Jul 6 23:19:13.899105 systemd[1]: cri-containerd-a7c6789518fdb3ca9479a5264eef02ef9cebba6e9869b5357f481334303a95ac.scope: Consumed 5.902s CPU time, 24.3M memory peak.
Jul 6 23:19:13.924382 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a7c6789518fdb3ca9479a5264eef02ef9cebba6e9869b5357f481334303a95ac-rootfs.mount: Deactivated successfully.
Jul 6 23:19:13.930430 containerd[1504]: time="2025-07-06T23:19:13.930234017Z" level=info msg="shim disconnected" id=a7c6789518fdb3ca9479a5264eef02ef9cebba6e9869b5357f481334303a95ac namespace=k8s.io
Jul 6 23:19:13.930430 containerd[1504]: time="2025-07-06T23:19:13.930344059Z" level=warning msg="cleaning up after shim disconnected" id=a7c6789518fdb3ca9479a5264eef02ef9cebba6e9869b5357f481334303a95ac namespace=k8s.io
Jul 6 23:19:13.930430 containerd[1504]: time="2025-07-06T23:19:13.930374620Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 6 23:19:14.001571 kubelet[2701]: I0706 23:19:13.999774 2701 scope.go:117] "RemoveContainer" containerID="a7c6789518fdb3ca9479a5264eef02ef9cebba6e9869b5357f481334303a95ac"
Jul 6 23:19:14.002530 containerd[1504]: time="2025-07-06T23:19:14.002328622Z" level=info msg="CreateContainer within sandbox \"61a56d8dc3ac0787b919f11fec100bb22f03378a8e3f1c395e68819a13d01da3\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Jul 6 23:19:14.004250 kubelet[2701]: I0706 23:19:14.004166 2701 scope.go:117] "RemoveContainer" containerID="24d17caf70e162b35f9acefc9dd8871f2c86729c5141626999443213237d0b21"
Jul 6 23:19:14.007053 containerd[1504]: time="2025-07-06T23:19:14.006891711Z" level=info msg="CreateContainer within sandbox \"31fdc8755858593293c877867fce14788048aaa46aaca998d01e28a355436a94\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Jul 6 23:19:14.021183 containerd[1504]: time="2025-07-06T23:19:14.021041787Z" level=info msg="CreateContainer within sandbox \"61a56d8dc3ac0787b919f11fec100bb22f03378a8e3f1c395e68819a13d01da3\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"82470add3452f122ded4e44231164e21da06450742b28d3b0ff74f94e69b2b45\""
Jul 6 23:19:14.023601 containerd[1504]: time="2025-07-06T23:19:14.021879603Z" level=info msg="StartContainer for \"82470add3452f122ded4e44231164e21da06450742b28d3b0ff74f94e69b2b45\""
Jul 6 23:19:14.037378 containerd[1504]: time="2025-07-06T23:19:14.037288944Z" level=info msg="CreateContainer within sandbox \"31fdc8755858593293c877867fce14788048aaa46aaca998d01e28a355436a94\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"6370319b32995527f82fe47331808af27aa96ba2c3776b9f884cab0e09a22b12\""
Jul 6 23:19:14.038799 containerd[1504]: time="2025-07-06T23:19:14.038759573Z" level=info msg="StartContainer for \"6370319b32995527f82fe47331808af27aa96ba2c3776b9f884cab0e09a22b12\""
Jul 6 23:19:14.061181 systemd[1]: Started cri-containerd-82470add3452f122ded4e44231164e21da06450742b28d3b0ff74f94e69b2b45.scope - libcontainer container 82470add3452f122ded4e44231164e21da06450742b28d3b0ff74f94e69b2b45.
Jul 6 23:19:14.075763 systemd[1]: Started cri-containerd-6370319b32995527f82fe47331808af27aa96ba2c3776b9f884cab0e09a22b12.scope - libcontainer container 6370319b32995527f82fe47331808af27aa96ba2c3776b9f884cab0e09a22b12.
Jul 6 23:19:14.113176 containerd[1504]: time="2025-07-06T23:19:14.113046902Z" level=info msg="StartContainer for \"82470add3452f122ded4e44231164e21da06450742b28d3b0ff74f94e69b2b45\" returns successfully"
Jul 6 23:19:14.132729 containerd[1504]: time="2025-07-06T23:19:14.132663805Z" level=info msg="StartContainer for \"6370319b32995527f82fe47331808af27aa96ba2c3776b9f884cab0e09a22b12\" returns successfully"
Jul 6 23:19:15.938892 kubelet[2701]: I0706 23:19:15.938776 2701 status_manager.go:890] "Failed to get status for pod" podUID="399a78aab121c00ba879ae339058e519" pod="kube-system/kube-scheduler-ci-4230-2-1-3-0a35d13a56" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:40574->10.0.0.2:2379: read: connection timed out"
Jul 6 23:19:17.410282 kubelet[2701]: E0706 23:19:17.410085 2701 event.go:359] "Server rejected event (will not retry!)" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:40464->10.0.0.2:2379: read: connection timed out" event="&Event{ObjectMeta:{kube-apiserver-ci-4230-2-1-3-0a35d13a56.184fccc9c1817f2c kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-ci-4230-2-1-3-0a35d13a56,UID:92f9feb38e2b75a82349814e7923f075,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Readiness probe failed: HTTP probe failed with statuscode: 500,Source:EventSource{Component:kubelet,Host:ci-4230-2-1-3-0a35d13a56,},FirstTimestamp:2025-07-06 23:19:06.977394476 +0000 UTC m=+371.208785002,LastTimestamp:2025-07-06 23:19:06.977394476 +0000 UTC m=+371.208785002,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4230-2-1-3-0a35d13a56,}"