Oct 30 23:55:17.986984 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] Oct 30 23:55:17.987022 kernel: Linux version 6.6.113-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.43 p3) 2.43.1) #1 SMP PREEMPT Thu Oct 30 22:19:25 -00 2025 Oct 30 23:55:17.987035 kernel: KASLR enabled Oct 30 23:55:17.987041 kernel: efi: EFI v2.7 by Ubuntu distribution of EDK II Oct 30 23:55:17.987047 kernel: efi: SMBIOS 3.0=0x139ed0000 MEMATTR=0x1390bb018 ACPI 2.0=0x136760018 RNG=0x13676e918 MEMRESERVE=0x136b41218 Oct 30 23:55:17.987053 kernel: random: crng init done Oct 30 23:55:17.987060 kernel: secureboot: Secure boot disabled Oct 30 23:55:17.987066 kernel: ACPI: Early table checksum verification disabled Oct 30 23:55:17.987072 kernel: ACPI: RSDP 0x0000000136760018 000024 (v02 BOCHS ) Oct 30 23:55:17.987081 kernel: ACPI: XSDT 0x000000013676FE98 00006C (v01 BOCHS BXPC 00000001 01000013) Oct 30 23:55:17.987088 kernel: ACPI: FACP 0x000000013676FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001) Oct 30 23:55:17.987093 kernel: ACPI: DSDT 0x0000000136767518 001468 (v02 BOCHS BXPC 00000001 BXPC 00000001) Oct 30 23:55:17.987099 kernel: ACPI: APIC 0x000000013676FC18 000108 (v04 BOCHS BXPC 00000001 BXPC 00000001) Oct 30 23:55:17.987106 kernel: ACPI: PPTT 0x000000013676FD98 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001) Oct 30 23:55:17.987113 kernel: ACPI: GTDT 0x000000013676D898 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001) Oct 30 23:55:17.987122 kernel: ACPI: MCFG 0x000000013676FF98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Oct 30 23:55:17.987129 kernel: ACPI: SPCR 0x000000013676E818 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001) Oct 30 23:55:17.987135 kernel: ACPI: DBG2 0x000000013676E898 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001) Oct 30 23:55:17.987141 kernel: ACPI: IORT 0x000000013676E418 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001) Oct 30 23:55:17.987148 kernel: ACPI: BGRT 0x000000013676E798 000038 (v01 INTEL EDK2 00000002 01000013) Oct 30 23:55:17.987154 kernel: ACPI: SPCR: console: pl011,mmio32,0x9000000,9600 Oct 30 23:55:17.987160 kernel: NUMA: Failed to initialise from firmware Oct 30 23:55:17.987166 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x0000000139ffffff] Oct 30 23:55:17.987173 kernel: NUMA: NODE_DATA [mem 0x13966e800-0x139673fff] Oct 30 23:55:17.987179 kernel: Zone ranges: Oct 30 23:55:17.987187 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff] Oct 30 23:55:17.987193 kernel: DMA32 empty Oct 30 23:55:17.987200 kernel: Normal [mem 0x0000000100000000-0x0000000139ffffff] Oct 30 23:55:17.987206 kernel: Movable zone start for each node Oct 30 23:55:17.987213 kernel: Early memory node ranges Oct 30 23:55:17.987294 kernel: node 0: [mem 0x0000000040000000-0x000000013666ffff] Oct 30 23:55:17.987304 kernel: node 0: [mem 0x0000000136670000-0x000000013667ffff] Oct 30 23:55:17.987310 kernel: node 0: [mem 0x0000000136680000-0x000000013676ffff] Oct 30 23:55:17.987317 kernel: node 0: [mem 0x0000000136770000-0x0000000136b3ffff] Oct 30 23:55:17.987323 kernel: node 0: [mem 0x0000000136b40000-0x0000000139e1ffff] Oct 30 23:55:17.987329 kernel: node 0: [mem 0x0000000139e20000-0x0000000139eaffff] Oct 30 23:55:17.987335 kernel: node 0: [mem 0x0000000139eb0000-0x0000000139ebffff] Oct 30 23:55:17.987349 kernel: node 0: [mem 0x0000000139ec0000-0x0000000139fdffff] Oct 30 23:55:17.987356 kernel: node 0: [mem 0x0000000139fe0000-0x0000000139ffffff] Oct 30 23:55:17.987363 kernel: Initmem setup node 
0 [mem 0x0000000040000000-0x0000000139ffffff] Oct 30 23:55:17.987374 kernel: On node 0, zone Normal: 24576 pages in unavailable ranges Oct 30 23:55:17.987382 kernel: psci: probing for conduit method from ACPI. Oct 30 23:55:17.987389 kernel: psci: PSCIv1.1 detected in firmware. Oct 30 23:55:17.987397 kernel: psci: Using standard PSCI v0.2 function IDs Oct 30 23:55:17.987404 kernel: psci: Trusted OS migration not required Oct 30 23:55:17.987411 kernel: psci: SMC Calling Convention v1.1 Oct 30 23:55:17.987417 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003) Oct 30 23:55:17.987424 kernel: percpu: Embedded 31 pages/cpu s86120 r8192 d32664 u126976 Oct 30 23:55:17.987431 kernel: pcpu-alloc: s86120 r8192 d32664 u126976 alloc=31*4096 Oct 30 23:55:17.987438 kernel: pcpu-alloc: [0] 0 [0] 1 Oct 30 23:55:17.987445 kernel: Detected PIPT I-cache on CPU0 Oct 30 23:55:17.987477 kernel: CPU features: detected: GIC system register CPU interface Oct 30 23:55:17.987484 kernel: CPU features: detected: Hardware dirty bit management Oct 30 23:55:17.987497 kernel: CPU features: detected: Spectre-v4 Oct 30 23:55:17.987504 kernel: CPU features: detected: Spectre-BHB Oct 30 23:55:17.987511 kernel: CPU features: kernel page table isolation forced ON by KASLR Oct 30 23:55:17.987517 kernel: CPU features: detected: Kernel page table isolation (KPTI) Oct 30 23:55:17.987524 kernel: CPU features: detected: ARM erratum 1418040 Oct 30 23:55:17.987531 kernel: CPU features: detected: SSBS not fully self-synchronizing Oct 30 23:55:17.987537 kernel: alternatives: applying boot alternatives Oct 30 23:55:17.987545 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=fa720f16dbb9986f34dd4402492c226087bd8d749299bbe02bbfafab6272d378 Oct 30 23:55:17.987553 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Oct 30 23:55:17.987560 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Oct 30 23:55:17.987566 kernel: Fallback order for Node 0: 0 Oct 30 23:55:17.987575 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1008000 Oct 30 23:55:17.987582 kernel: Policy zone: Normal Oct 30 23:55:17.987589 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Oct 30 23:55:17.987595 kernel: software IO TLB: area num 2. Oct 30 23:55:17.987601 kernel: software IO TLB: mapped [mem 0x00000000fbfff000-0x00000000fffff000] (64MB) Oct 30 23:55:17.987609 kernel: Memory: 3883764K/4096000K available (10368K kernel code, 2180K rwdata, 8104K rodata, 38400K init, 897K bss, 212236K reserved, 0K cma-reserved) Oct 30 23:55:17.987616 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Oct 30 23:55:17.987622 kernel: rcu: Preemptible hierarchical RCU implementation. Oct 30 23:55:17.987630 kernel: rcu: RCU event tracing is enabled. Oct 30 23:55:17.987638 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Oct 30 23:55:17.987644 kernel: Trampoline variant of Tasks RCU enabled. Oct 30 23:55:17.987651 kernel: Tracing variant of Tasks RCU enabled. Oct 30 23:55:17.987661 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
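The faked NUMA node above spans [mem 0x0000000040000000-0x0000000139ffffff], and the "Memory:" line in the same stretch of the log reports 4096000K of physical memory. The two figures agree; a minimal Python check using only the hex bounds from the log:

    # Span of the single (faked) NUMA node reported above.
    start = 0x0000000040000000
    end = 0x0000000139ffffff

    span_bytes = end - start + 1      # inclusive range
    span_kib = span_bytes // 1024

    print(hex(span_bytes))   # 0xfa000000
    print(span_kib)          # 4096000 -> matches "Memory: 3883764K/4096000K available"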
Oct 30 23:55:17.987700 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Oct 30 23:55:17.987707 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Oct 30 23:55:17.987713 kernel: GICv3: 256 SPIs implemented Oct 30 23:55:17.987720 kernel: GICv3: 0 Extended SPIs implemented Oct 30 23:55:17.987727 kernel: Root IRQ handler: gic_handle_irq Oct 30 23:55:17.987733 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI Oct 30 23:55:17.987740 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000 Oct 30 23:55:17.987747 kernel: ITS [mem 0x08080000-0x0809ffff] Oct 30 23:55:17.987754 kernel: ITS@0x0000000008080000: allocated 8192 Devices @1000c0000 (indirect, esz 8, psz 64K, shr 1) Oct 30 23:55:17.987761 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @1000d0000 (flat, esz 8, psz 64K, shr 1) Oct 30 23:55:17.987774 kernel: GICv3: using LPI property table @0x00000001000e0000 Oct 30 23:55:17.987781 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000001000f0000 Oct 30 23:55:17.987788 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Oct 30 23:55:17.987795 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Oct 30 23:55:17.987801 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). Oct 30 23:55:17.987808 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns Oct 30 23:55:17.987815 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns Oct 30 23:55:17.987822 kernel: Console: colour dummy device 80x25 Oct 30 23:55:17.987829 kernel: ACPI: Core revision 20230628 Oct 30 23:55:17.987837 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) Oct 30 23:55:17.987844 kernel: pid_max: default: 32768 minimum: 301 Oct 30 23:55:17.987853 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Oct 30 23:55:17.987860 kernel: landlock: Up and running. Oct 30 23:55:17.987867 kernel: SELinux: Initializing. Oct 30 23:55:17.987874 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Oct 30 23:55:17.987881 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Oct 30 23:55:17.987891 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Oct 30 23:55:17.987899 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Oct 30 23:55:17.987909 kernel: rcu: Hierarchical SRCU implementation. Oct 30 23:55:17.987917 kernel: rcu: Max phase no-delay instances is 400. Oct 30 23:55:17.987927 kernel: Platform MSI: ITS@0x8080000 domain created Oct 30 23:55:17.987936 kernel: PCI/MSI: ITS@0x8080000 domain created Oct 30 23:55:17.987943 kernel: Remapping and enabling EFI services. Oct 30 23:55:17.987950 kernel: smp: Bringing up secondary CPUs ... Oct 30 23:55:17.987957 kernel: Detected PIPT I-cache on CPU1 Oct 30 23:55:17.987965 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000 Oct 30 23:55:17.987972 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000100100000 Oct 30 23:55:17.987980 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Oct 30 23:55:17.987987 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] Oct 30 23:55:17.987995 kernel: smp: Brought up 1 node, 2 CPUs Oct 30 23:55:17.988002 kernel: SMP: Total of 2 processors activated. 
Oct 30 23:55:17.988017 kernel: CPU features: detected: 32-bit EL0 Support Oct 30 23:55:17.988027 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Oct 30 23:55:17.988034 kernel: CPU features: detected: Common not Private translations Oct 30 23:55:17.988042 kernel: CPU features: detected: CRC32 instructions Oct 30 23:55:17.988097 kernel: CPU features: detected: Enhanced Virtualization Traps Oct 30 23:55:17.988106 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Oct 30 23:55:17.988117 kernel: CPU features: detected: LSE atomic instructions Oct 30 23:55:17.988124 kernel: CPU features: detected: Privileged Access Never Oct 30 23:55:17.988132 kernel: CPU features: detected: RAS Extension Support Oct 30 23:55:17.988139 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS) Oct 30 23:55:17.988146 kernel: CPU: All CPU(s) started at EL1 Oct 30 23:55:17.988154 kernel: alternatives: applying system-wide alternatives Oct 30 23:55:17.988161 kernel: devtmpfs: initialized Oct 30 23:55:17.988169 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Oct 30 23:55:17.988176 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Oct 30 23:55:17.988186 kernel: pinctrl core: initialized pinctrl subsystem Oct 30 23:55:17.988193 kernel: SMBIOS 3.0.0 present. Oct 30 23:55:17.988201 kernel: DMI: Hetzner vServer/KVM Virtual Machine, BIOS 20171111 11/11/2017 Oct 30 23:55:17.988208 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Oct 30 23:55:17.988216 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Oct 30 23:55:17.988234 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Oct 30 23:55:17.988242 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Oct 30 23:55:17.988249 kernel: audit: initializing netlink subsys (disabled) Oct 30 23:55:17.988256 kernel: thermal_sys: Registered thermal governor 'step_wise' Oct 30 23:55:17.988267 kernel: audit: type=2000 audit(0.015:1): state=initialized audit_enabled=0 res=1 Oct 30 23:55:17.988274 kernel: cpuidle: using governor menu Oct 30 23:55:17.988282 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. 
Oct 30 23:55:17.988289 kernel: ASID allocator initialised with 32768 entries Oct 30 23:55:17.988296 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Oct 30 23:55:17.988303 kernel: Serial: AMBA PL011 UART driver Oct 30 23:55:17.988311 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL Oct 30 23:55:17.988319 kernel: Modules: 0 pages in range for non-PLT usage Oct 30 23:55:17.988327 kernel: Modules: 509248 pages in range for PLT usage Oct 30 23:55:17.988336 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Oct 30 23:55:17.988344 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Oct 30 23:55:17.988351 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Oct 30 23:55:17.988358 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Oct 30 23:55:17.988365 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Oct 30 23:55:17.988372 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Oct 30 23:55:17.988380 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Oct 30 23:55:17.988387 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Oct 30 23:55:17.988394 kernel: ACPI: Added _OSI(Module Device) Oct 30 23:55:17.988404 kernel: ACPI: Added _OSI(Processor Device) Oct 30 23:55:17.988411 kernel: ACPI: Added _OSI(Processor Aggregator Device) Oct 30 23:55:17.988419 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Oct 30 23:55:17.988426 kernel: ACPI: Interpreter enabled Oct 30 23:55:17.988433 kernel: ACPI: Using GIC for interrupt routing Oct 30 23:55:17.988440 kernel: ACPI: MCFG table detected, 1 entries Oct 30 23:55:17.990757 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA Oct 30 23:55:17.990819 kernel: printk: console [ttyAMA0] enabled Oct 30 23:55:17.990828 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Oct 30 23:55:17.991077 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Oct 30 23:55:17.991164 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] Oct 30 23:55:17.991263 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Oct 30 23:55:17.991336 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00 Oct 30 23:55:17.991401 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff] Oct 30 23:55:17.991411 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window] Oct 30 23:55:17.991420 kernel: PCI host bridge to bus 0000:00 Oct 30 23:55:17.991590 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window] Oct 30 23:55:17.991673 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window] Oct 30 23:55:17.991740 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window] Oct 30 23:55:17.991800 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Oct 30 23:55:17.991895 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 Oct 30 23:55:17.991982 kernel: pci 0000:00:01.0: [1af4:1050] type 00 class 0x038000 Oct 30 23:55:17.992063 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x11289000-0x11289fff] Oct 30 23:55:17.992135 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000600000-0x8000603fff 64bit pref] Oct 30 23:55:17.992216 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400 Oct 30 23:55:17.992379 kernel: pci 0000:00:02.0: reg 0x10: [mem 0x11288000-0x11288fff] Oct 30 
23:55:17.994423 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400 Oct 30 23:55:17.994639 kernel: pci 0000:00:02.1: reg 0x10: [mem 0x11287000-0x11287fff] Oct 30 23:55:17.994737 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400 Oct 30 23:55:17.994840 kernel: pci 0000:00:02.2: reg 0x10: [mem 0x11286000-0x11286fff] Oct 30 23:55:17.995045 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400 Oct 30 23:55:17.995141 kernel: pci 0000:00:02.3: reg 0x10: [mem 0x11285000-0x11285fff] Oct 30 23:55:17.995247 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400 Oct 30 23:55:17.995326 kernel: pci 0000:00:02.4: reg 0x10: [mem 0x11284000-0x11284fff] Oct 30 23:55:17.995418 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400 Oct 30 23:55:17.995516 kernel: pci 0000:00:02.5: reg 0x10: [mem 0x11283000-0x11283fff] Oct 30 23:55:17.995603 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400 Oct 30 23:55:17.995675 kernel: pci 0000:00:02.6: reg 0x10: [mem 0x11282000-0x11282fff] Oct 30 23:55:17.995770 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400 Oct 30 23:55:17.995841 kernel: pci 0000:00:02.7: reg 0x10: [mem 0x11281000-0x11281fff] Oct 30 23:55:17.995916 kernel: pci 0000:00:03.0: [1b36:000c] type 01 class 0x060400 Oct 30 23:55:17.995996 kernel: pci 0000:00:03.0: reg 0x10: [mem 0x11280000-0x11280fff] Oct 30 23:55:17.996079 kernel: pci 0000:00:04.0: [1b36:0002] type 00 class 0x070002 Oct 30 23:55:17.996150 kernel: pci 0000:00:04.0: reg 0x10: [io 0x0000-0x0007] Oct 30 23:55:17.996274 kernel: pci 0000:01:00.0: [1af4:1041] type 00 class 0x020000 Oct 30 23:55:17.996350 kernel: pci 0000:01:00.0: reg 0x14: [mem 0x11000000-0x11000fff] Oct 30 23:55:17.996420 kernel: pci 0000:01:00.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref] Oct 30 23:55:17.998735 kernel: pci 0000:01:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref] Oct 30 23:55:17.998912 kernel: pci 0000:02:00.0: [1b36:000d] type 00 class 0x0c0330 Oct 30 23:55:17.998989 kernel: pci 0000:02:00.0: reg 0x10: [mem 0x10e00000-0x10e03fff 64bit] Oct 30 23:55:17.999074 kernel: pci 0000:03:00.0: [1af4:1043] type 00 class 0x078000 Oct 30 23:55:17.999147 kernel: pci 0000:03:00.0: reg 0x14: [mem 0x10c00000-0x10c00fff] Oct 30 23:55:17.999277 kernel: pci 0000:03:00.0: reg 0x20: [mem 0x8000100000-0x8000103fff 64bit pref] Oct 30 23:55:17.999391 kernel: pci 0000:04:00.0: [1af4:1045] type 00 class 0x00ff00 Oct 30 23:55:17.999504 kernel: pci 0000:04:00.0: reg 0x20: [mem 0x8000200000-0x8000203fff 64bit pref] Oct 30 23:55:17.999605 kernel: pci 0000:05:00.0: [1af4:1044] type 00 class 0x00ff00 Oct 30 23:55:17.999724 kernel: pci 0000:05:00.0: reg 0x14: [mem 0x10800000-0x10800fff] Oct 30 23:55:17.999802 kernel: pci 0000:05:00.0: reg 0x20: [mem 0x8000300000-0x8000303fff 64bit pref] Oct 30 23:55:17.999891 kernel: pci 0000:06:00.0: [1af4:1048] type 00 class 0x010000 Oct 30 23:55:17.999964 kernel: pci 0000:06:00.0: reg 0x14: [mem 0x10600000-0x10600fff] Oct 30 23:55:18.000045 kernel: pci 0000:06:00.0: reg 0x20: [mem 0x8000400000-0x8000403fff 64bit pref] Oct 30 23:55:18.000136 kernel: pci 0000:07:00.0: [1af4:1041] type 00 class 0x020000 Oct 30 23:55:18.000209 kernel: pci 0000:07:00.0: reg 0x14: [mem 0x10400000-0x10400fff] Oct 30 23:55:18.000295 kernel: pci 0000:07:00.0: reg 0x20: [mem 0x8000500000-0x8000503fff 64bit pref] Oct 30 23:55:18.000371 kernel: pci 0000:07:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref] Oct 30 23:55:18.002566 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000 Oct 30 
23:55:18.002751 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 01] add_size 100000 add_align 100000 Oct 30 23:55:18.002839 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff] to [bus 01] add_size 100000 add_align 100000 Oct 30 23:55:18.002921 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000 Oct 30 23:55:18.004822 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000 Oct 30 23:55:18.004966 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x001fffff] to [bus 02] add_size 100000 add_align 100000 Oct 30 23:55:18.005048 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000 Oct 30 23:55:18.005119 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 03] add_size 100000 add_align 100000 Oct 30 23:55:18.005186 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff] to [bus 03] add_size 100000 add_align 100000 Oct 30 23:55:18.005310 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000 Oct 30 23:55:18.005385 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 04] add_size 100000 add_align 100000 Oct 30 23:55:18.005479 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000 Oct 30 23:55:18.005562 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000 Oct 30 23:55:18.005632 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 05] add_size 100000 add_align 100000 Oct 30 23:55:18.005703 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x001fffff] to [bus 05] add_size 100000 add_align 100000 Oct 30 23:55:18.005778 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000 Oct 30 23:55:18.005857 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 06] add_size 100000 add_align 100000 Oct 30 23:55:18.005925 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff] to [bus 06] add_size 100000 add_align 100000 Oct 30 23:55:18.006004 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000 Oct 30 23:55:18.006075 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 07] add_size 100000 add_align 100000 Oct 30 23:55:18.006144 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff] to [bus 07] add_size 100000 add_align 100000 Oct 30 23:55:18.006231 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000 Oct 30 23:55:18.006305 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 08] add_size 200000 add_align 100000 Oct 30 23:55:18.006373 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff] to [bus 08] add_size 200000 add_align 100000 Oct 30 23:55:18.007496 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000 Oct 30 23:55:18.007647 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 09] add_size 200000 add_align 100000 Oct 30 23:55:18.007717 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 09] add_size 200000 add_align 100000 Oct 30 23:55:18.007798 kernel: pci 0000:00:02.0: BAR 14: assigned [mem 0x10000000-0x101fffff] Oct 30 23:55:18.007876 kernel: pci 0000:00:02.0: BAR 15: 
assigned [mem 0x8000000000-0x80001fffff 64bit pref] Oct 30 23:55:18.007953 kernel: pci 0000:00:02.1: BAR 14: assigned [mem 0x10200000-0x103fffff] Oct 30 23:55:18.008022 kernel: pci 0000:00:02.1: BAR 15: assigned [mem 0x8000200000-0x80003fffff 64bit pref] Oct 30 23:55:18.008128 kernel: pci 0000:00:02.2: BAR 14: assigned [mem 0x10400000-0x105fffff] Oct 30 23:55:18.008209 kernel: pci 0000:00:02.2: BAR 15: assigned [mem 0x8000400000-0x80005fffff 64bit pref] Oct 30 23:55:18.008312 kernel: pci 0000:00:02.3: BAR 14: assigned [mem 0x10600000-0x107fffff] Oct 30 23:55:18.008382 kernel: pci 0000:00:02.3: BAR 15: assigned [mem 0x8000600000-0x80007fffff 64bit pref] Oct 30 23:55:18.009144 kernel: pci 0000:00:02.4: BAR 14: assigned [mem 0x10800000-0x109fffff] Oct 30 23:55:18.009386 kernel: pci 0000:00:02.4: BAR 15: assigned [mem 0x8000800000-0x80009fffff 64bit pref] Oct 30 23:55:18.009520 kernel: pci 0000:00:02.5: BAR 14: assigned [mem 0x10a00000-0x10bfffff] Oct 30 23:55:18.009606 kernel: pci 0000:00:02.5: BAR 15: assigned [mem 0x8000a00000-0x8000bfffff 64bit pref] Oct 30 23:55:18.009683 kernel: pci 0000:00:02.6: BAR 14: assigned [mem 0x10c00000-0x10dfffff] Oct 30 23:55:18.009754 kernel: pci 0000:00:02.6: BAR 15: assigned [mem 0x8000c00000-0x8000dfffff 64bit pref] Oct 30 23:55:18.009828 kernel: pci 0000:00:02.7: BAR 14: assigned [mem 0x10e00000-0x10ffffff] Oct 30 23:55:18.009899 kernel: pci 0000:00:02.7: BAR 15: assigned [mem 0x8000e00000-0x8000ffffff 64bit pref] Oct 30 23:55:18.009975 kernel: pci 0000:00:03.0: BAR 14: assigned [mem 0x11000000-0x111fffff] Oct 30 23:55:18.010050 kernel: pci 0000:00:03.0: BAR 15: assigned [mem 0x8001000000-0x80011fffff 64bit pref] Oct 30 23:55:18.010130 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8001200000-0x8001203fff 64bit pref] Oct 30 23:55:18.010202 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x11200000-0x11200fff] Oct 30 23:55:18.010302 kernel: pci 0000:00:02.0: BAR 0: assigned [mem 0x11201000-0x11201fff] Oct 30 23:55:18.010376 kernel: pci 0000:00:02.0: BAR 13: assigned [io 0x1000-0x1fff] Oct 30 23:55:18.010582 kernel: pci 0000:00:02.1: BAR 0: assigned [mem 0x11202000-0x11202fff] Oct 30 23:55:18.010671 kernel: pci 0000:00:02.1: BAR 13: assigned [io 0x2000-0x2fff] Oct 30 23:55:18.010750 kernel: pci 0000:00:02.2: BAR 0: assigned [mem 0x11203000-0x11203fff] Oct 30 23:55:18.010827 kernel: pci 0000:00:02.2: BAR 13: assigned [io 0x3000-0x3fff] Oct 30 23:55:18.010899 kernel: pci 0000:00:02.3: BAR 0: assigned [mem 0x11204000-0x11204fff] Oct 30 23:55:18.010967 kernel: pci 0000:00:02.3: BAR 13: assigned [io 0x4000-0x4fff] Oct 30 23:55:18.011037 kernel: pci 0000:00:02.4: BAR 0: assigned [mem 0x11205000-0x11205fff] Oct 30 23:55:18.011104 kernel: pci 0000:00:02.4: BAR 13: assigned [io 0x5000-0x5fff] Oct 30 23:55:18.011171 kernel: pci 0000:00:02.5: BAR 0: assigned [mem 0x11206000-0x11206fff] Oct 30 23:55:18.011256 kernel: pci 0000:00:02.5: BAR 13: assigned [io 0x6000-0x6fff] Oct 30 23:55:18.011331 kernel: pci 0000:00:02.6: BAR 0: assigned [mem 0x11207000-0x11207fff] Oct 30 23:55:18.011404 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x7000-0x7fff] Oct 30 23:55:18.011491 kernel: pci 0000:00:02.7: BAR 0: assigned [mem 0x11208000-0x11208fff] Oct 30 23:55:18.011561 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x8000-0x8fff] Oct 30 23:55:18.011633 kernel: pci 0000:00:03.0: BAR 0: assigned [mem 0x11209000-0x11209fff] Oct 30 23:55:18.011702 kernel: pci 0000:00:03.0: BAR 13: assigned [io 0x9000-0x9fff] Oct 30 23:55:18.011780 kernel: pci 0000:00:04.0: BAR 0: assigned [io 
0xa000-0xa007] Oct 30 23:55:18.011864 kernel: pci 0000:01:00.0: BAR 6: assigned [mem 0x10000000-0x1007ffff pref] Oct 30 23:55:18.011936 kernel: pci 0000:01:00.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref] Oct 30 23:55:18.012012 kernel: pci 0000:01:00.0: BAR 1: assigned [mem 0x10080000-0x10080fff] Oct 30 23:55:18.012083 kernel: pci 0000:00:02.0: PCI bridge to [bus 01] Oct 30 23:55:18.012152 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff] Oct 30 23:55:18.012271 kernel: pci 0000:00:02.0: bridge window [mem 0x10000000-0x101fffff] Oct 30 23:55:18.012364 kernel: pci 0000:00:02.0: bridge window [mem 0x8000000000-0x80001fffff 64bit pref] Oct 30 23:55:18.012474 kernel: pci 0000:02:00.0: BAR 0: assigned [mem 0x10200000-0x10203fff 64bit] Oct 30 23:55:18.012567 kernel: pci 0000:00:02.1: PCI bridge to [bus 02] Oct 30 23:55:18.014717 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff] Oct 30 23:55:18.014854 kernel: pci 0000:00:02.1: bridge window [mem 0x10200000-0x103fffff] Oct 30 23:55:18.014928 kernel: pci 0000:00:02.1: bridge window [mem 0x8000200000-0x80003fffff 64bit pref] Oct 30 23:55:18.015018 kernel: pci 0000:03:00.0: BAR 4: assigned [mem 0x8000400000-0x8000403fff 64bit pref] Oct 30 23:55:18.015091 kernel: pci 0000:03:00.0: BAR 1: assigned [mem 0x10400000-0x10400fff] Oct 30 23:55:18.015185 kernel: pci 0000:00:02.2: PCI bridge to [bus 03] Oct 30 23:55:18.015283 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff] Oct 30 23:55:18.015361 kernel: pci 0000:00:02.2: bridge window [mem 0x10400000-0x105fffff] Oct 30 23:55:18.015446 kernel: pci 0000:00:02.2: bridge window [mem 0x8000400000-0x80005fffff 64bit pref] Oct 30 23:55:18.015563 kernel: pci 0000:04:00.0: BAR 4: assigned [mem 0x8000600000-0x8000603fff 64bit pref] Oct 30 23:55:18.015644 kernel: pci 0000:00:02.3: PCI bridge to [bus 04] Oct 30 23:55:18.015714 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff] Oct 30 23:55:18.015781 kernel: pci 0000:00:02.3: bridge window [mem 0x10600000-0x107fffff] Oct 30 23:55:18.015864 kernel: pci 0000:00:02.3: bridge window [mem 0x8000600000-0x80007fffff 64bit pref] Oct 30 23:55:18.015952 kernel: pci 0000:05:00.0: BAR 4: assigned [mem 0x8000800000-0x8000803fff 64bit pref] Oct 30 23:55:18.016025 kernel: pci 0000:05:00.0: BAR 1: assigned [mem 0x10800000-0x10800fff] Oct 30 23:55:18.016098 kernel: pci 0000:00:02.4: PCI bridge to [bus 05] Oct 30 23:55:18.016166 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff] Oct 30 23:55:18.016250 kernel: pci 0000:00:02.4: bridge window [mem 0x10800000-0x109fffff] Oct 30 23:55:18.016322 kernel: pci 0000:00:02.4: bridge window [mem 0x8000800000-0x80009fffff 64bit pref] Oct 30 23:55:18.016405 kernel: pci 0000:06:00.0: BAR 4: assigned [mem 0x8000a00000-0x8000a03fff 64bit pref] Oct 30 23:55:18.016498 kernel: pci 0000:06:00.0: BAR 1: assigned [mem 0x10a00000-0x10a00fff] Oct 30 23:55:18.016580 kernel: pci 0000:00:02.5: PCI bridge to [bus 06] Oct 30 23:55:18.016650 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff] Oct 30 23:55:18.016717 kernel: pci 0000:00:02.5: bridge window [mem 0x10a00000-0x10bfffff] Oct 30 23:55:18.016785 kernel: pci 0000:00:02.5: bridge window [mem 0x8000a00000-0x8000bfffff 64bit pref] Oct 30 23:55:18.016985 kernel: pci 0000:07:00.0: BAR 6: assigned [mem 0x10c00000-0x10c7ffff pref] Oct 30 23:55:18.017065 kernel: pci 0000:07:00.0: BAR 4: assigned [mem 0x8000c00000-0x8000c03fff 64bit pref] Oct 30 23:55:18.017139 kernel: pci 0000:07:00.0: BAR 1: assigned [mem 0x10c80000-0x10c80fff] Oct 30 23:55:18.017299 
kernel: pci 0000:00:02.6: PCI bridge to [bus 07] Oct 30 23:55:18.019520 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff] Oct 30 23:55:18.019753 kernel: pci 0000:00:02.6: bridge window [mem 0x10c00000-0x10dfffff] Oct 30 23:55:18.019829 kernel: pci 0000:00:02.6: bridge window [mem 0x8000c00000-0x8000dfffff 64bit pref] Oct 30 23:55:18.019909 kernel: pci 0000:00:02.7: PCI bridge to [bus 08] Oct 30 23:55:18.019979 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff] Oct 30 23:55:18.020045 kernel: pci 0000:00:02.7: bridge window [mem 0x10e00000-0x10ffffff] Oct 30 23:55:18.020113 kernel: pci 0000:00:02.7: bridge window [mem 0x8000e00000-0x8000ffffff 64bit pref] Oct 30 23:55:18.020209 kernel: pci 0000:00:03.0: PCI bridge to [bus 09] Oct 30 23:55:18.020349 kernel: pci 0000:00:03.0: bridge window [io 0x9000-0x9fff] Oct 30 23:55:18.020422 kernel: pci 0000:00:03.0: bridge window [mem 0x11000000-0x111fffff] Oct 30 23:55:18.020522 kernel: pci 0000:00:03.0: bridge window [mem 0x8001000000-0x80011fffff 64bit pref] Oct 30 23:55:18.020611 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window] Oct 30 23:55:18.020675 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] Oct 30 23:55:18.020736 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window] Oct 30 23:55:18.020831 kernel: pci_bus 0000:01: resource 0 [io 0x1000-0x1fff] Oct 30 23:55:18.020900 kernel: pci_bus 0000:01: resource 1 [mem 0x10000000-0x101fffff] Oct 30 23:55:18.020964 kernel: pci_bus 0000:01: resource 2 [mem 0x8000000000-0x80001fffff 64bit pref] Oct 30 23:55:18.021051 kernel: pci_bus 0000:02: resource 0 [io 0x2000-0x2fff] Oct 30 23:55:18.021114 kernel: pci_bus 0000:02: resource 1 [mem 0x10200000-0x103fffff] Oct 30 23:55:18.021176 kernel: pci_bus 0000:02: resource 2 [mem 0x8000200000-0x80003fffff 64bit pref] Oct 30 23:55:18.021273 kernel: pci_bus 0000:03: resource 0 [io 0x3000-0x3fff] Oct 30 23:55:18.021352 kernel: pci_bus 0000:03: resource 1 [mem 0x10400000-0x105fffff] Oct 30 23:55:18.021433 kernel: pci_bus 0000:03: resource 2 [mem 0x8000400000-0x80005fffff 64bit pref] Oct 30 23:55:18.023765 kernel: pci_bus 0000:04: resource 0 [io 0x4000-0x4fff] Oct 30 23:55:18.023878 kernel: pci_bus 0000:04: resource 1 [mem 0x10600000-0x107fffff] Oct 30 23:55:18.023944 kernel: pci_bus 0000:04: resource 2 [mem 0x8000600000-0x80007fffff 64bit pref] Oct 30 23:55:18.024024 kernel: pci_bus 0000:05: resource 0 [io 0x5000-0x5fff] Oct 30 23:55:18.024106 kernel: pci_bus 0000:05: resource 1 [mem 0x10800000-0x109fffff] Oct 30 23:55:18.024173 kernel: pci_bus 0000:05: resource 2 [mem 0x8000800000-0x80009fffff 64bit pref] Oct 30 23:55:18.024344 kernel: pci_bus 0000:06: resource 0 [io 0x6000-0x6fff] Oct 30 23:55:18.024416 kernel: pci_bus 0000:06: resource 1 [mem 0x10a00000-0x10bfffff] Oct 30 23:55:18.026180 kernel: pci_bus 0000:06: resource 2 [mem 0x8000a00000-0x8000bfffff 64bit pref] Oct 30 23:55:18.026422 kernel: pci_bus 0000:07: resource 0 [io 0x7000-0x7fff] Oct 30 23:55:18.026637 kernel: pci_bus 0000:07: resource 1 [mem 0x10c00000-0x10dfffff] Oct 30 23:55:18.026705 kernel: pci_bus 0000:07: resource 2 [mem 0x8000c00000-0x8000dfffff 64bit pref] Oct 30 23:55:18.026784 kernel: pci_bus 0000:08: resource 0 [io 0x8000-0x8fff] Oct 30 23:55:18.026848 kernel: pci_bus 0000:08: resource 1 [mem 0x10e00000-0x10ffffff] Oct 30 23:55:18.026908 kernel: pci_bus 0000:08: resource 2 [mem 0x8000e00000-0x8000ffffff 64bit pref] Oct 30 23:55:18.027003 kernel: pci_bus 0000:09: resource 0 [io 0x9000-0x9fff] Oct 30 23:55:18.027070 kernel: 
pci_bus 0000:09: resource 1 [mem 0x11000000-0x111fffff] Oct 30 23:55:18.027133 kernel: pci_bus 0000:09: resource 2 [mem 0x8001000000-0x80011fffff 64bit pref] Oct 30 23:55:18.027143 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 Oct 30 23:55:18.027151 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 Oct 30 23:55:18.027159 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 Oct 30 23:55:18.027167 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 Oct 30 23:55:18.027174 kernel: iommu: Default domain type: Translated Oct 30 23:55:18.027185 kernel: iommu: DMA domain TLB invalidation policy: strict mode Oct 30 23:55:18.027193 kernel: efivars: Registered efivars operations Oct 30 23:55:18.027201 kernel: vgaarb: loaded Oct 30 23:55:18.027209 kernel: clocksource: Switched to clocksource arch_sys_counter Oct 30 23:55:18.027229 kernel: VFS: Disk quotas dquot_6.6.0 Oct 30 23:55:18.027240 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Oct 30 23:55:18.027247 kernel: pnp: PnP ACPI init Oct 30 23:55:18.027354 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved Oct 30 23:55:18.027366 kernel: pnp: PnP ACPI: found 1 devices Oct 30 23:55:18.027377 kernel: NET: Registered PF_INET protocol family Oct 30 23:55:18.027385 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Oct 30 23:55:18.027393 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Oct 30 23:55:18.027402 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Oct 30 23:55:18.027410 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Oct 30 23:55:18.027418 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Oct 30 23:55:18.027426 kernel: TCP: Hash tables configured (established 32768 bind 32768) Oct 30 23:55:18.027435 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Oct 30 23:55:18.027445 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Oct 30 23:55:18.027469 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Oct 30 23:55:18.027568 kernel: pci 0000:02:00.0: enabling device (0000 -> 0002) Oct 30 23:55:18.027582 kernel: PCI: CLS 0 bytes, default 64 Oct 30 23:55:18.027590 kernel: kvm [1]: HYP mode not available Oct 30 23:55:18.027597 kernel: Initialise system trusted keyrings Oct 30 23:55:18.027606 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Oct 30 23:55:18.027613 kernel: Key type asymmetric registered Oct 30 23:55:18.027622 kernel: Asymmetric key parser 'x509' registered Oct 30 23:55:18.027634 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Oct 30 23:55:18.027641 kernel: io scheduler mq-deadline registered Oct 30 23:55:18.027649 kernel: io scheduler kyber registered Oct 30 23:55:18.027657 kernel: io scheduler bfq registered Oct 30 23:55:18.027666 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37 Oct 30 23:55:18.027748 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 50 Oct 30 23:55:18.027820 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 50 Oct 30 23:55:18.027889 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Oct 30 23:55:18.027971 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 51 Oct 30 23:55:18.028051 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 51 Oct 30 23:55:18.028122 kernel: 
pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Oct 30 23:55:18.028200 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 52 Oct 30 23:55:18.028295 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 52 Oct 30 23:55:18.028367 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Oct 30 23:55:18.029551 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 53 Oct 30 23:55:18.029755 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 53 Oct 30 23:55:18.029830 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Oct 30 23:55:18.029912 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 54 Oct 30 23:55:18.029983 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 54 Oct 30 23:55:18.030055 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Oct 30 23:55:18.030149 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 55 Oct 30 23:55:18.030237 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 55 Oct 30 23:55:18.030311 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Oct 30 23:55:18.030392 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 56 Oct 30 23:55:18.030481 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 56 Oct 30 23:55:18.030553 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Oct 30 23:55:18.030642 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 57 Oct 30 23:55:18.030715 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 57 Oct 30 23:55:18.030788 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Oct 30 23:55:18.030799 kernel: ACPI: \_SB_.PCI0.GSI3: Enabled at IRQ 38 Oct 30 23:55:18.030879 kernel: pcieport 0000:00:03.0: PME: Signaling with IRQ 58 Oct 30 23:55:18.030949 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 58 Oct 30 23:55:18.031020 kernel: pcieport 0000:00:03.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Oct 30 23:55:18.031031 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Oct 30 23:55:18.031039 kernel: ACPI: button: Power Button [PWRB] Oct 30 23:55:18.031047 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Oct 30 23:55:18.031130 kernel: virtio-pci 0000:04:00.0: enabling device (0000 -> 0002) Oct 30 23:55:18.031211 kernel: virtio-pci 0000:07:00.0: enabling device (0000 -> 0002) Oct 30 23:55:18.031236 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Oct 30 23:55:18.031244 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35 Oct 30 23:55:18.031326 kernel: serial 0000:00:04.0: enabling device (0000 -> 0001) Oct 30 23:55:18.031343 kernel: 0000:00:04.0: ttyS0 at I/O 0xa000 (irq = 45, base_baud = 115200) is a 16550A Oct 30 23:55:18.031351 kernel: thunder_xcv, ver 1.0 Oct 30 23:55:18.031358 kernel: thunder_bgx, ver 1.0 Oct 30 23:55:18.031366 kernel: nicpf, ver 1.0 Oct 30 23:55:18.031373 kernel: nicvf, ver 1.0 Oct 30 23:55:18.031629 kernel: rtc-efi rtc-efi.0: registered as 
rtc0 Oct 30 23:55:18.031728 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-10-30T23:55:17 UTC (1761868517) Oct 30 23:55:18.031740 kernel: hid: raw HID events driver (C) Jiri Kosina Oct 30 23:55:18.031756 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available Oct 30 23:55:18.031765 kernel: watchdog: Delayed init of the lockup detector failed: -19 Oct 30 23:55:18.031772 kernel: watchdog: Hard watchdog permanently disabled Oct 30 23:55:18.031780 kernel: NET: Registered PF_INET6 protocol family Oct 30 23:55:18.031788 kernel: Segment Routing with IPv6 Oct 30 23:55:18.031796 kernel: In-situ OAM (IOAM) with IPv6 Oct 30 23:55:18.031804 kernel: NET: Registered PF_PACKET protocol family Oct 30 23:55:18.031812 kernel: Key type dns_resolver registered Oct 30 23:55:18.031823 kernel: registered taskstats version 1 Oct 30 23:55:18.031833 kernel: Loading compiled-in X.509 certificates Oct 30 23:55:18.031841 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.113-flatcar: aa1124814e36842ccda0ba5471ce49eeba345bb7' Oct 30 23:55:18.031849 kernel: Key type .fscrypt registered Oct 30 23:55:18.031857 kernel: Key type fscrypt-provisioning registered Oct 30 23:55:18.031865 kernel: ima: No TPM chip found, activating TPM-bypass! Oct 30 23:55:18.031873 kernel: ima: Allocated hash algorithm: sha1 Oct 30 23:55:18.031881 kernel: ima: No architecture policies found Oct 30 23:55:18.031889 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Oct 30 23:55:18.031896 kernel: clk: Disabling unused clocks Oct 30 23:55:18.031906 kernel: Freeing unused kernel memory: 38400K Oct 30 23:55:18.031914 kernel: Run /init as init process Oct 30 23:55:18.031923 kernel: with arguments: Oct 30 23:55:18.031931 kernel: /init Oct 30 23:55:18.031940 kernel: with environment: Oct 30 23:55:18.031947 kernel: HOME=/ Oct 30 23:55:18.031955 kernel: TERM=linux Oct 30 23:55:18.031965 systemd[1]: Successfully made /usr/ read-only. Oct 30 23:55:18.031977 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Oct 30 23:55:18.031989 systemd[1]: Detected virtualization kvm. Oct 30 23:55:18.031996 systemd[1]: Detected architecture arm64. Oct 30 23:55:18.032004 systemd[1]: Running in initrd. Oct 30 23:55:18.032013 systemd[1]: No hostname configured, using default hostname. Oct 30 23:55:18.032022 systemd[1]: Hostname set to . Oct 30 23:55:18.032031 systemd[1]: Initializing machine ID from VM UUID. Oct 30 23:55:18.032039 systemd[1]: Queued start job for default target initrd.target. Oct 30 23:55:18.032050 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Oct 30 23:55:18.032059 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Oct 30 23:55:18.032068 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Oct 30 23:55:18.032077 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Oct 30 23:55:18.032085 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... 
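The rtc-efi line above pairs the human-readable boot time with its Unix timestamp (2025-10-30T23:55:17 UTC, 1761868517). A quick way to confirm the two representations agree, using only the value printed in the log:

    from datetime import datetime, timezone

    # Timestamp taken from the "rtc-efi" line above.
    epoch_seconds = 1761868517

    utc = datetime.fromtimestamp(epoch_seconds, tz=timezone.utc)
    print(utc.isoformat())   # 2025-10-30T23:55:17+00:00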
Oct 30 23:55:18.032094 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Oct 30 23:55:18.032104 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Oct 30 23:55:18.032116 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Oct 30 23:55:18.032124 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Oct 30 23:55:18.032133 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Oct 30 23:55:18.032142 systemd[1]: Reached target paths.target - Path Units. Oct 30 23:55:18.032150 systemd[1]: Reached target slices.target - Slice Units. Oct 30 23:55:18.032159 systemd[1]: Reached target swap.target - Swaps. Oct 30 23:55:18.032168 systemd[1]: Reached target timers.target - Timer Units. Oct 30 23:55:18.032176 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Oct 30 23:55:18.032187 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Oct 30 23:55:18.032195 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Oct 30 23:55:18.032204 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Oct 30 23:55:18.032212 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Oct 30 23:55:18.032285 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Oct 30 23:55:18.032295 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Oct 30 23:55:18.032303 systemd[1]: Reached target sockets.target - Socket Units. Oct 30 23:55:18.032312 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Oct 30 23:55:18.032320 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Oct 30 23:55:18.032332 systemd[1]: Finished network-cleanup.service - Network Cleanup. Oct 30 23:55:18.032340 systemd[1]: Starting systemd-fsck-usr.service... Oct 30 23:55:18.032349 systemd[1]: Starting systemd-journald.service - Journal Service... Oct 30 23:55:18.032357 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Oct 30 23:55:18.032365 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Oct 30 23:55:18.032373 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Oct 30 23:55:18.032382 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Oct 30 23:55:18.032393 systemd[1]: Finished systemd-fsck-usr.service. Oct 30 23:55:18.032446 systemd-journald[237]: Collecting audit messages is disabled. Oct 30 23:55:18.032592 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Oct 30 23:55:18.032602 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Oct 30 23:55:18.032611 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Oct 30 23:55:18.032620 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Oct 30 23:55:18.032628 kernel: Bridge firewalling registered Oct 30 23:55:18.032636 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Oct 30 23:55:18.032644 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. 
Oct 30 23:55:18.032658 systemd-journald[237]: Journal started Oct 30 23:55:18.032679 systemd-journald[237]: Runtime Journal (/run/log/journal/5b616b9562d84f009cf51a71096b6b39) is 8M, max 76.6M, 68.6M free. Oct 30 23:55:18.003034 systemd-modules-load[238]: Inserted module 'overlay' Oct 30 23:55:18.026994 systemd-modules-load[238]: Inserted module 'br_netfilter' Oct 30 23:55:18.037440 systemd[1]: Started systemd-journald.service - Journal Service. Oct 30 23:55:18.045843 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Oct 30 23:55:18.060610 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Oct 30 23:55:18.065781 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Oct 30 23:55:18.072728 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Oct 30 23:55:18.082738 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Oct 30 23:55:18.087279 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Oct 30 23:55:18.101327 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Oct 30 23:55:18.109295 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Oct 30 23:55:18.117706 dracut-cmdline[268]: dracut-dracut-053 Oct 30 23:55:18.119934 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Oct 30 23:55:18.127081 dracut-cmdline[268]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=fa720f16dbb9986f34dd4402492c226087bd8d749299bbe02bbfafab6272d378 Oct 30 23:55:18.176379 systemd-resolved[279]: Positive Trust Anchors: Oct 30 23:55:18.176405 systemd-resolved[279]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Oct 30 23:55:18.176437 systemd-resolved[279]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Oct 30 23:55:18.186887 systemd-resolved[279]: Defaulting to hostname 'linux'. Oct 30 23:55:18.188658 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Oct 30 23:55:18.189485 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Oct 30 23:55:18.273503 kernel: SCSI subsystem initialized Oct 30 23:55:18.279546 kernel: Loading iSCSI transport class v2.0-870. Oct 30 23:55:18.289659 kernel: iscsi: registered transport (tcp) Oct 30 23:55:18.304527 kernel: iscsi: registered transport (qla4xxx) Oct 30 23:55:18.304652 kernel: QLogic iSCSI HBA Driver Oct 30 23:55:18.377621 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Oct 30 23:55:18.384800 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... 
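The dracut-cmdline lines above echo the full kernel command line (root=LABEL=ROOT, mount.usr=/dev/mapper/usr, verity.usrhash=..., flatcar.oem.id=hetzner, and so on). Boot-time tooling generally treats this as whitespace-separated tokens, each either a bare flag or key=value. A small illustrative parser, assuming a plain split is sufficient (real command lines can also contain quoted values, which this sketch ignores):

    def parse_cmdline(cmdline: str) -> dict:
        """Split a kernel command line into {key: value} pairs.

        Bare flags (no '=') are stored with value None. Quoting is not handled.
        """
        params = {}
        for token in cmdline.split():
            key, sep, value = token.partition("=")
            params[key] = value if sep else None
        return params

    # On a live system the same string is readable from /proc/cmdline.
    example = ("BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr "
               "root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected "
               "acpi=force flatcar.oem.id=hetzner")
    print(parse_cmdline(example)["flatcar.oem.id"])   # hetzner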
Oct 30 23:55:18.430494 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Oct 30 23:55:18.431582 kernel: device-mapper: uevent: version 1.0.3 Oct 30 23:55:18.431597 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Oct 30 23:55:18.493540 kernel: raid6: neonx8 gen() 15297 MB/s Oct 30 23:55:18.510536 kernel: raid6: neonx4 gen() 15449 MB/s Oct 30 23:55:18.527554 kernel: raid6: neonx2 gen() 13069 MB/s Oct 30 23:55:18.544553 kernel: raid6: neonx1 gen() 10287 MB/s Oct 30 23:55:18.561567 kernel: raid6: int64x8 gen() 6659 MB/s Oct 30 23:55:18.578517 kernel: raid6: int64x4 gen() 6723 MB/s Oct 30 23:55:18.595561 kernel: raid6: int64x2 gen() 5734 MB/s Oct 30 23:55:18.612537 kernel: raid6: int64x1 gen() 4867 MB/s Oct 30 23:55:18.612652 kernel: raid6: using algorithm neonx4 gen() 15449 MB/s Oct 30 23:55:18.629626 kernel: raid6: .... xor() 12114 MB/s, rmw enabled Oct 30 23:55:18.629760 kernel: raid6: using neon recovery algorithm Oct 30 23:55:18.636763 kernel: xor: measuring software checksum speed Oct 30 23:55:18.636901 kernel: 8regs : 21556 MB/sec Oct 30 23:55:18.636943 kernel: 32regs : 21704 MB/sec Oct 30 23:55:18.636961 kernel: arm64_neon : 27672 MB/sec Oct 30 23:55:18.637552 kernel: xor: using function: arm64_neon (27672 MB/sec) Oct 30 23:55:18.694553 kernel: Btrfs loaded, zoned=no, fsverity=no Oct 30 23:55:18.717719 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Oct 30 23:55:18.726894 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Oct 30 23:55:18.761338 systemd-udevd[457]: Using default interface naming scheme 'v255'. Oct 30 23:55:18.768936 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Oct 30 23:55:18.778085 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Oct 30 23:55:18.811151 dracut-pre-trigger[464]: rd.md=0: removing MD RAID activation Oct 30 23:55:18.866674 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Oct 30 23:55:18.875880 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Oct 30 23:55:18.953691 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Oct 30 23:55:18.964978 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Oct 30 23:55:19.004748 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Oct 30 23:55:19.008974 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Oct 30 23:55:19.010574 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Oct 30 23:55:19.011156 systemd[1]: Reached target remote-fs.target - Remote File Systems. Oct 30 23:55:19.020879 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Oct 30 23:55:19.067148 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. 
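The raid6 and xor lines above show the kernel timing every available implementation (neonx8, neonx4, ..., 8regs, 32regs, arm64_neon) and keeping whichever is fastest on this CPU, here neonx4 for generation and arm64_neon for xor. The sketch below illustrates the same pick-the-fastest-candidate pattern in Python with stand-in workloads; it is not the kernel's code, only the selection idea.

    import time

    def bench(fn, *args, repeat=5):
        """Return the best (lowest) wall-clock time over a few runs."""
        best = float("inf")
        for _ in range(repeat):
            t0 = time.perf_counter()
            fn(*args)
            best = min(best, time.perf_counter() - t0)
        return best

    # Two stand-in "implementations" of the same operation.
    def xor_loop(a, b):
        return bytes(x ^ y for x, y in zip(a, b))

    def xor_int(a, b):
        n = len(a)
        return (int.from_bytes(a, "little") ^ int.from_bytes(b, "little")).to_bytes(n, "little")

    buf_a = bytes(range(256)) * 256
    buf_b = bytes(reversed(range(256))) * 256
    candidates = {"xor_loop": xor_loop, "xor_int": xor_int}
    fastest = min(candidates, key=lambda name: bench(candidates[name], buf_a, buf_b))
    print("using", fastest)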
Oct 30 23:55:19.129614 kernel: scsi host0: Virtio SCSI HBA Oct 30 23:55:19.143922 kernel: scsi 0:0:0:0: CD-ROM QEMU QEMU CD-ROM 2.5+ PQ: 0 ANSI: 5 Oct 30 23:55:19.144120 kernel: scsi 0:0:0:1: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5 Oct 30 23:55:19.169749 kernel: ACPI: bus type USB registered Oct 30 23:55:19.169850 kernel: usbcore: registered new interface driver usbfs Oct 30 23:55:19.173855 kernel: usbcore: registered new interface driver hub Oct 30 23:55:19.175607 kernel: usbcore: registered new device driver usb Oct 30 23:55:19.198119 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Oct 30 23:55:19.199487 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Oct 30 23:55:19.203711 kernel: sr 0:0:0:0: Power-on or device reset occurred Oct 30 23:55:19.204032 kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 16x/50x cd/rw xa/form2 cdda tray Oct 30 23:55:19.204132 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Oct 30 23:55:19.201647 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Oct 30 23:55:19.206156 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Oct 30 23:55:19.208694 kernel: sr 0:0:0:0: Attached scsi CD-ROM sr0 Oct 30 23:55:19.206908 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Oct 30 23:55:19.210945 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Oct 30 23:55:19.219887 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Oct 30 23:55:19.223761 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Oct 30 23:55:19.226428 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller Oct 30 23:55:19.226764 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 1 Oct 30 23:55:19.230642 kernel: xhci_hcd 0000:02:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010 Oct 30 23:55:19.232891 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller Oct 30 23:55:19.233210 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 2 Oct 30 23:55:19.233322 kernel: xhci_hcd 0000:02:00.0: Host supports USB 3.0 SuperSpeed Oct 30 23:55:19.236698 kernel: hub 1-0:1.0: USB hub found Oct 30 23:55:19.237044 kernel: hub 1-0:1.0: 4 ports detected Oct 30 23:55:19.239501 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. Oct 30 23:55:19.240627 kernel: hub 2-0:1.0: USB hub found Oct 30 23:55:19.241474 kernel: hub 2-0:1.0: 4 ports detected Oct 30 23:55:19.244630 kernel: sd 0:0:0:1: Power-on or device reset occurred Oct 30 23:55:19.246749 kernel: sd 0:0:0:1: [sda] 80003072 512-byte logical blocks: (41.0 GB/38.1 GiB) Oct 30 23:55:19.247107 kernel: sd 0:0:0:1: [sda] Write Protect is off Oct 30 23:55:19.248522 kernel: sd 0:0:0:1: [sda] Mode Sense: 63 00 00 08 Oct 30 23:55:19.248856 kernel: sd 0:0:0:1: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Oct 30 23:55:19.258846 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Oct 30 23:55:19.258947 kernel: GPT:17805311 != 80003071 Oct 30 23:55:19.258958 kernel: GPT:Alternate GPT header not at the end of the disk. Oct 30 23:55:19.258969 kernel: GPT:17805311 != 80003071 Oct 30 23:55:19.259585 kernel: GPT: Use GNU Parted to correct GPT errors. 
Oct 30 23:55:19.259613 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Oct 30 23:55:19.261954 kernel: sd 0:0:0:1: [sda] Attached SCSI disk Oct 30 23:55:19.261980 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Oct 30 23:55:19.268908 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Oct 30 23:55:19.315107 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Oct 30 23:55:19.371497 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by (udev-worker) (530) Oct 30 23:55:19.371590 kernel: BTRFS: device fsid 19e89659-6f9c-4c3c-9ebb-614770f236c4 devid 1 transid 38 /dev/sda3 scanned by (udev-worker) (512) Oct 30 23:55:19.388758 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT. Oct 30 23:55:19.399849 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM. Oct 30 23:55:19.410893 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A. Oct 30 23:55:19.413076 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A. Oct 30 23:55:19.423997 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Oct 30 23:55:19.440902 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Oct 30 23:55:19.453808 disk-uuid[579]: Primary Header is updated. Oct 30 23:55:19.453808 disk-uuid[579]: Secondary Entries is updated. Oct 30 23:55:19.453808 disk-uuid[579]: Secondary Header is updated. Oct 30 23:55:19.461484 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Oct 30 23:55:19.478574 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd Oct 30 23:55:19.637507 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input1 Oct 30 23:55:19.639970 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:02:00.0-1/input0 Oct 30 23:55:19.640318 kernel: usbcore: registered new interface driver usbhid Oct 30 23:55:19.640336 kernel: usbhid: USB HID core driver Oct 30 23:55:19.725508 kernel: usb 1-2: new high-speed USB device number 3 using xhci_hcd Oct 30 23:55:19.862503 kernel: input: QEMU QEMU USB Keyboard as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-2/1-2:1.0/0003:0627:0001.0002/input/input2 Oct 30 23:55:19.915612 kernel: hid-generic 0003:0627:0001.0002: input,hidraw1: USB HID v1.11 Keyboard [QEMU QEMU USB Keyboard] on usb-0000:02:00.0-2/input0 Oct 30 23:55:20.482402 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Oct 30 23:55:20.485952 disk-uuid[580]: The operation has completed successfully. Oct 30 23:55:20.587007 systemd[1]: disk-uuid.service: Deactivated successfully. Oct 30 23:55:20.587161 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Oct 30 23:55:20.618887 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Oct 30 23:55:20.626812 sh[594]: Success Oct 30 23:55:20.645762 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Oct 30 23:55:20.766780 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Oct 30 23:55:20.770897 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Oct 30 23:55:20.771781 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Oct 30 23:55:20.814927 kernel: BTRFS info (device dm-0): first mount of filesystem 19e89659-6f9c-4c3c-9ebb-614770f236c4 Oct 30 23:55:20.815055 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Oct 30 23:55:20.815073 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Oct 30 23:55:20.815090 kernel: BTRFS info (device dm-0): disabling log replay at mount time Oct 30 23:55:20.815780 kernel: BTRFS info (device dm-0): using free space tree Oct 30 23:55:20.825531 kernel: BTRFS info (device dm-0): enabling ssd optimizations Oct 30 23:55:20.829004 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Oct 30 23:55:20.832657 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Oct 30 23:55:20.839915 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Oct 30 23:55:20.844540 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Oct 30 23:55:20.895504 kernel: BTRFS info (device sda6): first mount of filesystem 69797441-c23d-4add-9f10-ca7ed5585018 Oct 30 23:55:20.895618 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Oct 30 23:55:20.895649 kernel: BTRFS info (device sda6): using free space tree Oct 30 23:55:20.905865 kernel: BTRFS info (device sda6): enabling ssd optimizations Oct 30 23:55:20.906001 kernel: BTRFS info (device sda6): auto enabling async discard Oct 30 23:55:20.914630 kernel: BTRFS info (device sda6): last unmount of filesystem 69797441-c23d-4add-9f10-ca7ed5585018 Oct 30 23:55:20.923919 systemd[1]: Finished ignition-setup.service - Ignition (setup). Oct 30 23:55:20.933970 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Oct 30 23:55:21.035126 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Oct 30 23:55:21.044810 systemd[1]: Starting systemd-networkd.service - Network Configuration... Oct 30 23:55:21.084637 systemd-networkd[775]: lo: Link UP Oct 30 23:55:21.084648 systemd-networkd[775]: lo: Gained carrier Oct 30 23:55:21.087112 systemd-networkd[775]: Enumeration completed Oct 30 23:55:21.087802 systemd-networkd[775]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Oct 30 23:55:21.087806 systemd-networkd[775]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Oct 30 23:55:21.088237 systemd[1]: Started systemd-networkd.service - Network Configuration. Oct 30 23:55:21.088869 systemd-networkd[775]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Oct 30 23:55:21.088874 systemd-networkd[775]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Oct 30 23:55:21.089568 systemd-networkd[775]: eth0: Link UP Oct 30 23:55:21.089575 systemd-networkd[775]: eth0: Gained carrier Oct 30 23:55:21.089586 systemd-networkd[775]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Oct 30 23:55:21.091766 systemd[1]: Reached target network.target - Network. 
Oct 30 23:55:21.098951 systemd-networkd[775]: eth1: Link UP Oct 30 23:55:21.098960 systemd-networkd[775]: eth1: Gained carrier Oct 30 23:55:21.098976 systemd-networkd[775]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Oct 30 23:55:21.100245 ignition[687]: Ignition 2.20.0 Oct 30 23:55:21.100255 ignition[687]: Stage: fetch-offline Oct 30 23:55:21.100309 ignition[687]: no configs at "/usr/lib/ignition/base.d" Oct 30 23:55:21.100320 ignition[687]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Oct 30 23:55:21.104910 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Oct 30 23:55:21.100568 ignition[687]: parsed url from cmdline: "" Oct 30 23:55:21.100572 ignition[687]: no config URL provided Oct 30 23:55:21.100579 ignition[687]: reading system config file "/usr/lib/ignition/user.ign" Oct 30 23:55:21.100608 ignition[687]: no config at "/usr/lib/ignition/user.ign" Oct 30 23:55:21.100616 ignition[687]: failed to fetch config: resource requires networking Oct 30 23:55:21.100917 ignition[687]: Ignition finished successfully Oct 30 23:55:21.117201 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Oct 30 23:55:21.129592 systemd-networkd[775]: eth1: DHCPv4 address 10.0.0.3/32 acquired from 10.0.0.1 Oct 30 23:55:21.134197 ignition[783]: Ignition 2.20.0 Oct 30 23:55:21.134214 ignition[783]: Stage: fetch Oct 30 23:55:21.134519 ignition[783]: no configs at "/usr/lib/ignition/base.d" Oct 30 23:55:21.134533 ignition[783]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Oct 30 23:55:21.134647 ignition[783]: parsed url from cmdline: "" Oct 30 23:55:21.134650 ignition[783]: no config URL provided Oct 30 23:55:21.134655 ignition[783]: reading system config file "/usr/lib/ignition/user.ign" Oct 30 23:55:21.134665 ignition[783]: no config at "/usr/lib/ignition/user.ign" Oct 30 23:55:21.134766 ignition[783]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #1 Oct 30 23:55:21.135798 ignition[783]: GET error: Get "http://169.254.169.254/hetzner/v1/userdata": dial tcp 169.254.169.254:80: connect: network is unreachable Oct 30 23:55:21.156597 systemd-networkd[775]: eth0: DHCPv4 address 91.99.146.238/32, gateway 172.31.1.1 acquired from 172.31.1.1 Oct 30 23:55:21.337074 ignition[783]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #2 Oct 30 23:55:21.347518 ignition[783]: GET result: OK Oct 30 23:55:21.347637 ignition[783]: parsing config with SHA512: eadc708330302c800a0d96f891d75da5ae3b8a02303526f8e9d7670113bff2e2f216877e9ba040e4bc3c0d80e7943fc2beddce786e1c32fba78670ec2680aadb Oct 30 23:55:21.356402 unknown[783]: fetched base config from "system" Oct 30 23:55:21.357034 ignition[783]: fetch: fetch complete Oct 30 23:55:21.356416 unknown[783]: fetched base config from "system" Oct 30 23:55:21.357042 ignition[783]: fetch: fetch passed Oct 30 23:55:21.356423 unknown[783]: fetched user config from "hetzner" Oct 30 23:55:21.357109 ignition[783]: Ignition finished successfully Oct 30 23:55:21.360211 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Oct 30 23:55:21.374091 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Oct 30 23:55:21.396191 ignition[791]: Ignition 2.20.0 Oct 30 23:55:21.396206 ignition[791]: Stage: kargs Oct 30 23:55:21.396560 ignition[791]: no configs at "/usr/lib/ignition/base.d" Oct 30 23:55:21.399931 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). 
Oct 30 23:55:21.396576 ignition[791]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Oct 30 23:55:21.397883 ignition[791]: kargs: kargs passed Oct 30 23:55:21.397966 ignition[791]: Ignition finished successfully Oct 30 23:55:21.412959 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Oct 30 23:55:21.430743 ignition[798]: Ignition 2.20.0 Oct 30 23:55:21.430756 ignition[798]: Stage: disks Oct 30 23:55:21.431033 ignition[798]: no configs at "/usr/lib/ignition/base.d" Oct 30 23:55:21.431046 ignition[798]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Oct 30 23:55:21.432728 ignition[798]: disks: disks passed Oct 30 23:55:21.432832 ignition[798]: Ignition finished successfully Oct 30 23:55:21.434860 systemd[1]: Finished ignition-disks.service - Ignition (disks). Oct 30 23:55:21.437611 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Oct 30 23:55:21.439171 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Oct 30 23:55:21.440676 systemd[1]: Reached target local-fs.target - Local File Systems. Oct 30 23:55:21.441668 systemd[1]: Reached target sysinit.target - System Initialization. Oct 30 23:55:21.442202 systemd[1]: Reached target basic.target - Basic System. Oct 30 23:55:21.448869 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Oct 30 23:55:21.483921 systemd-fsck[807]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks Oct 30 23:55:21.491520 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Oct 30 23:55:21.499343 systemd[1]: Mounting sysroot.mount - /sysroot... Oct 30 23:55:21.579100 kernel: EXT4-fs (sda9): mounted filesystem 1621dc2d-b1da-466c-b741-5cdb5d67d58e r/w with ordered data mode. Quota mode: none. Oct 30 23:55:21.582517 systemd[1]: Mounted sysroot.mount - /sysroot. Oct 30 23:55:21.584046 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Oct 30 23:55:21.594768 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Oct 30 23:55:21.600759 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Oct 30 23:55:21.604881 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Oct 30 23:55:21.605715 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Oct 30 23:55:21.605771 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Oct 30 23:55:21.628686 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Oct 30 23:55:21.632514 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by mount (815) Oct 30 23:55:21.636082 kernel: BTRFS info (device sda6): first mount of filesystem 69797441-c23d-4add-9f10-ca7ed5585018 Oct 30 23:55:21.636208 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Oct 30 23:55:21.636943 kernel: BTRFS info (device sda6): using free space tree Oct 30 23:55:21.640896 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Oct 30 23:55:21.645832 kernel: BTRFS info (device sda6): enabling ssd optimizations Oct 30 23:55:21.645932 kernel: BTRFS info (device sda6): auto enabling async discard Oct 30 23:55:21.653228 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Oct 30 23:55:21.711026 coreos-metadata[817]: Oct 30 23:55:21.710 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/hostname: Attempt #1 Oct 30 23:55:21.716655 coreos-metadata[817]: Oct 30 23:55:21.715 INFO Fetch successful Oct 30 23:55:21.718528 coreos-metadata[817]: Oct 30 23:55:21.718 INFO wrote hostname ci-4230-2-4-n-ab7d00e960 to /sysroot/etc/hostname Oct 30 23:55:21.729908 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Oct 30 23:55:21.733700 initrd-setup-root[843]: cut: /sysroot/etc/passwd: No such file or directory Oct 30 23:55:21.743957 initrd-setup-root[850]: cut: /sysroot/etc/group: No such file or directory Oct 30 23:55:21.753657 initrd-setup-root[857]: cut: /sysroot/etc/shadow: No such file or directory Oct 30 23:55:21.761075 initrd-setup-root[864]: cut: /sysroot/etc/gshadow: No such file or directory Oct 30 23:55:21.911760 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Oct 30 23:55:21.921764 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Oct 30 23:55:21.925823 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Oct 30 23:55:21.939541 kernel: BTRFS info (device sda6): last unmount of filesystem 69797441-c23d-4add-9f10-ca7ed5585018 Oct 30 23:55:21.941283 systemd[1]: sysroot-oem.mount: Deactivated successfully. Oct 30 23:55:21.985592 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Oct 30 23:55:21.991356 ignition[932]: INFO : Ignition 2.20.0 Oct 30 23:55:21.991356 ignition[932]: INFO : Stage: mount Oct 30 23:55:21.992985 ignition[932]: INFO : no configs at "/usr/lib/ignition/base.d" Oct 30 23:55:21.992985 ignition[932]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Oct 30 23:55:21.996792 ignition[932]: INFO : mount: mount passed Oct 30 23:55:21.996792 ignition[932]: INFO : Ignition finished successfully Oct 30 23:55:22.001191 systemd[1]: Finished ignition-mount.service - Ignition (mount). Oct 30 23:55:22.009753 systemd[1]: Starting ignition-files.service - Ignition (files)... Oct 30 23:55:22.034919 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Oct 30 23:55:22.050172 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by mount (943) Oct 30 23:55:22.050272 kernel: BTRFS info (device sda6): first mount of filesystem 69797441-c23d-4add-9f10-ca7ed5585018 Oct 30 23:55:22.050294 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Oct 30 23:55:22.051465 kernel: BTRFS info (device sda6): using free space tree Oct 30 23:55:22.056315 kernel: BTRFS info (device sda6): enabling ssd optimizations Oct 30 23:55:22.056435 kernel: BTRFS info (device sda6): auto enabling async discard Oct 30 23:55:22.061377 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Oct 30 23:55:22.091111 ignition[960]: INFO : Ignition 2.20.0 Oct 30 23:55:22.092227 ignition[960]: INFO : Stage: files Oct 30 23:55:22.093602 ignition[960]: INFO : no configs at "/usr/lib/ignition/base.d" Oct 30 23:55:22.093602 ignition[960]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Oct 30 23:55:22.096525 ignition[960]: DEBUG : files: compiled without relabeling support, skipping Oct 30 23:55:22.099003 ignition[960]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Oct 30 23:55:22.099003 ignition[960]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Oct 30 23:55:22.104628 ignition[960]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Oct 30 23:55:22.105576 ignition[960]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Oct 30 23:55:22.105576 ignition[960]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Oct 30 23:55:22.105301 unknown[960]: wrote ssh authorized keys file for user: core Oct 30 23:55:22.110829 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz" Oct 30 23:55:22.110829 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz: attempt #1 Oct 30 23:55:22.238131 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Oct 30 23:55:22.326479 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz" Oct 30 23:55:22.328156 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Oct 30 23:55:22.328156 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1 Oct 30 23:55:22.448775 systemd-networkd[775]: eth0: Gained IPv6LL Oct 30 23:55:22.573786 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Oct 30 23:55:22.772375 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Oct 30 23:55:22.773672 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Oct 30 23:55:22.773672 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Oct 30 23:55:22.773672 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Oct 30 23:55:22.773672 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Oct 30 23:55:22.773672 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Oct 30 23:55:22.773672 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Oct 30 23:55:22.773672 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Oct 30 23:55:22.773672 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file 
"/sysroot/home/core/nfs-pvc.yaml" Oct 30 23:55:22.785971 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Oct 30 23:55:22.785971 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Oct 30 23:55:22.785971 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Oct 30 23:55:22.785971 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Oct 30 23:55:22.785971 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Oct 30 23:55:22.785971 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-arm64.raw: attempt #1 Oct 30 23:55:23.025804 systemd-networkd[775]: eth1: Gained IPv6LL Oct 30 23:55:23.122550 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Oct 30 23:55:24.423248 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Oct 30 23:55:24.423248 ignition[960]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Oct 30 23:55:24.423248 ignition[960]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Oct 30 23:55:24.423248 ignition[960]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Oct 30 23:55:24.423248 ignition[960]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Oct 30 23:55:24.423248 ignition[960]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" Oct 30 23:55:24.423248 ignition[960]: INFO : files: op(e): op(f): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Oct 30 23:55:24.423248 ignition[960]: INFO : files: op(e): op(f): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Oct 30 23:55:24.423248 ignition[960]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" Oct 30 23:55:24.423248 ignition[960]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service" Oct 30 23:55:24.423248 ignition[960]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service" Oct 30 23:55:24.439758 ignition[960]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json" Oct 30 23:55:24.439758 ignition[960]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json" Oct 30 23:55:24.439758 ignition[960]: INFO : files: files passed Oct 30 23:55:24.439758 ignition[960]: INFO : Ignition finished successfully Oct 30 23:55:24.428517 systemd[1]: Finished ignition-files.service - Ignition (files). 
Oct 30 23:55:24.440912 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Oct 30 23:55:24.447887 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Oct 30 23:55:24.452672 systemd[1]: ignition-quench.service: Deactivated successfully. Oct 30 23:55:24.453003 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Oct 30 23:55:24.476764 initrd-setup-root-after-ignition[993]: grep: Oct 30 23:55:24.476764 initrd-setup-root-after-ignition[989]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Oct 30 23:55:24.476764 initrd-setup-root-after-ignition[989]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Oct 30 23:55:24.479859 initrd-setup-root-after-ignition[993]: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Oct 30 23:55:24.484269 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Oct 30 23:55:24.486206 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Oct 30 23:55:24.493859 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Oct 30 23:55:24.541889 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Oct 30 23:55:24.542087 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Oct 30 23:55:24.543794 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Oct 30 23:55:24.545040 systemd[1]: Reached target initrd.target - Initrd Default Target. Oct 30 23:55:24.546497 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Oct 30 23:55:24.553257 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Oct 30 23:55:24.578816 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Oct 30 23:55:24.592321 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Oct 30 23:55:24.605303 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Oct 30 23:55:24.607390 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Oct 30 23:55:24.609343 systemd[1]: Stopped target timers.target - Timer Units. Oct 30 23:55:24.611031 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Oct 30 23:55:24.611423 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Oct 30 23:55:24.613836 systemd[1]: Stopped target initrd.target - Initrd Default Target. Oct 30 23:55:24.615014 systemd[1]: Stopped target basic.target - Basic System. Oct 30 23:55:24.616003 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Oct 30 23:55:24.617089 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Oct 30 23:55:24.618321 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Oct 30 23:55:24.619391 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Oct 30 23:55:24.620624 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Oct 30 23:55:24.621700 systemd[1]: Stopped target sysinit.target - System Initialization. Oct 30 23:55:24.622740 systemd[1]: Stopped target local-fs.target - Local File Systems. Oct 30 23:55:24.623754 systemd[1]: Stopped target swap.target - Swaps. Oct 30 23:55:24.624611 systemd[1]: dracut-pre-mount.service: Deactivated successfully. 
Oct 30 23:55:24.624839 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Oct 30 23:55:24.626156 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Oct 30 23:55:24.627368 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Oct 30 23:55:24.628501 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Oct 30 23:55:24.631585 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Oct 30 23:55:24.633576 systemd[1]: dracut-initqueue.service: Deactivated successfully. Oct 30 23:55:24.633793 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Oct 30 23:55:24.636839 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Oct 30 23:55:24.637882 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Oct 30 23:55:24.640984 systemd[1]: ignition-files.service: Deactivated successfully. Oct 30 23:55:24.641425 systemd[1]: Stopped ignition-files.service - Ignition (files). Oct 30 23:55:24.642794 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Oct 30 23:55:24.643014 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Oct 30 23:55:24.652004 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Oct 30 23:55:24.653185 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Oct 30 23:55:24.654760 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Oct 30 23:55:24.672231 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Oct 30 23:55:24.673051 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Oct 30 23:55:24.673384 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Oct 30 23:55:24.676976 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Oct 30 23:55:24.677196 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Oct 30 23:55:24.690089 systemd[1]: initrd-cleanup.service: Deactivated successfully. Oct 30 23:55:24.690250 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Oct 30 23:55:24.700991 systemd[1]: sysroot-boot.mount: Deactivated successfully. Oct 30 23:55:24.714640 systemd[1]: sysroot-boot.service: Deactivated successfully. Oct 30 23:55:24.715818 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Oct 30 23:55:24.721370 ignition[1013]: INFO : Ignition 2.20.0 Oct 30 23:55:24.721370 ignition[1013]: INFO : Stage: umount Oct 30 23:55:24.721370 ignition[1013]: INFO : no configs at "/usr/lib/ignition/base.d" Oct 30 23:55:24.721370 ignition[1013]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Oct 30 23:55:24.728363 ignition[1013]: INFO : umount: umount passed Oct 30 23:55:24.728363 ignition[1013]: INFO : Ignition finished successfully Oct 30 23:55:24.729489 systemd[1]: ignition-mount.service: Deactivated successfully. Oct 30 23:55:24.729666 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Oct 30 23:55:24.733497 systemd[1]: ignition-disks.service: Deactivated successfully. Oct 30 23:55:24.733678 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Oct 30 23:55:24.734706 systemd[1]: ignition-kargs.service: Deactivated successfully. Oct 30 23:55:24.734780 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Oct 30 23:55:24.735774 systemd[1]: ignition-fetch.service: Deactivated successfully. 
Oct 30 23:55:24.735837 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Oct 30 23:55:24.736896 systemd[1]: Stopped target network.target - Network. Oct 30 23:55:24.737887 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Oct 30 23:55:24.737980 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Oct 30 23:55:24.739229 systemd[1]: Stopped target paths.target - Path Units. Oct 30 23:55:24.740673 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Oct 30 23:55:24.745189 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Oct 30 23:55:24.746248 systemd[1]: Stopped target slices.target - Slice Units. Oct 30 23:55:24.749025 systemd[1]: Stopped target sockets.target - Socket Units. Oct 30 23:55:24.750030 systemd[1]: iscsid.socket: Deactivated successfully. Oct 30 23:55:24.750140 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Oct 30 23:55:24.751146 systemd[1]: iscsiuio.socket: Deactivated successfully. Oct 30 23:55:24.751188 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Oct 30 23:55:24.752277 systemd[1]: ignition-setup.service: Deactivated successfully. Oct 30 23:55:24.752397 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Oct 30 23:55:24.753426 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Oct 30 23:55:24.753499 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Oct 30 23:55:24.754683 systemd[1]: initrd-setup-root.service: Deactivated successfully. Oct 30 23:55:24.754756 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Oct 30 23:55:24.756527 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Oct 30 23:55:24.758009 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Oct 30 23:55:24.767536 systemd[1]: systemd-resolved.service: Deactivated successfully. Oct 30 23:55:24.767695 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Oct 30 23:55:24.774178 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Oct 30 23:55:24.774638 systemd[1]: systemd-networkd.service: Deactivated successfully. Oct 30 23:55:24.774776 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Oct 30 23:55:24.777814 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Oct 30 23:55:24.779908 systemd[1]: systemd-networkd.socket: Deactivated successfully. Oct 30 23:55:24.779994 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Oct 30 23:55:24.790498 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Oct 30 23:55:24.791581 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Oct 30 23:55:24.791705 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Oct 30 23:55:24.792757 systemd[1]: systemd-sysctl.service: Deactivated successfully. Oct 30 23:55:24.792845 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Oct 30 23:55:24.793596 systemd[1]: systemd-modules-load.service: Deactivated successfully. Oct 30 23:55:24.793658 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Oct 30 23:55:24.794344 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Oct 30 23:55:24.794417 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. 
Oct 30 23:55:24.797239 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Oct 30 23:55:24.802782 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Oct 30 23:55:24.802923 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Oct 30 23:55:24.830799 systemd[1]: systemd-udevd.service: Deactivated successfully. Oct 30 23:55:24.831181 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Oct 30 23:55:24.833040 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Oct 30 23:55:24.833165 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Oct 30 23:55:24.836125 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Oct 30 23:55:24.836195 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Oct 30 23:55:24.837128 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Oct 30 23:55:24.837214 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Oct 30 23:55:24.838996 systemd[1]: dracut-cmdline.service: Deactivated successfully. Oct 30 23:55:24.839085 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Oct 30 23:55:24.840780 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Oct 30 23:55:24.840871 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Oct 30 23:55:24.855025 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Oct 30 23:55:24.857385 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Oct 30 23:55:24.857556 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Oct 30 23:55:24.858720 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Oct 30 23:55:24.858790 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Oct 30 23:55:24.861932 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Oct 30 23:55:24.862036 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Oct 30 23:55:24.862660 systemd[1]: network-cleanup.service: Deactivated successfully. Oct 30 23:55:24.862823 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Oct 30 23:55:24.869226 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Oct 30 23:55:24.870536 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Oct 30 23:55:24.873056 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Oct 30 23:55:24.878820 systemd[1]: Starting initrd-switch-root.service - Switch Root... Oct 30 23:55:24.902712 systemd[1]: Switching root. Oct 30 23:55:24.945364 systemd-journald[237]: Received SIGTERM from PID 1 (systemd). 
Oct 30 23:55:24.947423 systemd-journald[237]: Journal stopped Oct 30 23:55:26.263873 kernel: SELinux: policy capability network_peer_controls=1 Oct 30 23:55:26.263986 kernel: SELinux: policy capability open_perms=1 Oct 30 23:55:26.264004 kernel: SELinux: policy capability extended_socket_class=1 Oct 30 23:55:26.264024 kernel: SELinux: policy capability always_check_network=0 Oct 30 23:55:26.264035 kernel: SELinux: policy capability cgroup_seclabel=1 Oct 30 23:55:26.264058 kernel: SELinux: policy capability nnp_nosuid_transition=1 Oct 30 23:55:26.264081 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Oct 30 23:55:26.264093 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Oct 30 23:55:26.264110 kernel: audit: type=1403 audit(1761868525.080:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Oct 30 23:55:26.264135 systemd[1]: Successfully loaded SELinux policy in 49.268ms. Oct 30 23:55:26.264169 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 17.532ms. Oct 30 23:55:26.264181 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Oct 30 23:55:26.264193 systemd[1]: Detected virtualization kvm. Oct 30 23:55:26.264204 systemd[1]: Detected architecture arm64. Oct 30 23:55:26.264215 systemd[1]: Detected first boot. Oct 30 23:55:26.264226 systemd[1]: Hostname set to . Oct 30 23:55:26.264239 systemd[1]: Initializing machine ID from VM UUID. Oct 30 23:55:26.264249 zram_generator::config[1058]: No configuration found. Oct 30 23:55:26.264262 kernel: NET: Registered PF_VSOCK protocol family Oct 30 23:55:26.264273 systemd[1]: Populated /etc with preset unit settings. Oct 30 23:55:26.264286 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Oct 30 23:55:26.264297 systemd[1]: initrd-switch-root.service: Deactivated successfully. Oct 30 23:55:26.264308 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Oct 30 23:55:26.264318 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Oct 30 23:55:26.264336 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Oct 30 23:55:26.264355 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Oct 30 23:55:26.264365 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Oct 30 23:55:26.264376 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Oct 30 23:55:26.264387 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Oct 30 23:55:26.264398 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Oct 30 23:55:26.264409 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Oct 30 23:55:26.264419 systemd[1]: Created slice user.slice - User and Session Slice. Oct 30 23:55:26.264429 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Oct 30 23:55:26.264443 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Oct 30 23:55:26.265773 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. 
Oct 30 23:55:26.265819 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Oct 30 23:55:26.265833 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Oct 30 23:55:26.265845 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Oct 30 23:55:26.265857 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Oct 30 23:55:26.265868 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Oct 30 23:55:26.265880 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Oct 30 23:55:26.265906 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Oct 30 23:55:26.265917 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Oct 30 23:55:26.265928 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Oct 30 23:55:26.265946 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Oct 30 23:55:26.265957 systemd[1]: Reached target remote-fs.target - Remote File Systems. Oct 30 23:55:26.265969 systemd[1]: Reached target slices.target - Slice Units. Oct 30 23:55:26.265979 systemd[1]: Reached target swap.target - Swaps. Oct 30 23:55:26.265990 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Oct 30 23:55:26.266003 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Oct 30 23:55:26.266013 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Oct 30 23:55:26.266024 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Oct 30 23:55:26.266039 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Oct 30 23:55:26.266054 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Oct 30 23:55:26.266087 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Oct 30 23:55:26.266104 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Oct 30 23:55:26.266115 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Oct 30 23:55:26.266125 systemd[1]: Mounting media.mount - External Media Directory... Oct 30 23:55:26.266136 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Oct 30 23:55:26.266147 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Oct 30 23:55:26.266157 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Oct 30 23:55:26.266169 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Oct 30 23:55:26.266180 systemd[1]: Reached target machines.target - Containers. Oct 30 23:55:26.266192 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Oct 30 23:55:26.266203 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Oct 30 23:55:26.266215 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Oct 30 23:55:26.266226 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Oct 30 23:55:26.266237 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Oct 30 23:55:26.266248 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... 
Oct 30 23:55:26.266258 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Oct 30 23:55:26.266268 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Oct 30 23:55:26.266279 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Oct 30 23:55:26.266292 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Oct 30 23:55:26.266303 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Oct 30 23:55:26.266314 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Oct 30 23:55:26.266325 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Oct 30 23:55:26.266336 systemd[1]: Stopped systemd-fsck-usr.service. Oct 30 23:55:26.266349 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Oct 30 23:55:26.266360 systemd[1]: Starting systemd-journald.service - Journal Service... Oct 30 23:55:26.266370 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Oct 30 23:55:26.266384 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Oct 30 23:55:26.266396 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Oct 30 23:55:26.266407 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Oct 30 23:55:26.266419 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Oct 30 23:55:26.266430 systemd[1]: verity-setup.service: Deactivated successfully. Oct 30 23:55:26.266443 systemd[1]: Stopped verity-setup.service. Oct 30 23:55:26.266474 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Oct 30 23:55:26.266487 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Oct 30 23:55:26.266498 kernel: fuse: init (API version 7.39) Oct 30 23:55:26.266510 kernel: loop: module loaded Oct 30 23:55:26.266520 systemd[1]: Mounted media.mount - External Media Directory. Oct 30 23:55:26.266594 systemd-journald[1126]: Collecting audit messages is disabled. Oct 30 23:55:26.266630 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Oct 30 23:55:26.266641 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Oct 30 23:55:26.266652 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Oct 30 23:55:26.266666 systemd-journald[1126]: Journal started Oct 30 23:55:26.266694 systemd-journald[1126]: Runtime Journal (/run/log/journal/5b616b9562d84f009cf51a71096b6b39) is 8M, max 76.6M, 68.6M free. Oct 30 23:55:25.913669 systemd[1]: Queued start job for default target multi-user.target. Oct 30 23:55:25.927696 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Oct 30 23:55:26.273971 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Oct 30 23:55:26.274054 systemd[1]: Started systemd-journald.service - Journal Service. Oct 30 23:55:25.928412 systemd[1]: systemd-journald.service: Deactivated successfully. Oct 30 23:55:26.281625 systemd[1]: modprobe@configfs.service: Deactivated successfully. Oct 30 23:55:26.281890 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. 
Oct 30 23:55:26.283523 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Oct 30 23:55:26.283788 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Oct 30 23:55:26.286298 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Oct 30 23:55:26.287245 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Oct 30 23:55:26.289431 systemd[1]: modprobe@fuse.service: Deactivated successfully. Oct 30 23:55:26.289997 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Oct 30 23:55:26.292326 systemd[1]: modprobe@loop.service: Deactivated successfully. Oct 30 23:55:26.293888 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Oct 30 23:55:26.295659 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Oct 30 23:55:26.297202 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Oct 30 23:55:26.326876 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Oct 30 23:55:26.340877 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Oct 30 23:55:26.343222 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Oct 30 23:55:26.343324 systemd[1]: Reached target local-fs.target - Local File Systems. Oct 30 23:55:26.349308 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Oct 30 23:55:26.353695 kernel: ACPI: bus type drm_connector registered Oct 30 23:55:26.357041 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Oct 30 23:55:26.362820 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Oct 30 23:55:26.363814 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Oct 30 23:55:26.375940 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Oct 30 23:55:26.385841 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Oct 30 23:55:26.388113 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Oct 30 23:55:26.393408 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Oct 30 23:55:26.394912 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Oct 30 23:55:26.397890 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Oct 30 23:55:26.403822 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Oct 30 23:55:26.407240 systemd[1]: modprobe@drm.service: Deactivated successfully. Oct 30 23:55:26.408369 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Oct 30 23:55:26.410616 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Oct 30 23:55:26.413219 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Oct 30 23:55:26.416292 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Oct 30 23:55:26.418722 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Oct 30 23:55:26.422336 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. 
Oct 30 23:55:26.431607 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Oct 30 23:55:26.454557 systemd[1]: Reached target network-pre.target - Preparation for Network. Oct 30 23:55:26.476724 systemd[1]: Starting systemd-sysusers.service - Create System Users... Oct 30 23:55:26.490555 systemd-journald[1126]: Time spent on flushing to /var/log/journal/5b616b9562d84f009cf51a71096b6b39 is 111.565ms for 1141 entries. Oct 30 23:55:26.490555 systemd-journald[1126]: System Journal (/var/log/journal/5b616b9562d84f009cf51a71096b6b39) is 8M, max 584.8M, 576.8M free. Oct 30 23:55:26.633562 systemd-journald[1126]: Received client request to flush runtime journal. Oct 30 23:55:26.633664 kernel: loop0: detected capacity change from 0 to 113512 Oct 30 23:55:26.633712 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Oct 30 23:55:26.519043 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Oct 30 23:55:26.522144 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Oct 30 23:55:26.541248 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Oct 30 23:55:26.594086 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Oct 30 23:55:26.639582 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Oct 30 23:55:26.647756 kernel: loop1: detected capacity change from 0 to 207008 Oct 30 23:55:26.650124 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Oct 30 23:55:26.662482 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Oct 30 23:55:26.676705 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Oct 30 23:55:26.680130 systemd[1]: Finished systemd-sysusers.service - Create System Users. Oct 30 23:55:26.699296 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Oct 30 23:55:26.725860 kernel: loop2: detected capacity change from 0 to 123192 Oct 30 23:55:26.729599 udevadm[1196]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Oct 30 23:55:26.781502 kernel: loop3: detected capacity change from 0 to 8 Oct 30 23:55:26.780085 systemd-tmpfiles[1199]: ACLs are not supported, ignoring. Oct 30 23:55:26.780107 systemd-tmpfiles[1199]: ACLs are not supported, ignoring. Oct 30 23:55:26.798151 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Oct 30 23:55:26.821509 kernel: loop4: detected capacity change from 0 to 113512 Oct 30 23:55:26.854911 kernel: loop5: detected capacity change from 0 to 207008 Oct 30 23:55:26.892504 kernel: loop6: detected capacity change from 0 to 123192 Oct 30 23:55:26.927559 kernel: loop7: detected capacity change from 0 to 8 Oct 30 23:55:26.928968 (sd-merge)[1205]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-hetzner'. Oct 30 23:55:26.931109 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Oct 30 23:55:26.932254 (sd-merge)[1205]: Merged extensions into '/usr'. Oct 30 23:55:26.943166 systemd[1]: Reload requested from client PID 1174 ('systemd-sysext') (unit systemd-sysext.service)... Oct 30 23:55:26.943195 systemd[1]: Reloading... Oct 30 23:55:27.071030 zram_generator::config[1233]: No configuration found. 
Oct 30 23:55:27.332864 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Oct 30 23:55:27.451634 ldconfig[1169]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Oct 30 23:55:27.452759 systemd[1]: Reloading finished in 508 ms. Oct 30 23:55:27.475706 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Oct 30 23:55:27.481528 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Oct 30 23:55:27.494873 systemd[1]: Starting ensure-sysext.service... Oct 30 23:55:27.511931 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Oct 30 23:55:27.546810 systemd[1]: Reload requested from client PID 1270 ('systemctl') (unit ensure-sysext.service)... Oct 30 23:55:27.546856 systemd[1]: Reloading... Oct 30 23:55:27.571758 systemd-tmpfiles[1271]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Oct 30 23:55:27.572029 systemd-tmpfiles[1271]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Oct 30 23:55:27.574296 systemd-tmpfiles[1271]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Oct 30 23:55:27.575903 systemd-tmpfiles[1271]: ACLs are not supported, ignoring. Oct 30 23:55:27.575974 systemd-tmpfiles[1271]: ACLs are not supported, ignoring. Oct 30 23:55:27.592376 systemd-tmpfiles[1271]: Detected autofs mount point /boot during canonicalization of boot. Oct 30 23:55:27.592398 systemd-tmpfiles[1271]: Skipping /boot Oct 30 23:55:27.618181 systemd-tmpfiles[1271]: Detected autofs mount point /boot during canonicalization of boot. Oct 30 23:55:27.618203 systemd-tmpfiles[1271]: Skipping /boot Oct 30 23:55:27.764537 zram_generator::config[1321]: No configuration found. Oct 30 23:55:27.866107 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Oct 30 23:55:27.933413 systemd[1]: Reloading finished in 386 ms. Oct 30 23:55:27.952657 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Oct 30 23:55:27.983298 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Oct 30 23:55:28.003181 systemd[1]: Starting audit-rules.service - Load Audit Rules... Oct 30 23:55:28.011128 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Oct 30 23:55:28.027496 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Oct 30 23:55:28.035709 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Oct 30 23:55:28.043244 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Oct 30 23:55:28.050729 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Oct 30 23:55:28.055118 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Oct 30 23:55:28.065443 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Oct 30 23:55:28.087140 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... 
Oct 30 23:55:28.095868 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Oct 30 23:55:28.098949 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Oct 30 23:55:28.099192 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Oct 30 23:55:28.108728 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Oct 30 23:55:28.112234 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Oct 30 23:55:28.112498 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Oct 30 23:55:28.112596 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Oct 30 23:55:28.120113 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Oct 30 23:55:28.128074 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Oct 30 23:55:28.130347 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Oct 30 23:55:28.130608 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Oct 30 23:55:28.134529 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Oct 30 23:55:28.145704 systemd[1]: modprobe@loop.service: Deactivated successfully. Oct 30 23:55:28.146744 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Oct 30 23:55:28.151499 systemd[1]: Finished ensure-sysext.service. Oct 30 23:55:28.158882 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Oct 30 23:55:28.159212 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Oct 30 23:55:28.163879 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Oct 30 23:55:28.166651 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Oct 30 23:55:28.191671 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Oct 30 23:55:28.191990 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Oct 30 23:55:28.203395 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Oct 30 23:55:28.217426 systemd[1]: Starting systemd-update-done.service - Update is Completed... Oct 30 23:55:28.217866 systemd-udevd[1349]: Using default interface naming scheme 'v255'. Oct 30 23:55:28.221246 systemd[1]: modprobe@drm.service: Deactivated successfully. Oct 30 23:55:28.221698 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Oct 30 23:55:28.234116 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Oct 30 23:55:28.286591 systemd[1]: Finished systemd-update-done.service - Update is Completed. 
Oct 30 23:55:28.303090 augenrules[1381]: No rules Oct 30 23:55:28.307166 systemd[1]: audit-rules.service: Deactivated successfully. Oct 30 23:55:28.307573 systemd[1]: Finished audit-rules.service - Load Audit Rules. Oct 30 23:55:28.318203 systemd[1]: Started systemd-userdbd.service - User Database Manager. Oct 30 23:55:28.328942 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Oct 30 23:55:28.330360 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Oct 30 23:55:28.350294 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Oct 30 23:55:28.362938 systemd[1]: Starting systemd-networkd.service - Network Configuration... Oct 30 23:55:28.469417 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Oct 30 23:55:28.474812 systemd[1]: Reached target time-set.target - System Time Set. Oct 30 23:55:28.535892 systemd-resolved[1348]: Positive Trust Anchors: Oct 30 23:55:28.538337 systemd-resolved[1348]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Oct 30 23:55:28.538386 systemd-resolved[1348]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Oct 30 23:55:28.550310 systemd-resolved[1348]: Using system hostname 'ci-4230-2-4-n-ab7d00e960'. Oct 30 23:55:28.554355 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Oct 30 23:55:28.555426 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Oct 30 23:55:28.578184 systemd-networkd[1396]: lo: Link UP Oct 30 23:55:28.578201 systemd-networkd[1396]: lo: Gained carrier Oct 30 23:55:28.579295 systemd-networkd[1396]: Enumeration completed Oct 30 23:55:28.579495 systemd[1]: Started systemd-networkd.service - Network Configuration. Oct 30 23:55:28.580354 systemd[1]: Reached target network.target - Network. Oct 30 23:55:28.588917 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Oct 30 23:55:28.592808 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Oct 30 23:55:28.613562 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Oct 30 23:55:28.638795 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Oct 30 23:55:28.726094 systemd-networkd[1396]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Oct 30 23:55:28.726111 systemd-networkd[1396]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. 
Oct 30 23:55:28.728714 systemd-networkd[1396]: eth1: Link UP Oct 30 23:55:28.728723 systemd-networkd[1396]: eth1: Gained carrier Oct 30 23:55:28.728757 systemd-networkd[1396]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Oct 30 23:55:28.756694 systemd-networkd[1396]: eth1: DHCPv4 address 10.0.0.3/32 acquired from 10.0.0.1 Oct 30 23:55:28.758786 systemd-timesyncd[1370]: Network configuration changed, trying to establish connection. Oct 30 23:55:28.795540 kernel: mousedev: PS/2 mouse device common for all mice Oct 30 23:55:28.828810 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (1407) Oct 30 23:55:28.845090 systemd-networkd[1396]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Oct 30 23:55:28.845109 systemd-networkd[1396]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Oct 30 23:55:28.847572 systemd-networkd[1396]: eth0: Link UP Oct 30 23:55:28.847587 systemd-networkd[1396]: eth0: Gained carrier Oct 30 23:55:28.847630 systemd-networkd[1396]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Oct 30 23:55:28.847997 systemd-timesyncd[1370]: Network configuration changed, trying to establish connection. Oct 30 23:55:28.870945 systemd-timesyncd[1370]: Network configuration changed, trying to establish connection. Oct 30 23:55:28.926374 systemd-networkd[1396]: eth0: DHCPv4 address 91.99.146.238/32, gateway 172.31.1.1 acquired from 172.31.1.1 Oct 30 23:55:28.928477 systemd-timesyncd[1370]: Network configuration changed, trying to establish connection. Oct 30 23:55:28.934360 systemd[1]: Condition check resulted in dev-virtio\x2dports-org.qemu.guest_agent.0.device - /dev/virtio-ports/org.qemu.guest_agent.0 being skipped. Oct 30 23:55:28.934656 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Oct 30 23:55:28.948939 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Oct 30 23:55:28.954243 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Oct 30 23:55:28.958842 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Oct 30 23:55:28.959685 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Oct 30 23:55:28.959747 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Oct 30 23:55:28.959781 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Oct 30 23:55:28.960382 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Oct 30 23:55:28.961867 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Oct 30 23:55:29.005965 systemd[1]: modprobe@loop.service: Deactivated successfully. Oct 30 23:55:29.006287 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Oct 30 23:55:29.010287 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. 
Oct 30 23:55:29.010864 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Oct 30 23:55:29.020285 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Oct 30 23:55:29.020394 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Oct 30 23:55:29.040910 kernel: [drm] pci: virtio-gpu-pci detected at 0000:00:01.0 Oct 30 23:55:29.041166 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Oct 30 23:55:29.041188 kernel: [drm] features: -context_init Oct 30 23:55:29.052501 kernel: [drm] number of scanouts: 1 Oct 30 23:55:29.052663 kernel: [drm] number of cap sets: 0 Oct 30 23:55:29.054560 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:01.0 on minor 0 Oct 30 23:55:29.062624 kernel: Console: switching to colour frame buffer device 160x50 Oct 30 23:55:29.062949 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Oct 30 23:55:29.083406 kernel: virtio-pci 0000:00:01.0: [drm] fb0: virtio_gpudrmfb frame buffer device Oct 30 23:55:29.093275 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Oct 30 23:55:29.114236 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Oct 30 23:55:29.118117 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Oct 30 23:55:29.119426 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Oct 30 23:55:29.122340 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Oct 30 23:55:29.132721 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Oct 30 23:55:29.142550 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Oct 30 23:55:29.201718 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Oct 30 23:55:29.208947 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Oct 30 23:55:29.217987 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Oct 30 23:55:29.234994 lvm[1463]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Oct 30 23:55:29.271573 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Oct 30 23:55:29.274768 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Oct 30 23:55:29.275789 systemd[1]: Reached target sysinit.target - System Initialization. Oct 30 23:55:29.276633 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Oct 30 23:55:29.277474 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Oct 30 23:55:29.278627 systemd[1]: Started logrotate.timer - Daily rotation of log files. Oct 30 23:55:29.279637 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Oct 30 23:55:29.280400 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Oct 30 23:55:29.281281 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Oct 30 23:55:29.281326 systemd[1]: Reached target paths.target - Path Units. 
Oct 30 23:55:29.281930 systemd[1]: Reached target timers.target - Timer Units. Oct 30 23:55:29.284992 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Oct 30 23:55:29.288916 systemd[1]: Starting docker.socket - Docker Socket for the API... Oct 30 23:55:29.293824 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Oct 30 23:55:29.295304 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Oct 30 23:55:29.296273 systemd[1]: Reached target ssh-access.target - SSH Access Available. Oct 30 23:55:29.309501 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Oct 30 23:55:29.311285 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Oct 30 23:55:29.331901 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Oct 30 23:55:29.335721 systemd[1]: Listening on docker.socket - Docker Socket for the API. Oct 30 23:55:29.336933 systemd[1]: Reached target sockets.target - Socket Units. Oct 30 23:55:29.337706 systemd[1]: Reached target basic.target - Basic System. Oct 30 23:55:29.338271 lvm[1467]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Oct 30 23:55:29.338309 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Oct 30 23:55:29.338339 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Oct 30 23:55:29.345931 systemd[1]: Starting containerd.service - containerd container runtime... Oct 30 23:55:29.363283 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Oct 30 23:55:29.368136 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Oct 30 23:55:29.371788 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Oct 30 23:55:29.385797 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Oct 30 23:55:29.386543 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Oct 30 23:55:29.388758 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Oct 30 23:55:29.395736 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Oct 30 23:55:29.399608 systemd[1]: Started qemu-guest-agent.service - QEMU Guest Agent. Oct 30 23:55:29.410571 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Oct 30 23:55:29.442791 jq[1473]: false Oct 30 23:55:29.443183 coreos-metadata[1469]: Oct 30 23:55:29.419 INFO Fetching http://169.254.169.254/hetzner/v1/metadata: Attempt #1 Oct 30 23:55:29.443183 coreos-metadata[1469]: Oct 30 23:55:29.422 INFO Fetch successful Oct 30 23:55:29.443183 coreos-metadata[1469]: Oct 30 23:55:29.423 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/private-networks: Attempt #1 Oct 30 23:55:29.443183 coreos-metadata[1469]: Oct 30 23:55:29.425 INFO Fetch successful Oct 30 23:55:29.420100 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Oct 30 23:55:29.437904 systemd[1]: Starting systemd-logind.service - User Login Management... Oct 30 23:55:29.440028 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). 
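[Annotation, not part of the captured journal] The coreos-metadata entries above show the Flatcar metadata agent reading the Hetzner instance metadata service at the link-local address 169.254.169.254. The same endpoints can be queried by hand from inside the instance when debugging provisioning; the URLs below are taken from the log lines above, and the availability of curl in the image is an assumption:

    curl -s http://169.254.169.254/hetzner/v1/metadata
    curl -s http://169.254.169.254/hetzner/v1/metadata/private-networks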
Oct 30 23:55:29.441956 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Oct 30 23:55:29.445838 systemd[1]: Starting update-engine.service - Update Engine... Oct 30 23:55:29.459703 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Oct 30 23:55:29.465613 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Oct 30 23:55:29.478246 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Oct 30 23:55:29.479633 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Oct 30 23:55:29.502320 dbus-daemon[1472]: [system] SELinux support is enabled Oct 30 23:55:29.503567 systemd[1]: Started dbus.service - D-Bus System Message Bus. Oct 30 23:55:29.512756 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Oct 30 23:55:29.515158 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Oct 30 23:55:29.518000 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Oct 30 23:55:29.518091 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Oct 30 23:55:29.522305 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Oct 30 23:55:29.525995 jq[1482]: true Oct 30 23:55:29.522342 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Oct 30 23:55:29.593914 (ntainerd)[1497]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Oct 30 23:55:29.597433 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Oct 30 23:55:29.605633 extend-filesystems[1474]: Found loop4 Oct 30 23:55:29.605633 extend-filesystems[1474]: Found loop5 Oct 30 23:55:29.605633 extend-filesystems[1474]: Found loop6 Oct 30 23:55:29.605633 extend-filesystems[1474]: Found loop7 Oct 30 23:55:29.605633 extend-filesystems[1474]: Found sda Oct 30 23:55:29.605633 extend-filesystems[1474]: Found sda1 Oct 30 23:55:29.605633 extend-filesystems[1474]: Found sda2 Oct 30 23:55:29.605633 extend-filesystems[1474]: Found sda3 Oct 30 23:55:29.605633 extend-filesystems[1474]: Found usr Oct 30 23:55:29.605633 extend-filesystems[1474]: Found sda4 Oct 30 23:55:29.605633 extend-filesystems[1474]: Found sda6 Oct 30 23:55:29.605633 extend-filesystems[1474]: Found sda7 Oct 30 23:55:29.605633 extend-filesystems[1474]: Found sda9 Oct 30 23:55:29.605633 extend-filesystems[1474]: Checking size of /dev/sda9 Oct 30 23:55:29.663729 tar[1495]: linux-arm64/LICENSE Oct 30 23:55:29.663729 tar[1495]: linux-arm64/helm Oct 30 23:55:29.610390 systemd[1]: motdgen.service: Deactivated successfully. Oct 30 23:55:29.664299 jq[1496]: true Oct 30 23:55:29.613879 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Oct 30 23:55:29.717484 update_engine[1481]: I20251030 23:55:29.714849 1481 main.cc:92] Flatcar Update Engine starting Oct 30 23:55:29.733681 systemd[1]: Started update-engine.service - Update Engine. 
Oct 30 23:55:29.740210 update_engine[1481]: I20251030 23:55:29.739076 1481 update_check_scheduler.cc:74] Next update check in 3m21s Oct 30 23:55:29.744764 extend-filesystems[1474]: Resized partition /dev/sda9 Oct 30 23:55:29.752834 systemd[1]: Started locksmithd.service - Cluster reboot manager. Oct 30 23:55:29.759783 systemd-logind[1480]: New seat seat0. Oct 30 23:55:29.775073 extend-filesystems[1528]: resize2fs 1.47.1 (20-May-2024) Oct 30 23:55:29.773650 systemd-logind[1480]: Watching system buttons on /dev/input/event0 (Power Button) Oct 30 23:55:29.773674 systemd-logind[1480]: Watching system buttons on /dev/input/event2 (QEMU QEMU USB Keyboard) Oct 30 23:55:29.774100 systemd[1]: Started systemd-logind.service - User Login Management. Oct 30 23:55:29.787554 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Oct 30 23:55:29.791383 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Oct 30 23:55:29.798867 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 9393147 blocks Oct 30 23:55:29.872776 systemd-networkd[1396]: eth1: Gained IPv6LL Oct 30 23:55:29.877834 systemd-timesyncd[1370]: Network configuration changed, trying to establish connection. Oct 30 23:55:29.889198 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Oct 30 23:55:29.898194 systemd[1]: Reached target network-online.target - Network is Online. Oct 30 23:55:29.912952 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 30 23:55:29.929089 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Oct 30 23:55:30.029908 bash[1544]: Updated "/home/core/.ssh/authorized_keys" Oct 30 23:55:30.051936 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (1417) Oct 30 23:55:30.058145 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Oct 30 23:55:30.080873 systemd[1]: Starting sshkeys.service... Oct 30 23:55:30.128552 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Oct 30 23:55:30.152555 containerd[1497]: time="2025-10-30T23:55:30.152121320Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Oct 30 23:55:30.204962 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Oct 30 23:55:30.241253 kernel: EXT4-fs (sda9): resized filesystem to 9393147 Oct 30 23:55:30.256678 systemd-networkd[1396]: eth0: Gained IPv6LL Oct 30 23:55:30.258635 systemd-timesyncd[1370]: Network configuration changed, trying to establish connection. Oct 30 23:55:30.267427 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Oct 30 23:55:30.298759 containerd[1497]: time="2025-10-30T23:55:30.297075360Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Oct 30 23:55:30.304362 containerd[1497]: time="2025-10-30T23:55:30.303875400Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.113-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Oct 30 23:55:30.304362 containerd[1497]: time="2025-10-30T23:55:30.303956120Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." 
type=io.containerd.event.v1 Oct 30 23:55:30.304362 containerd[1497]: time="2025-10-30T23:55:30.304051840Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Oct 30 23:55:30.304362 containerd[1497]: time="2025-10-30T23:55:30.304299920Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Oct 30 23:55:30.304362 containerd[1497]: time="2025-10-30T23:55:30.304321640Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Oct 30 23:55:30.304731 containerd[1497]: time="2025-10-30T23:55:30.304400600Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Oct 30 23:55:30.304731 containerd[1497]: time="2025-10-30T23:55:30.304418080Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Oct 30 23:55:30.309342 extend-filesystems[1528]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Oct 30 23:55:30.309342 extend-filesystems[1528]: old_desc_blocks = 1, new_desc_blocks = 5 Oct 30 23:55:30.309342 extend-filesystems[1528]: The filesystem on /dev/sda9 is now 9393147 (4k) blocks long. Oct 30 23:55:30.327978 extend-filesystems[1474]: Resized filesystem in /dev/sda9 Oct 30 23:55:30.327978 extend-filesystems[1474]: Found sr0 Oct 30 23:55:30.334706 containerd[1497]: time="2025-10-30T23:55:30.318154720Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Oct 30 23:55:30.334706 containerd[1497]: time="2025-10-30T23:55:30.318218480Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Oct 30 23:55:30.334706 containerd[1497]: time="2025-10-30T23:55:30.318243960Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Oct 30 23:55:30.334706 containerd[1497]: time="2025-10-30T23:55:30.318260840Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Oct 30 23:55:30.334706 containerd[1497]: time="2025-10-30T23:55:30.322132120Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Oct 30 23:55:30.334706 containerd[1497]: time="2025-10-30T23:55:30.322555960Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Oct 30 23:55:30.334706 containerd[1497]: time="2025-10-30T23:55:30.322875600Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Oct 30 23:55:30.334706 containerd[1497]: time="2025-10-30T23:55:30.322980400Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Oct 30 23:55:30.334706 containerd[1497]: time="2025-10-30T23:55:30.323223360Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." 
type=io.containerd.metadata.v1 Oct 30 23:55:30.334706 containerd[1497]: time="2025-10-30T23:55:30.323290320Z" level=info msg="metadata content store policy set" policy=shared Oct 30 23:55:30.314378 systemd[1]: extend-filesystems.service: Deactivated successfully. Oct 30 23:55:30.316062 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Oct 30 23:55:30.357484 coreos-metadata[1564]: Oct 30 23:55:30.356 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/public-keys: Attempt #1 Oct 30 23:55:30.361502 coreos-metadata[1564]: Oct 30 23:55:30.360 INFO Fetch successful Oct 30 23:55:30.363949 containerd[1497]: time="2025-10-30T23:55:30.361806600Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Oct 30 23:55:30.363949 containerd[1497]: time="2025-10-30T23:55:30.361933320Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Oct 30 23:55:30.363949 containerd[1497]: time="2025-10-30T23:55:30.361957560Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Oct 30 23:55:30.363949 containerd[1497]: time="2025-10-30T23:55:30.361983560Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Oct 30 23:55:30.363949 containerd[1497]: time="2025-10-30T23:55:30.362059160Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Oct 30 23:55:30.363949 containerd[1497]: time="2025-10-30T23:55:30.362343040Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Oct 30 23:55:30.363949 containerd[1497]: time="2025-10-30T23:55:30.362889080Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Oct 30 23:55:30.363949 containerd[1497]: time="2025-10-30T23:55:30.363109360Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Oct 30 23:55:30.363949 containerd[1497]: time="2025-10-30T23:55:30.363131760Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Oct 30 23:55:30.363949 containerd[1497]: time="2025-10-30T23:55:30.363153360Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Oct 30 23:55:30.363949 containerd[1497]: time="2025-10-30T23:55:30.363170560Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Oct 30 23:55:30.363949 containerd[1497]: time="2025-10-30T23:55:30.363186600Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Oct 30 23:55:30.363949 containerd[1497]: time="2025-10-30T23:55:30.363204720Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Oct 30 23:55:30.363949 containerd[1497]: time="2025-10-30T23:55:30.363222160Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Oct 30 23:55:30.364542 containerd[1497]: time="2025-10-30T23:55:30.363246280Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Oct 30 23:55:30.364542 containerd[1497]: time="2025-10-30T23:55:30.363268000Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." 
type=io.containerd.service.v1 Oct 30 23:55:30.364542 containerd[1497]: time="2025-10-30T23:55:30.363285240Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Oct 30 23:55:30.364542 containerd[1497]: time="2025-10-30T23:55:30.363303520Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Oct 30 23:55:30.364542 containerd[1497]: time="2025-10-30T23:55:30.363337240Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Oct 30 23:55:30.364542 containerd[1497]: time="2025-10-30T23:55:30.363361280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Oct 30 23:55:30.364542 containerd[1497]: time="2025-10-30T23:55:30.363378440Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Oct 30 23:55:30.364542 containerd[1497]: time="2025-10-30T23:55:30.363396520Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Oct 30 23:55:30.364542 containerd[1497]: time="2025-10-30T23:55:30.363411280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Oct 30 23:55:30.364542 containerd[1497]: time="2025-10-30T23:55:30.363428200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Oct 30 23:55:30.369766 containerd[1497]: time="2025-10-30T23:55:30.363443840Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Oct 30 23:55:30.369766 containerd[1497]: time="2025-10-30T23:55:30.366156840Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Oct 30 23:55:30.369766 containerd[1497]: time="2025-10-30T23:55:30.366202640Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Oct 30 23:55:30.369766 containerd[1497]: time="2025-10-30T23:55:30.366236920Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Oct 30 23:55:30.369766 containerd[1497]: time="2025-10-30T23:55:30.366254560Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Oct 30 23:55:30.369766 containerd[1497]: time="2025-10-30T23:55:30.366278400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Oct 30 23:55:30.369766 containerd[1497]: time="2025-10-30T23:55:30.366294600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Oct 30 23:55:30.369766 containerd[1497]: time="2025-10-30T23:55:30.366315160Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Oct 30 23:55:30.369766 containerd[1497]: time="2025-10-30T23:55:30.366355680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Oct 30 23:55:30.369766 containerd[1497]: time="2025-10-30T23:55:30.366371640Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Oct 30 23:55:30.369766 containerd[1497]: time="2025-10-30T23:55:30.366383640Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." 
type=io.containerd.internal.v1 Oct 30 23:55:30.369766 containerd[1497]: time="2025-10-30T23:55:30.366641760Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Oct 30 23:55:30.369766 containerd[1497]: time="2025-10-30T23:55:30.366669880Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Oct 30 23:55:30.369766 containerd[1497]: time="2025-10-30T23:55:30.366682320Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Oct 30 23:55:30.369709 unknown[1564]: wrote ssh authorized keys file for user: core Oct 30 23:55:30.370574 containerd[1497]: time="2025-10-30T23:55:30.366697200Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Oct 30 23:55:30.370574 containerd[1497]: time="2025-10-30T23:55:30.366710040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Oct 30 23:55:30.370574 containerd[1497]: time="2025-10-30T23:55:30.366727400Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Oct 30 23:55:30.370574 containerd[1497]: time="2025-10-30T23:55:30.366742040Z" level=info msg="NRI interface is disabled by configuration." Oct 30 23:55:30.370574 containerd[1497]: time="2025-10-30T23:55:30.366756360Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Oct 30 23:55:30.370670 containerd[1497]: time="2025-10-30T23:55:30.367271040Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false 
RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Oct 30 23:55:30.370670 containerd[1497]: time="2025-10-30T23:55:30.367341840Z" level=info msg="Connect containerd service" Oct 30 23:55:30.370670 containerd[1497]: time="2025-10-30T23:55:30.367409040Z" level=info msg="using legacy CRI server" Oct 30 23:55:30.370670 containerd[1497]: time="2025-10-30T23:55:30.367417600Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Oct 30 23:55:30.380055 containerd[1497]: time="2025-10-30T23:55:30.375545320Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Oct 30 23:55:30.380799 containerd[1497]: time="2025-10-30T23:55:30.380749600Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Oct 30 23:55:30.384800 containerd[1497]: time="2025-10-30T23:55:30.383718440Z" level=info msg="Start subscribing containerd event" Oct 30 23:55:30.384800 containerd[1497]: time="2025-10-30T23:55:30.383875160Z" level=info msg="Start recovering state" Oct 30 23:55:30.384800 containerd[1497]: time="2025-10-30T23:55:30.384067480Z" level=info msg="Start event monitor" Oct 30 23:55:30.384800 containerd[1497]: time="2025-10-30T23:55:30.384086000Z" level=info msg="Start snapshots syncer" Oct 30 23:55:30.384800 containerd[1497]: time="2025-10-30T23:55:30.384103720Z" level=info msg="Start cni network conf syncer for default" Oct 30 23:55:30.384800 containerd[1497]: time="2025-10-30T23:55:30.384113200Z" level=info msg="Start streaming server" Oct 30 23:55:30.410175 containerd[1497]: time="2025-10-30T23:55:30.404698160Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Oct 30 23:55:30.410175 containerd[1497]: time="2025-10-30T23:55:30.404832400Z" level=info msg=serving... address=/run/containerd/containerd.sock Oct 30 23:55:30.410175 containerd[1497]: time="2025-10-30T23:55:30.404920160Z" level=info msg="containerd successfully booted in 0.257502s" Oct 30 23:55:30.405719 systemd[1]: Started containerd.service - containerd container runtime. Oct 30 23:55:30.461140 update-ssh-keys[1572]: Updated "/home/core/.ssh/authorized_keys" Oct 30 23:55:30.462598 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Oct 30 23:55:30.463820 locksmithd[1524]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Oct 30 23:55:30.472029 systemd[1]: Finished sshkeys.service. Oct 30 23:55:30.921731 tar[1495]: linux-arm64/README.md Oct 30 23:55:30.957545 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. 
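[Annotation, not part of the captured journal] The resize2fs output above is in 4 KiB blocks: the root filesystem on /dev/sda9 grew from 1617920 x 4096 B (about 6.6 GB) to 9393147 x 4096 B (about 38.5 GB, i.e. 35.8 GiB), so the first-boot extend-filesystems step expanded / to what is presumably the full provisioned disk.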
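[Annotation, not part of the captured journal] The CRI plugin dump above shows the effective containerd 1.7 settings: overlayfs snapshotter, runc via io.containerd.runc.v2 with SystemdCgroup:true, and sandbox image registry.k8s.io/pause:3.8. Written out as the usual config.toml fragment (a sketch mirroring only the values visible in the log, not the full generated file):

    version = 2
    [plugins."io.containerd.grpc.v1.cri"]
      sandbox_image = "registry.k8s.io/pause:3.8"
      [plugins."io.containerd.grpc.v1.cri".containerd]
        snapshotter = "overlayfs"
        default_runtime_name = "runc"
        [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
          runtime_type = "io.containerd.runc.v2"
          [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
            SystemdCgroup = true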
Oct 30 23:55:31.253572 sshd_keygen[1510]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Oct 30 23:55:31.299119 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Oct 30 23:55:31.313942 systemd[1]: Starting issuegen.service - Generate /run/issue... Oct 30 23:55:31.321249 systemd[1]: Started sshd@0-91.99.146.238:22-103.165.139.150:59450.service - OpenSSH per-connection server daemon (103.165.139.150:59450). Oct 30 23:55:31.336797 systemd[1]: issuegen.service: Deactivated successfully. Oct 30 23:55:31.337145 systemd[1]: Finished issuegen.service - Generate /run/issue. Oct 30 23:55:31.351441 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Oct 30 23:55:31.390588 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Oct 30 23:55:31.401195 systemd[1]: Started getty@tty1.service - Getty on tty1. Oct 30 23:55:31.411276 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Oct 30 23:55:31.412944 systemd[1]: Reached target getty.target - Login Prompts. Oct 30 23:55:31.436416 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 30 23:55:31.439030 systemd[1]: Reached target multi-user.target - Multi-User System. Oct 30 23:55:31.439650 (kubelet)[1605]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 30 23:55:31.440848 systemd[1]: Startup finished in 1.015s (kernel) + 7.359s (initrd) + 6.409s (userspace) = 14.784s. Oct 30 23:55:32.144416 kubelet[1605]: E1030 23:55:32.144251 1605 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 30 23:55:32.149804 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 30 23:55:32.150172 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 30 23:55:32.151042 systemd[1]: kubelet.service: Consumed 1.022s CPU time, 256.3M memory peak. Oct 30 23:55:32.365116 sshd[1590]: Invalid user heitor from 103.165.139.150 port 59450 Oct 30 23:55:32.554790 sshd[1590]: Received disconnect from 103.165.139.150 port 59450:11: Bye Bye [preauth] Oct 30 23:55:32.554790 sshd[1590]: Disconnected from invalid user heitor 103.165.139.150 port 59450 [preauth] Oct 30 23:55:32.560037 systemd[1]: sshd@0-91.99.146.238:22-103.165.139.150:59450.service: Deactivated successfully. Oct 30 23:55:33.102487 systemd[1]: Started sshd@1-91.99.146.238:22-152.89.168.4:53434.service - OpenSSH per-connection server daemon (152.89.168.4:53434). Oct 30 23:55:35.116431 sshd[1619]: Invalid user pkms from 152.89.168.4 port 53434 Oct 30 23:55:35.174816 sshd[1619]: Received disconnect from 152.89.168.4 port 53434:11: Bye Bye [preauth] Oct 30 23:55:35.174816 sshd[1619]: Disconnected from invalid user pkms 152.89.168.4 port 53434 [preauth] Oct 30 23:55:35.179527 systemd[1]: sshd@1-91.99.146.238:22-152.89.168.4:53434.service: Deactivated successfully. Oct 30 23:55:42.376607 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Oct 30 23:55:42.385051 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 30 23:55:42.516715 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Oct 30 23:55:42.522694 (kubelet)[1631]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 30 23:55:42.592525 kubelet[1631]: E1030 23:55:42.592471 1631 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 30 23:55:42.596788 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 30 23:55:42.597618 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 30 23:55:42.598644 systemd[1]: kubelet.service: Consumed 187ms CPU time, 110M memory peak. Oct 30 23:55:52.626875 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Oct 30 23:55:52.636885 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 30 23:55:52.776858 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 30 23:55:52.779741 (kubelet)[1646]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 30 23:55:52.844761 kubelet[1646]: E1030 23:55:52.844652 1646 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 30 23:55:52.849762 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 30 23:55:52.850069 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 30 23:55:52.850782 systemd[1]: kubelet.service: Consumed 183ms CPU time, 105.2M memory peak. Oct 30 23:56:00.073819 systemd[1]: Started sshd@2-91.99.146.238:22-139.178.89.65:35754.service - OpenSSH per-connection server daemon (139.178.89.65:35754). Oct 30 23:56:00.570300 systemd-timesyncd[1370]: Contacted time server 91.132.146.190:123 (2.flatcar.pool.ntp.org). Oct 30 23:56:00.572668 systemd-timesyncd[1370]: Initial clock synchronization to Thu 2025-10-30 23:56:00.211544 UTC. Oct 30 23:56:01.032660 sshd[1654]: Accepted publickey for core from 139.178.89.65 port 35754 ssh2: RSA SHA256:Oiivr4FYZNoFRNArKDf1mcLLlGhqoYWE2cfJZWdI7tQ Oct 30 23:56:01.036512 sshd-session[1654]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 30 23:56:01.049936 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Oct 30 23:56:01.067740 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Oct 30 23:56:01.088174 systemd-logind[1480]: New session 1 of user core. Oct 30 23:56:01.095494 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Oct 30 23:56:01.108070 systemd[1]: Starting user@500.service - User Manager for UID 500... Oct 30 23:56:01.113360 (systemd)[1658]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Oct 30 23:56:01.118165 systemd-logind[1480]: New session c1 of user core. Oct 30 23:56:01.270895 systemd[1658]: Queued start job for default target default.target. Oct 30 23:56:01.280866 systemd[1658]: Created slice app.slice - User Application Slice. 
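[Annotation, not part of the captured journal] The kubelet failures above (and on the later restarts) follow the usual bootstrap pattern visible in the timestamps: the unit is restarted every ten seconds until /var/lib/kubelet/config.yaml exists, and that file is normally written by a bootstrap tool such as kubeadm rather than by hand. For reference only, a minimal sketch of the kind of file the error refers to; real deployments generate it with cluster-specific values, and the containerRuntimeEndpoint value below is an assumption based on the containerd socket logged earlier:

    # /var/lib/kubelet/config.yaml (minimal sketch; normally generated during node bootstrap)
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd
    containerRuntimeEndpoint: unix:///run/containerd/containerd.sock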
Oct 30 23:56:01.281276 systemd[1658]: Reached target paths.target - Paths. Oct 30 23:56:01.281542 systemd[1658]: Reached target timers.target - Timers. Oct 30 23:56:01.285230 systemd[1658]: Starting dbus.socket - D-Bus User Message Bus Socket... Oct 30 23:56:01.303512 systemd[1658]: Listening on dbus.socket - D-Bus User Message Bus Socket. Oct 30 23:56:01.303677 systemd[1658]: Reached target sockets.target - Sockets. Oct 30 23:56:01.303745 systemd[1658]: Reached target basic.target - Basic System. Oct 30 23:56:01.303784 systemd[1658]: Reached target default.target - Main User Target. Oct 30 23:56:01.303816 systemd[1658]: Startup finished in 175ms. Oct 30 23:56:01.304023 systemd[1]: Started user@500.service - User Manager for UID 500. Oct 30 23:56:01.311751 systemd[1]: Started session-1.scope - Session 1 of User core. Oct 30 23:56:01.956949 systemd[1]: Started sshd@3-91.99.146.238:22-139.178.89.65:35768.service - OpenSSH per-connection server daemon (139.178.89.65:35768). Oct 30 23:56:02.872321 sshd[1669]: Accepted publickey for core from 139.178.89.65 port 35768 ssh2: RSA SHA256:Oiivr4FYZNoFRNArKDf1mcLLlGhqoYWE2cfJZWdI7tQ Oct 30 23:56:02.878500 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Oct 30 23:56:02.878800 sshd-session[1669]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 30 23:56:02.886895 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 30 23:56:02.899895 systemd-logind[1480]: New session 2 of user core. Oct 30 23:56:02.908271 systemd[1]: Started session-2.scope - Session 2 of User core. Oct 30 23:56:03.068877 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 30 23:56:03.073527 (kubelet)[1680]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 30 23:56:03.136075 kubelet[1680]: E1030 23:56:03.136008 1680 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 30 23:56:03.141326 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 30 23:56:03.141631 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 30 23:56:03.145961 systemd[1]: kubelet.service: Consumed 214ms CPU time, 109.1M memory peak. Oct 30 23:56:03.501259 sshd[1674]: Connection closed by 139.178.89.65 port 35768 Oct 30 23:56:03.502299 sshd-session[1669]: pam_unix(sshd:session): session closed for user core Oct 30 23:56:03.510975 systemd[1]: sshd@3-91.99.146.238:22-139.178.89.65:35768.service: Deactivated successfully. Oct 30 23:56:03.516383 systemd[1]: session-2.scope: Deactivated successfully. Oct 30 23:56:03.519999 systemd-logind[1480]: Session 2 logged out. Waiting for processes to exit. Oct 30 23:56:03.522309 systemd-logind[1480]: Removed session 2. Oct 30 23:56:03.673932 systemd[1]: Started sshd@4-91.99.146.238:22-139.178.89.65:35782.service - OpenSSH per-connection server daemon (139.178.89.65:35782). 
Oct 30 23:56:04.595270 sshd[1692]: Accepted publickey for core from 139.178.89.65 port 35782 ssh2: RSA SHA256:Oiivr4FYZNoFRNArKDf1mcLLlGhqoYWE2cfJZWdI7tQ Oct 30 23:56:04.599221 sshd-session[1692]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 30 23:56:04.613117 systemd-logind[1480]: New session 3 of user core. Oct 30 23:56:04.621442 systemd[1]: Started session-3.scope - Session 3 of User core. Oct 30 23:56:05.224555 sshd[1694]: Connection closed by 139.178.89.65 port 35782 Oct 30 23:56:05.225316 sshd-session[1692]: pam_unix(sshd:session): session closed for user core Oct 30 23:56:05.231163 systemd-logind[1480]: Session 3 logged out. Waiting for processes to exit. Oct 30 23:56:05.232958 systemd[1]: sshd@4-91.99.146.238:22-139.178.89.65:35782.service: Deactivated successfully. Oct 30 23:56:05.238413 systemd[1]: session-3.scope: Deactivated successfully. Oct 30 23:56:05.240138 systemd-logind[1480]: Removed session 3. Oct 30 23:56:05.397029 systemd[1]: Started sshd@5-91.99.146.238:22-139.178.89.65:35786.service - OpenSSH per-connection server daemon (139.178.89.65:35786). Oct 30 23:56:06.326470 sshd[1700]: Accepted publickey for core from 139.178.89.65 port 35786 ssh2: RSA SHA256:Oiivr4FYZNoFRNArKDf1mcLLlGhqoYWE2cfJZWdI7tQ Oct 30 23:56:06.330157 sshd-session[1700]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 30 23:56:06.339151 systemd-logind[1480]: New session 4 of user core. Oct 30 23:56:06.352347 systemd[1]: Started session-4.scope - Session 4 of User core. Oct 30 23:56:06.453348 systemd[1]: Started sshd@6-91.99.146.238:22-83.118.24.18:37080.service - OpenSSH per-connection server daemon (83.118.24.18:37080). Oct 30 23:56:06.971986 sshd[1702]: Connection closed by 139.178.89.65 port 35786 Oct 30 23:56:06.971742 sshd-session[1700]: pam_unix(sshd:session): session closed for user core Oct 30 23:56:06.983845 systemd[1]: sshd@5-91.99.146.238:22-139.178.89.65:35786.service: Deactivated successfully. Oct 30 23:56:06.987241 systemd[1]: session-4.scope: Deactivated successfully. Oct 30 23:56:06.992287 systemd-logind[1480]: Session 4 logged out. Waiting for processes to exit. Oct 30 23:56:06.994198 systemd-logind[1480]: Removed session 4. Oct 30 23:56:07.145952 systemd[1]: Started sshd@7-91.99.146.238:22-139.178.89.65:41762.service - OpenSSH per-connection server daemon (139.178.89.65:41762). Oct 30 23:56:07.527867 sshd[1704]: Invalid user babu from 83.118.24.18 port 37080 Oct 30 23:56:07.734337 sshd[1704]: Received disconnect from 83.118.24.18 port 37080:11: Bye Bye [preauth] Oct 30 23:56:07.734337 sshd[1704]: Disconnected from invalid user babu 83.118.24.18 port 37080 [preauth] Oct 30 23:56:07.742569 systemd[1]: sshd@6-91.99.146.238:22-83.118.24.18:37080.service: Deactivated successfully. Oct 30 23:56:08.078319 sshd[1711]: Accepted publickey for core from 139.178.89.65 port 41762 ssh2: RSA SHA256:Oiivr4FYZNoFRNArKDf1mcLLlGhqoYWE2cfJZWdI7tQ Oct 30 23:56:08.081358 sshd-session[1711]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 30 23:56:08.089301 systemd-logind[1480]: New session 5 of user core. Oct 30 23:56:08.105867 systemd[1]: Started session-5.scope - Session 5 of User core. 
Oct 30 23:56:08.592345 sudo[1716]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Oct 30 23:56:08.593663 sudo[1716]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 30 23:56:08.614294 sudo[1716]: pam_unix(sudo:session): session closed for user root Oct 30 23:56:08.765756 sshd[1715]: Connection closed by 139.178.89.65 port 41762 Oct 30 23:56:08.766880 sshd-session[1711]: pam_unix(sshd:session): session closed for user core Oct 30 23:56:08.779951 systemd[1]: sshd@7-91.99.146.238:22-139.178.89.65:41762.service: Deactivated successfully. Oct 30 23:56:08.785355 systemd[1]: session-5.scope: Deactivated successfully. Oct 30 23:56:08.788672 systemd-logind[1480]: Session 5 logged out. Waiting for processes to exit. Oct 30 23:56:08.792637 systemd-logind[1480]: Removed session 5. Oct 30 23:56:08.942399 systemd[1]: Started sshd@8-91.99.146.238:22-139.178.89.65:41770.service - OpenSSH per-connection server daemon (139.178.89.65:41770). Oct 30 23:56:09.901208 sshd[1722]: Accepted publickey for core from 139.178.89.65 port 41770 ssh2: RSA SHA256:Oiivr4FYZNoFRNArKDf1mcLLlGhqoYWE2cfJZWdI7tQ Oct 30 23:56:09.905374 sshd-session[1722]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 30 23:56:09.919887 systemd-logind[1480]: New session 6 of user core. Oct 30 23:56:09.939672 systemd[1]: Started session-6.scope - Session 6 of User core. Oct 30 23:56:10.405258 sudo[1726]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Oct 30 23:56:10.405642 sudo[1726]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 30 23:56:10.414028 sudo[1726]: pam_unix(sudo:session): session closed for user root Oct 30 23:56:10.426301 sudo[1725]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Oct 30 23:56:10.427429 sudo[1725]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 30 23:56:10.449823 systemd[1]: Starting audit-rules.service - Load Audit Rules... Oct 30 23:56:10.499076 augenrules[1748]: No rules Oct 30 23:56:10.501694 systemd[1]: audit-rules.service: Deactivated successfully. Oct 30 23:56:10.503753 systemd[1]: Finished audit-rules.service - Load Audit Rules. Oct 30 23:56:10.506788 sudo[1725]: pam_unix(sudo:session): session closed for user root Oct 30 23:56:10.659187 sshd[1724]: Connection closed by 139.178.89.65 port 41770 Oct 30 23:56:10.658774 sshd-session[1722]: pam_unix(sshd:session): session closed for user core Oct 30 23:56:10.669433 systemd[1]: sshd@8-91.99.146.238:22-139.178.89.65:41770.service: Deactivated successfully. Oct 30 23:56:10.672749 systemd[1]: session-6.scope: Deactivated successfully. Oct 30 23:56:10.674185 systemd-logind[1480]: Session 6 logged out. Waiting for processes to exit. Oct 30 23:56:10.677650 systemd-logind[1480]: Removed session 6. Oct 30 23:56:10.837828 systemd[1]: Started sshd@9-91.99.146.238:22-139.178.89.65:41786.service - OpenSSH per-connection server daemon (139.178.89.65:41786). Oct 30 23:56:11.802628 sshd[1757]: Accepted publickey for core from 139.178.89.65 port 41786 ssh2: RSA SHA256:Oiivr4FYZNoFRNArKDf1mcLLlGhqoYWE2cfJZWdI7tQ Oct 30 23:56:11.805202 sshd-session[1757]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 30 23:56:11.814867 systemd-logind[1480]: New session 7 of user core. Oct 30 23:56:11.818712 systemd[1]: Started session-7.scope - Session 7 of User core. 
Oct 30 23:56:12.313109 sudo[1760]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Oct 30 23:56:12.317610 sudo[1760]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 30 23:56:12.730046 systemd[1]: Starting docker.service - Docker Application Container Engine... Oct 30 23:56:12.740325 (dockerd)[1778]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Oct 30 23:56:13.057707 dockerd[1778]: time="2025-10-30T23:56:13.055907635Z" level=info msg="Starting up" Oct 30 23:56:13.167900 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport2669954623-merged.mount: Deactivated successfully. Oct 30 23:56:13.169661 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Oct 30 23:56:13.174874 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 30 23:56:13.239437 dockerd[1778]: time="2025-10-30T23:56:13.238666661Z" level=info msg="Loading containers: start." Oct 30 23:56:13.371842 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 30 23:56:13.374900 (kubelet)[1847]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 30 23:56:13.446603 kubelet[1847]: E1030 23:56:13.446525 1847 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 30 23:56:13.449405 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 30 23:56:13.449613 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 30 23:56:13.451105 systemd[1]: kubelet.service: Consumed 188ms CPU time, 105.1M memory peak. Oct 30 23:56:13.516869 kernel: Initializing XFRM netlink socket Oct 30 23:56:13.638615 systemd-networkd[1396]: docker0: Link UP Oct 30 23:56:13.693675 dockerd[1778]: time="2025-10-30T23:56:13.693573595Z" level=info msg="Loading containers: done." Oct 30 23:56:13.721631 dockerd[1778]: time="2025-10-30T23:56:13.721050915Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Oct 30 23:56:13.721631 dockerd[1778]: time="2025-10-30T23:56:13.721233492Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1 Oct 30 23:56:13.721631 dockerd[1778]: time="2025-10-30T23:56:13.721589847Z" level=info msg="Daemon has completed initialization" Oct 30 23:56:13.788597 dockerd[1778]: time="2025-10-30T23:56:13.788132156Z" level=info msg="API listen on /run/docker.sock" Oct 30 23:56:13.788420 systemd[1]: Started docker.service - Docker Application Container Engine. Oct 30 23:56:14.658557 update_engine[1481]: I20251030 23:56:14.658278 1481 update_attempter.cc:509] Updating boot flags... 
Oct 30 23:56:14.726487 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (1988) Oct 30 23:56:14.835522 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (1990) Oct 30 23:56:15.033555 containerd[1497]: time="2025-10-30T23:56:15.033354130Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\"" Oct 30 23:56:15.804697 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount577801.mount: Deactivated successfully. Oct 30 23:56:17.338471 containerd[1497]: time="2025-10-30T23:56:17.338398064Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 23:56:17.341176 containerd[1497]: time="2025-10-30T23:56:17.341113900Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.9: active requests=0, bytes read=26363783" Oct 30 23:56:17.346294 containerd[1497]: time="2025-10-30T23:56:17.344997605Z" level=info msg="ImageCreate event name:\"sha256:02ea53851f07db91ed471dab1ab11541f5c294802371cd8f0cfd423cd5c71002\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 23:56:17.349242 containerd[1497]: time="2025-10-30T23:56:17.349139139Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:6df11cc2ad9679b1117be34d3a0230add88bc0a08fd7a3ebc26b680575e8de97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 23:56:17.350406 containerd[1497]: time="2025-10-30T23:56:17.350041261Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.9\" with image id \"sha256:02ea53851f07db91ed471dab1ab11541f5c294802371cd8f0cfd423cd5c71002\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:6df11cc2ad9679b1117be34d3a0230add88bc0a08fd7a3ebc26b680575e8de97\", size \"26360284\" in 2.315117551s" Oct 30 23:56:17.350406 containerd[1497]: time="2025-10-30T23:56:17.350109697Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\" returns image reference \"sha256:02ea53851f07db91ed471dab1ab11541f5c294802371cd8f0cfd423cd5c71002\"" Oct 30 23:56:17.351170 containerd[1497]: time="2025-10-30T23:56:17.351103213Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\"" Oct 30 23:56:18.990231 containerd[1497]: time="2025-10-30T23:56:18.990148913Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 23:56:18.993276 containerd[1497]: time="2025-10-30T23:56:18.993172213Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.9: active requests=0, bytes read=22531220" Oct 30 23:56:18.995701 containerd[1497]: time="2025-10-30T23:56:18.995631696Z" level=info msg="ImageCreate event name:\"sha256:f0bcbad5082c944520b370596a2384affda710b9d7daf84e8a48352699af8e4b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 23:56:19.004020 containerd[1497]: time="2025-10-30T23:56:19.003860273Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:243c4b8e3bce271fcb1b78008ab996ab6976b1a20096deac08338fcd17979922\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 23:56:19.006780 containerd[1497]: time="2025-10-30T23:56:19.006693989Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.9\" with image id 
\"sha256:f0bcbad5082c944520b370596a2384affda710b9d7daf84e8a48352699af8e4b\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:243c4b8e3bce271fcb1b78008ab996ab6976b1a20096deac08338fcd17979922\", size \"24099975\" in 1.654565971s" Oct 30 23:56:19.006780 containerd[1497]: time="2025-10-30T23:56:19.006757610Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\" returns image reference \"sha256:f0bcbad5082c944520b370596a2384affda710b9d7daf84e8a48352699af8e4b\"" Oct 30 23:56:19.008289 containerd[1497]: time="2025-10-30T23:56:19.008221696Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\"" Oct 30 23:56:20.424195 containerd[1497]: time="2025-10-30T23:56:20.424083037Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 23:56:20.430973 containerd[1497]: time="2025-10-30T23:56:20.430867028Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.9: active requests=0, bytes read=17484344" Oct 30 23:56:20.435123 containerd[1497]: time="2025-10-30T23:56:20.435036519Z" level=info msg="ImageCreate event name:\"sha256:1d625baf81b59592006d97a6741bc947698ed222b612ac10efa57b7aa96d2a27\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 23:56:20.440631 containerd[1497]: time="2025-10-30T23:56:20.440532571Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:50c49520dbd0e8b4076b6a5c77d8014df09ea3d59a73e8bafd2678d51ebb92d5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 23:56:20.442277 containerd[1497]: time="2025-10-30T23:56:20.442193384Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.9\" with image id \"sha256:1d625baf81b59592006d97a6741bc947698ed222b612ac10efa57b7aa96d2a27\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:50c49520dbd0e8b4076b6a5c77d8014df09ea3d59a73e8bafd2678d51ebb92d5\", size \"19053117\" in 1.433894633s" Oct 30 23:56:20.442718 containerd[1497]: time="2025-10-30T23:56:20.442482317Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\" returns image reference \"sha256:1d625baf81b59592006d97a6741bc947698ed222b612ac10efa57b7aa96d2a27\"" Oct 30 23:56:20.444030 containerd[1497]: time="2025-10-30T23:56:20.443977120Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\"" Oct 30 23:56:21.969366 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2873311920.mount: Deactivated successfully. 
Oct 30 23:56:22.289752 containerd[1497]: time="2025-10-30T23:56:22.288686131Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 23:56:22.291009 containerd[1497]: time="2025-10-30T23:56:22.290369032Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.9: active requests=0, bytes read=27417843" Oct 30 23:56:22.292659 containerd[1497]: time="2025-10-30T23:56:22.292548901Z" level=info msg="ImageCreate event name:\"sha256:72b57ec14d31e8422925ef4c3eff44822cdc04a11fd30d13824f1897d83a16d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 23:56:22.297608 containerd[1497]: time="2025-10-30T23:56:22.297337153Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:886af02535dc34886e4618b902f8c140d89af57233a245621d29642224516064\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 23:56:22.299233 containerd[1497]: time="2025-10-30T23:56:22.298693304Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.9\" with image id \"sha256:72b57ec14d31e8422925ef4c3eff44822cdc04a11fd30d13824f1897d83a16d4\", repo tag \"registry.k8s.io/kube-proxy:v1.32.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:886af02535dc34886e4618b902f8c140d89af57233a245621d29642224516064\", size \"27416836\" in 1.854655901s" Oct 30 23:56:22.299233 containerd[1497]: time="2025-10-30T23:56:22.298766026Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\" returns image reference \"sha256:72b57ec14d31e8422925ef4c3eff44822cdc04a11fd30d13824f1897d83a16d4\"" Oct 30 23:56:22.300419 containerd[1497]: time="2025-10-30T23:56:22.299747794Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Oct 30 23:56:23.135172 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1094749262.mount: Deactivated successfully. Oct 30 23:56:23.634178 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Oct 30 23:56:23.647703 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 30 23:56:23.824911 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 30 23:56:23.826771 (kubelet)[2115]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 30 23:56:23.894226 kubelet[2115]: E1030 23:56:23.894015 2115 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 30 23:56:23.897531 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 30 23:56:23.898587 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 30 23:56:23.900619 systemd[1]: kubelet.service: Consumed 192ms CPU time, 109.1M memory peak. 
Oct 30 23:56:24.236668 containerd[1497]: time="2025-10-30T23:56:24.235766693Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 23:56:24.240049 containerd[1497]: time="2025-10-30T23:56:24.239968785Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951714" Oct 30 23:56:24.242941 containerd[1497]: time="2025-10-30T23:56:24.242849428Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 23:56:24.253542 containerd[1497]: time="2025-10-30T23:56:24.253395274Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 23:56:24.256806 containerd[1497]: time="2025-10-30T23:56:24.256291724Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.956489587s" Oct 30 23:56:24.256806 containerd[1497]: time="2025-10-30T23:56:24.256350442Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" Oct 30 23:56:24.257582 containerd[1497]: time="2025-10-30T23:56:24.257016296Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Oct 30 23:56:24.853741 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount257154904.mount: Deactivated successfully. 
Oct 30 23:56:24.872669 containerd[1497]: time="2025-10-30T23:56:24.871780206Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 23:56:24.874528 containerd[1497]: time="2025-10-30T23:56:24.874102452Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268723" Oct 30 23:56:24.876077 containerd[1497]: time="2025-10-30T23:56:24.875983376Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 23:56:24.882802 containerd[1497]: time="2025-10-30T23:56:24.881484763Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 23:56:24.883517 containerd[1497]: time="2025-10-30T23:56:24.883338305Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 626.277741ms" Oct 30 23:56:24.883517 containerd[1497]: time="2025-10-30T23:56:24.883482844Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Oct 30 23:56:24.884875 containerd[1497]: time="2025-10-30T23:56:24.884598441Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Oct 30 23:56:25.626577 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1416243379.mount: Deactivated successfully. Oct 30 23:56:27.926264 containerd[1497]: time="2025-10-30T23:56:27.925881993Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 23:56:27.928725 containerd[1497]: time="2025-10-30T23:56:27.928642585Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=67943239" Oct 30 23:56:27.929890 containerd[1497]: time="2025-10-30T23:56:27.929774927Z" level=info msg="ImageCreate event name:\"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 23:56:27.934883 containerd[1497]: time="2025-10-30T23:56:27.934805956Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 23:56:27.937508 containerd[1497]: time="2025-10-30T23:56:27.936478025Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"67941650\" in 3.051833472s" Oct 30 23:56:27.937508 containerd[1497]: time="2025-10-30T23:56:27.936535266Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\"" Oct 30 23:56:33.800046 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
Oct 30 23:56:33.800981 systemd[1]: kubelet.service: Consumed 192ms CPU time, 109.1M memory peak. Oct 30 23:56:33.812905 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 30 23:56:33.848379 systemd[1]: Reload requested from client PID 2224 ('systemctl') (unit session-7.scope)... Oct 30 23:56:33.848415 systemd[1]: Reloading... Oct 30 23:56:34.015539 zram_generator::config[2270]: No configuration found. Oct 30 23:56:34.138440 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Oct 30 23:56:34.237365 systemd[1]: Reloading finished in 388 ms. Oct 30 23:56:34.318972 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 30 23:56:34.321931 (kubelet)[2309]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Oct 30 23:56:34.328684 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Oct 30 23:56:34.329048 systemd[1]: kubelet.service: Deactivated successfully. Oct 30 23:56:34.329289 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 30 23:56:34.329342 systemd[1]: kubelet.service: Consumed 141ms CPU time, 98.5M memory peak. Oct 30 23:56:34.341175 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 30 23:56:34.523801 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 30 23:56:34.525825 (kubelet)[2323]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Oct 30 23:56:34.600013 kubelet[2323]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 30 23:56:34.600013 kubelet[2323]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Oct 30 23:56:34.600013 kubelet[2323]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Oct 30 23:56:34.600644 kubelet[2323]: I1030 23:56:34.600079 2323 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Oct 30 23:56:35.067395 kubelet[2323]: I1030 23:56:35.067295 2323 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Oct 30 23:56:35.067395 kubelet[2323]: I1030 23:56:35.067360 2323 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Oct 30 23:56:35.068123 kubelet[2323]: I1030 23:56:35.068064 2323 server.go:954] "Client rotation is on, will bootstrap in background" Oct 30 23:56:35.108314 kubelet[2323]: E1030 23:56:35.108251 2323 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://91.99.146.238:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 91.99.146.238:6443: connect: connection refused" logger="UnhandledError" Oct 30 23:56:35.109831 kubelet[2323]: I1030 23:56:35.109534 2323 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Oct 30 23:56:35.127961 kubelet[2323]: E1030 23:56:35.127751 2323 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Oct 30 23:56:35.127961 kubelet[2323]: I1030 23:56:35.127832 2323 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Oct 30 23:56:35.132894 kubelet[2323]: I1030 23:56:35.132028 2323 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Oct 30 23:56:35.135112 kubelet[2323]: I1030 23:56:35.134908 2323 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Oct 30 23:56:35.135636 kubelet[2323]: I1030 23:56:35.135145 2323 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4230-2-4-n-ab7d00e960","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Oct 30 23:56:35.135984 kubelet[2323]: I1030 23:56:35.135922 2323 topology_manager.go:138] "Creating topology manager with none policy" Oct 30 23:56:35.135984 kubelet[2323]: I1030 23:56:35.135963 2323 container_manager_linux.go:304] "Creating device plugin manager" Oct 30 23:56:35.136606 kubelet[2323]: I1030 23:56:35.136559 2323 state_mem.go:36] "Initialized new in-memory state store" Oct 30 23:56:35.141472 kubelet[2323]: I1030 23:56:35.141180 2323 kubelet.go:446] "Attempting to sync node with API server" Oct 30 23:56:35.141472 kubelet[2323]: I1030 23:56:35.141225 2323 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Oct 30 23:56:35.141472 kubelet[2323]: I1030 23:56:35.141260 2323 kubelet.go:352] "Adding apiserver pod source" Oct 30 23:56:35.141472 kubelet[2323]: I1030 23:56:35.141275 2323 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Oct 30 23:56:35.145625 kubelet[2323]: W1030 23:56:35.144676 2323 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://91.99.146.238:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230-2-4-n-ab7d00e960&limit=500&resourceVersion=0": dial tcp 91.99.146.238:6443: connect: connection refused Oct 30 23:56:35.145625 kubelet[2323]: E1030 23:56:35.144860 2323 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://91.99.146.238:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230-2-4-n-ab7d00e960&limit=500&resourceVersion=0\": dial tcp 91.99.146.238:6443: connect: connection refused" logger="UnhandledError" Oct 30 23:56:35.146557 
kubelet[2323]: W1030 23:56:35.146168 2323 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://91.99.146.238:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 91.99.146.238:6443: connect: connection refused Oct 30 23:56:35.146557 kubelet[2323]: E1030 23:56:35.146219 2323 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://91.99.146.238:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 91.99.146.238:6443: connect: connection refused" logger="UnhandledError" Oct 30 23:56:35.146796 kubelet[2323]: I1030 23:56:35.146712 2323 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Oct 30 23:56:35.147635 kubelet[2323]: I1030 23:56:35.147600 2323 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Oct 30 23:56:35.147832 kubelet[2323]: W1030 23:56:35.147812 2323 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Oct 30 23:56:35.150214 kubelet[2323]: I1030 23:56:35.150146 2323 watchdog_linux.go:99] "Systemd watchdog is not enabled" Oct 30 23:56:35.150214 kubelet[2323]: I1030 23:56:35.150229 2323 server.go:1287] "Started kubelet" Oct 30 23:56:35.152860 kubelet[2323]: I1030 23:56:35.152812 2323 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Oct 30 23:56:35.161931 kubelet[2323]: I1030 23:56:35.161774 2323 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Oct 30 23:56:35.165092 kubelet[2323]: I1030 23:56:35.164757 2323 server.go:479] "Adding debug handlers to kubelet server" Oct 30 23:56:35.167205 kubelet[2323]: I1030 23:56:35.167014 2323 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Oct 30 23:56:35.167873 kubelet[2323]: I1030 23:56:35.167373 2323 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Oct 30 23:56:35.168597 kubelet[2323]: I1030 23:56:35.168158 2323 volume_manager.go:297] "Starting Kubelet Volume Manager" Oct 30 23:56:35.169136 kubelet[2323]: I1030 23:56:35.169102 2323 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Oct 30 23:56:35.171487 kubelet[2323]: I1030 23:56:35.171115 2323 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Oct 30 23:56:35.171487 kubelet[2323]: I1030 23:56:35.171221 2323 reconciler.go:26] "Reconciler: start to sync state" Oct 30 23:56:35.173215 kubelet[2323]: E1030 23:56:35.173151 2323 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4230-2-4-n-ab7d00e960\" not found" Oct 30 23:56:35.174703 kubelet[2323]: E1030 23:56:35.174228 2323 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://91.99.146.238:6443/api/v1/namespaces/default/events\": dial tcp 91.99.146.238:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4230-2-4-n-ab7d00e960.18736a270ad7f375 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4230-2-4-n-ab7d00e960,UID:ci-4230-2-4-n-ab7d00e960,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4230-2-4-n-ab7d00e960,},FirstTimestamp:2025-10-30 23:56:35.150189429 +0000 UTC m=+0.613348337,LastTimestamp:2025-10-30 23:56:35.150189429 +0000 UTC m=+0.613348337,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4230-2-4-n-ab7d00e960,}" Oct 30 23:56:35.176519 kubelet[2323]: W1030 23:56:35.175294 2323 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://91.99.146.238:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 91.99.146.238:6443: connect: connection refused Oct 30 23:56:35.176519 kubelet[2323]: E1030 23:56:35.175379 2323 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://91.99.146.238:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 91.99.146.238:6443: connect: connection refused" logger="UnhandledError" Oct 30 23:56:35.176519 kubelet[2323]: E1030 23:56:35.175492 2323 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://91.99.146.238:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230-2-4-n-ab7d00e960?timeout=10s\": dial tcp 91.99.146.238:6443: connect: connection refused" interval="200ms" Oct 30 23:56:35.176519 kubelet[2323]: I1030 23:56:35.176091 2323 factory.go:221] Registration of the systemd container factory successfully Oct 30 23:56:35.176519 kubelet[2323]: I1030 23:56:35.176212 2323 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Oct 30 23:56:35.180100 kubelet[2323]: I1030 23:56:35.180063 2323 factory.go:221] Registration of the containerd container factory successfully Oct 30 23:56:35.186240 kubelet[2323]: E1030 23:56:35.186176 2323 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Oct 30 23:56:35.207689 kubelet[2323]: I1030 23:56:35.207617 2323 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Oct 30 23:56:35.210522 kubelet[2323]: I1030 23:56:35.210214 2323 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Oct 30 23:56:35.210522 kubelet[2323]: I1030 23:56:35.210263 2323 status_manager.go:227] "Starting to sync pod status with apiserver" Oct 30 23:56:35.210522 kubelet[2323]: I1030 23:56:35.210300 2323 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Oct 30 23:56:35.210522 kubelet[2323]: I1030 23:56:35.210310 2323 kubelet.go:2382] "Starting kubelet main sync loop" Oct 30 23:56:35.210522 kubelet[2323]: E1030 23:56:35.210369 2323 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Oct 30 23:56:35.218820 kubelet[2323]: W1030 23:56:35.218264 2323 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://91.99.146.238:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 91.99.146.238:6443: connect: connection refused Oct 30 23:56:35.218820 kubelet[2323]: E1030 23:56:35.218340 2323 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://91.99.146.238:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 91.99.146.238:6443: connect: connection refused" logger="UnhandledError" Oct 30 23:56:35.219031 kubelet[2323]: I1030 23:56:35.218912 2323 cpu_manager.go:221] "Starting CPU manager" policy="none" Oct 30 23:56:35.219031 kubelet[2323]: I1030 23:56:35.218931 2323 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Oct 30 23:56:35.219031 kubelet[2323]: I1030 23:56:35.218956 2323 state_mem.go:36] "Initialized new in-memory state store" Oct 30 23:56:35.221861 kubelet[2323]: I1030 23:56:35.221718 2323 policy_none.go:49] "None policy: Start" Oct 30 23:56:35.221861 kubelet[2323]: I1030 23:56:35.221784 2323 memory_manager.go:186] "Starting memorymanager" policy="None" Oct 30 23:56:35.221861 kubelet[2323]: I1030 23:56:35.221803 2323 state_mem.go:35] "Initializing new in-memory state store" Oct 30 23:56:35.230013 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Oct 30 23:56:35.251277 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Oct 30 23:56:35.257899 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Oct 30 23:56:35.271240 kubelet[2323]: I1030 23:56:35.270500 2323 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Oct 30 23:56:35.271655 kubelet[2323]: I1030 23:56:35.271545 2323 eviction_manager.go:189] "Eviction manager: starting control loop" Oct 30 23:56:35.272680 kubelet[2323]: I1030 23:56:35.271684 2323 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Oct 30 23:56:35.274447 kubelet[2323]: E1030 23:56:35.274398 2323 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Oct 30 23:56:35.275497 kubelet[2323]: E1030 23:56:35.275254 2323 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4230-2-4-n-ab7d00e960\" not found" Oct 30 23:56:35.275497 kubelet[2323]: I1030 23:56:35.274950 2323 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Oct 30 23:56:35.329320 systemd[1]: Created slice kubepods-burstable-pode8698ef9915cad84386c5a5e97817109.slice - libcontainer container kubepods-burstable-pode8698ef9915cad84386c5a5e97817109.slice. 
Oct 30 23:56:35.352101 kubelet[2323]: E1030 23:56:35.351086 2323 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230-2-4-n-ab7d00e960\" not found" node="ci-4230-2-4-n-ab7d00e960" Oct 30 23:56:35.358983 systemd[1]: Created slice kubepods-burstable-pod58449a187481756fd356a40e22bdaf95.slice - libcontainer container kubepods-burstable-pod58449a187481756fd356a40e22bdaf95.slice. Oct 30 23:56:35.365189 kubelet[2323]: E1030 23:56:35.364597 2323 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230-2-4-n-ab7d00e960\" not found" node="ci-4230-2-4-n-ab7d00e960" Oct 30 23:56:35.369762 systemd[1]: Created slice kubepods-burstable-poddde51cac144a74479aaf11667219fa6d.slice - libcontainer container kubepods-burstable-poddde51cac144a74479aaf11667219fa6d.slice. Oct 30 23:56:35.371830 kubelet[2323]: I1030 23:56:35.371732 2323 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e8698ef9915cad84386c5a5e97817109-k8s-certs\") pod \"kube-apiserver-ci-4230-2-4-n-ab7d00e960\" (UID: \"e8698ef9915cad84386c5a5e97817109\") " pod="kube-system/kube-apiserver-ci-4230-2-4-n-ab7d00e960" Oct 30 23:56:35.371830 kubelet[2323]: I1030 23:56:35.371793 2323 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/58449a187481756fd356a40e22bdaf95-kubeconfig\") pod \"kube-controller-manager-ci-4230-2-4-n-ab7d00e960\" (UID: \"58449a187481756fd356a40e22bdaf95\") " pod="kube-system/kube-controller-manager-ci-4230-2-4-n-ab7d00e960" Oct 30 23:56:35.371830 kubelet[2323]: I1030 23:56:35.371819 2323 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/58449a187481756fd356a40e22bdaf95-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4230-2-4-n-ab7d00e960\" (UID: \"58449a187481756fd356a40e22bdaf95\") " pod="kube-system/kube-controller-manager-ci-4230-2-4-n-ab7d00e960" Oct 30 23:56:35.371830 kubelet[2323]: I1030 23:56:35.371839 2323 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/dde51cac144a74479aaf11667219fa6d-kubeconfig\") pod \"kube-scheduler-ci-4230-2-4-n-ab7d00e960\" (UID: \"dde51cac144a74479aaf11667219fa6d\") " pod="kube-system/kube-scheduler-ci-4230-2-4-n-ab7d00e960" Oct 30 23:56:35.372084 kubelet[2323]: I1030 23:56:35.371858 2323 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e8698ef9915cad84386c5a5e97817109-ca-certs\") pod \"kube-apiserver-ci-4230-2-4-n-ab7d00e960\" (UID: \"e8698ef9915cad84386c5a5e97817109\") " pod="kube-system/kube-apiserver-ci-4230-2-4-n-ab7d00e960" Oct 30 23:56:35.372084 kubelet[2323]: I1030 23:56:35.371878 2323 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e8698ef9915cad84386c5a5e97817109-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4230-2-4-n-ab7d00e960\" (UID: \"e8698ef9915cad84386c5a5e97817109\") " pod="kube-system/kube-apiserver-ci-4230-2-4-n-ab7d00e960" Oct 30 23:56:35.372084 kubelet[2323]: I1030 23:56:35.371895 2323 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/58449a187481756fd356a40e22bdaf95-ca-certs\") pod \"kube-controller-manager-ci-4230-2-4-n-ab7d00e960\" (UID: \"58449a187481756fd356a40e22bdaf95\") " pod="kube-system/kube-controller-manager-ci-4230-2-4-n-ab7d00e960" Oct 30 23:56:35.372084 kubelet[2323]: I1030 23:56:35.371914 2323 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/58449a187481756fd356a40e22bdaf95-flexvolume-dir\") pod \"kube-controller-manager-ci-4230-2-4-n-ab7d00e960\" (UID: \"58449a187481756fd356a40e22bdaf95\") " pod="kube-system/kube-controller-manager-ci-4230-2-4-n-ab7d00e960" Oct 30 23:56:35.372084 kubelet[2323]: I1030 23:56:35.371933 2323 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/58449a187481756fd356a40e22bdaf95-k8s-certs\") pod \"kube-controller-manager-ci-4230-2-4-n-ab7d00e960\" (UID: \"58449a187481756fd356a40e22bdaf95\") " pod="kube-system/kube-controller-manager-ci-4230-2-4-n-ab7d00e960" Oct 30 23:56:35.374116 kubelet[2323]: E1030 23:56:35.373844 2323 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230-2-4-n-ab7d00e960\" not found" node="ci-4230-2-4-n-ab7d00e960" Oct 30 23:56:35.376259 kubelet[2323]: I1030 23:56:35.375889 2323 kubelet_node_status.go:75] "Attempting to register node" node="ci-4230-2-4-n-ab7d00e960" Oct 30 23:56:35.376259 kubelet[2323]: E1030 23:56:35.376157 2323 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://91.99.146.238:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230-2-4-n-ab7d00e960?timeout=10s\": dial tcp 91.99.146.238:6443: connect: connection refused" interval="400ms" Oct 30 23:56:35.376591 kubelet[2323]: E1030 23:56:35.376563 2323 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://91.99.146.238:6443/api/v1/nodes\": dial tcp 91.99.146.238:6443: connect: connection refused" node="ci-4230-2-4-n-ab7d00e960" Oct 30 23:56:35.580790 kubelet[2323]: I1030 23:56:35.579594 2323 kubelet_node_status.go:75] "Attempting to register node" node="ci-4230-2-4-n-ab7d00e960" Oct 30 23:56:35.580790 kubelet[2323]: E1030 23:56:35.580406 2323 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://91.99.146.238:6443/api/v1/nodes\": dial tcp 91.99.146.238:6443: connect: connection refused" node="ci-4230-2-4-n-ab7d00e960" Oct 30 23:56:35.653474 containerd[1497]: time="2025-10-30T23:56:35.653368283Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4230-2-4-n-ab7d00e960,Uid:e8698ef9915cad84386c5a5e97817109,Namespace:kube-system,Attempt:0,}" Oct 30 23:56:35.667489 containerd[1497]: time="2025-10-30T23:56:35.666939501Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4230-2-4-n-ab7d00e960,Uid:58449a187481756fd356a40e22bdaf95,Namespace:kube-system,Attempt:0,}" Oct 30 23:56:35.676064 containerd[1497]: time="2025-10-30T23:56:35.675957985Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4230-2-4-n-ab7d00e960,Uid:dde51cac144a74479aaf11667219fa6d,Namespace:kube-system,Attempt:0,}" Oct 30 23:56:35.777680 kubelet[2323]: E1030 23:56:35.777613 2323 controller.go:145] "Failed to ensure lease exists, will retry" 
err="Get \"https://91.99.146.238:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230-2-4-n-ab7d00e960?timeout=10s\": dial tcp 91.99.146.238:6443: connect: connection refused" interval="800ms" Oct 30 23:56:35.986399 kubelet[2323]: I1030 23:56:35.986286 2323 kubelet_node_status.go:75] "Attempting to register node" node="ci-4230-2-4-n-ab7d00e960" Oct 30 23:56:35.988650 kubelet[2323]: E1030 23:56:35.988515 2323 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://91.99.146.238:6443/api/v1/nodes\": dial tcp 91.99.146.238:6443: connect: connection refused" node="ci-4230-2-4-n-ab7d00e960" Oct 30 23:56:36.250522 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4284007717.mount: Deactivated successfully. Oct 30 23:56:36.282288 containerd[1497]: time="2025-10-30T23:56:36.281034324Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 30 23:56:36.290533 containerd[1497]: time="2025-10-30T23:56:36.290414531Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269193" Oct 30 23:56:36.292739 containerd[1497]: time="2025-10-30T23:56:36.292417802Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 30 23:56:36.298701 containerd[1497]: time="2025-10-30T23:56:36.297307232Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 30 23:56:36.302263 containerd[1497]: time="2025-10-30T23:56:36.302181579Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Oct 30 23:56:36.307406 containerd[1497]: time="2025-10-30T23:56:36.307323908Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 30 23:56:36.310123 containerd[1497]: time="2025-10-30T23:56:36.309971531Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 30 23:56:36.310312 containerd[1497]: time="2025-10-30T23:56:36.310129408Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Oct 30 23:56:36.311499 containerd[1497]: time="2025-10-30T23:56:36.311389105Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 657.883947ms" Oct 30 23:56:36.315808 kubelet[2323]: W1030 23:56:36.315712 2323 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://91.99.146.238:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 91.99.146.238:6443: connect: connection refused Oct 30 23:56:36.316619 kubelet[2323]: E1030 23:56:36.315923 2323 
reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://91.99.146.238:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 91.99.146.238:6443: connect: connection refused" logger="UnhandledError" Oct 30 23:56:36.321303 containerd[1497]: time="2025-10-30T23:56:36.321235381Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 645.119717ms" Oct 30 23:56:36.364512 containerd[1497]: time="2025-10-30T23:56:36.364308793Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 697.125712ms" Oct 30 23:56:36.424492 kubelet[2323]: W1030 23:56:36.423706 2323 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://91.99.146.238:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 91.99.146.238:6443: connect: connection refused Oct 30 23:56:36.424492 kubelet[2323]: E1030 23:56:36.423814 2323 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://91.99.146.238:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 91.99.146.238:6443: connect: connection refused" logger="UnhandledError" Oct 30 23:56:36.469256 containerd[1497]: time="2025-10-30T23:56:36.468750201Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 30 23:56:36.469256 containerd[1497]: time="2025-10-30T23:56:36.468836302Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 30 23:56:36.469256 containerd[1497]: time="2025-10-30T23:56:36.468852826Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 30 23:56:36.469256 containerd[1497]: time="2025-10-30T23:56:36.468944807Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 30 23:56:36.471752 containerd[1497]: time="2025-10-30T23:56:36.470752432Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 30 23:56:36.471752 containerd[1497]: time="2025-10-30T23:56:36.471626198Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 30 23:56:36.471752 containerd[1497]: time="2025-10-30T23:56:36.471656365Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 30 23:56:36.471993 containerd[1497]: time="2025-10-30T23:56:36.471016334Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 30 23:56:36.471993 containerd[1497]: time="2025-10-30T23:56:36.471113677Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 30 23:56:36.471993 containerd[1497]: time="2025-10-30T23:56:36.471129761Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 30 23:56:36.471993 containerd[1497]: time="2025-10-30T23:56:36.471236746Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 30 23:56:36.476541 containerd[1497]: time="2025-10-30T23:56:36.474690439Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 30 23:56:36.488473 kubelet[2323]: W1030 23:56:36.488302 2323 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://91.99.146.238:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230-2-4-n-ab7d00e960&limit=500&resourceVersion=0": dial tcp 91.99.146.238:6443: connect: connection refused Oct 30 23:56:36.488473 kubelet[2323]: E1030 23:56:36.488393 2323 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://91.99.146.238:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230-2-4-n-ab7d00e960&limit=500&resourceVersion=0\": dial tcp 91.99.146.238:6443: connect: connection refused" logger="UnhandledError" Oct 30 23:56:36.511266 systemd[1]: Started cri-containerd-0d17389d53f711ab80574bd516a0b6b6b5d0eed4825d890163e69e39ae9167c5.scope - libcontainer container 0d17389d53f711ab80574bd516a0b6b6b5d0eed4825d890163e69e39ae9167c5. Oct 30 23:56:36.525969 kubelet[2323]: W1030 23:56:36.525806 2323 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://91.99.146.238:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 91.99.146.238:6443: connect: connection refused Oct 30 23:56:36.525969 kubelet[2323]: E1030 23:56:36.525920 2323 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://91.99.146.238:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 91.99.146.238:6443: connect: connection refused" logger="UnhandledError" Oct 30 23:56:36.526612 systemd[1]: Started cri-containerd-be4425cac81dc7f1efba362272e8a203ada5068b6ed2f30a7891cdde7547b361.scope - libcontainer container be4425cac81dc7f1efba362272e8a203ada5068b6ed2f30a7891cdde7547b361. Oct 30 23:56:36.529654 systemd[1]: Started cri-containerd-f5f7892d8486ed11644ace8b3c132814afe1085275b49f5711ee45657cbfca9a.scope - libcontainer container f5f7892d8486ed11644ace8b3c132814afe1085275b49f5711ee45657cbfca9a. 
Oct 30 23:56:36.579398 kubelet[2323]: E1030 23:56:36.579107 2323 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://91.99.146.238:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230-2-4-n-ab7d00e960?timeout=10s\": dial tcp 91.99.146.238:6443: connect: connection refused" interval="1.6s" Oct 30 23:56:36.604095 containerd[1497]: time="2025-10-30T23:56:36.603215712Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4230-2-4-n-ab7d00e960,Uid:e8698ef9915cad84386c5a5e97817109,Namespace:kube-system,Attempt:0,} returns sandbox id \"be4425cac81dc7f1efba362272e8a203ada5068b6ed2f30a7891cdde7547b361\"" Oct 30 23:56:36.610951 containerd[1497]: time="2025-10-30T23:56:36.610562680Z" level=info msg="CreateContainer within sandbox \"be4425cac81dc7f1efba362272e8a203ada5068b6ed2f30a7891cdde7547b361\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Oct 30 23:56:36.616468 containerd[1497]: time="2025-10-30T23:56:36.616388051Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4230-2-4-n-ab7d00e960,Uid:58449a187481756fd356a40e22bdaf95,Namespace:kube-system,Attempt:0,} returns sandbox id \"0d17389d53f711ab80574bd516a0b6b6b5d0eed4825d890163e69e39ae9167c5\"" Oct 30 23:56:36.621308 containerd[1497]: time="2025-10-30T23:56:36.621127126Z" level=info msg="CreateContainer within sandbox \"0d17389d53f711ab80574bd516a0b6b6b5d0eed4825d890163e69e39ae9167c5\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Oct 30 23:56:36.630026 containerd[1497]: time="2025-10-30T23:56:36.629971486Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4230-2-4-n-ab7d00e960,Uid:dde51cac144a74479aaf11667219fa6d,Namespace:kube-system,Attempt:0,} returns sandbox id \"f5f7892d8486ed11644ace8b3c132814afe1085275b49f5711ee45657cbfca9a\"" Oct 30 23:56:36.635436 containerd[1497]: time="2025-10-30T23:56:36.635254289Z" level=info msg="CreateContainer within sandbox \"f5f7892d8486ed11644ace8b3c132814afe1085275b49f5711ee45657cbfca9a\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Oct 30 23:56:36.655492 containerd[1497]: time="2025-10-30T23:56:36.655221706Z" level=info msg="CreateContainer within sandbox \"be4425cac81dc7f1efba362272e8a203ada5068b6ed2f30a7891cdde7547b361\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"15989c524e322f4b9cd53526f640f913a41d20ecee417675969b79384ba1cacc\"" Oct 30 23:56:36.662665 containerd[1497]: time="2025-10-30T23:56:36.662322736Z" level=info msg="StartContainer for \"15989c524e322f4b9cd53526f640f913a41d20ecee417675969b79384ba1cacc\"" Oct 30 23:56:36.682921 containerd[1497]: time="2025-10-30T23:56:36.682860207Z" level=info msg="CreateContainer within sandbox \"0d17389d53f711ab80574bd516a0b6b6b5d0eed4825d890163e69e39ae9167c5\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"3155dd1748cb140880c350125fd9f2396fc80d0780ab5ecf3b855641d1465a20\"" Oct 30 23:56:36.684595 containerd[1497]: time="2025-10-30T23:56:36.684551325Z" level=info msg="StartContainer for \"3155dd1748cb140880c350125fd9f2396fc80d0780ab5ecf3b855641d1465a20\"" Oct 30 23:56:36.699161 containerd[1497]: time="2025-10-30T23:56:36.698546897Z" level=info msg="CreateContainer within sandbox \"f5f7892d8486ed11644ace8b3c132814afe1085275b49f5711ee45657cbfca9a\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id 
\"2c73f779db251d6a984350c3a656af75e9efc83efa5915a373513a39b087e005\"" Oct 30 23:56:36.701468 containerd[1497]: time="2025-10-30T23:56:36.701403249Z" level=info msg="StartContainer for \"2c73f779db251d6a984350c3a656af75e9efc83efa5915a373513a39b087e005\"" Oct 30 23:56:36.717767 systemd[1]: Started cri-containerd-15989c524e322f4b9cd53526f640f913a41d20ecee417675969b79384ba1cacc.scope - libcontainer container 15989c524e322f4b9cd53526f640f913a41d20ecee417675969b79384ba1cacc. Oct 30 23:56:36.735772 systemd[1]: Started cri-containerd-3155dd1748cb140880c350125fd9f2396fc80d0780ab5ecf3b855641d1465a20.scope - libcontainer container 3155dd1748cb140880c350125fd9f2396fc80d0780ab5ecf3b855641d1465a20. Oct 30 23:56:36.791737 systemd[1]: Started cri-containerd-2c73f779db251d6a984350c3a656af75e9efc83efa5915a373513a39b087e005.scope - libcontainer container 2c73f779db251d6a984350c3a656af75e9efc83efa5915a373513a39b087e005. Oct 30 23:56:36.803796 kubelet[2323]: I1030 23:56:36.802223 2323 kubelet_node_status.go:75] "Attempting to register node" node="ci-4230-2-4-n-ab7d00e960" Oct 30 23:56:36.805213 kubelet[2323]: E1030 23:56:36.803800 2323 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://91.99.146.238:6443/api/v1/nodes\": dial tcp 91.99.146.238:6443: connect: connection refused" node="ci-4230-2-4-n-ab7d00e960" Oct 30 23:56:36.826594 containerd[1497]: time="2025-10-30T23:56:36.825857405Z" level=info msg="StartContainer for \"15989c524e322f4b9cd53526f640f913a41d20ecee417675969b79384ba1cacc\" returns successfully" Oct 30 23:56:36.834920 containerd[1497]: time="2025-10-30T23:56:36.834739935Z" level=info msg="StartContainer for \"3155dd1748cb140880c350125fd9f2396fc80d0780ab5ecf3b855641d1465a20\" returns successfully" Oct 30 23:56:36.895701 containerd[1497]: time="2025-10-30T23:56:36.895624577Z" level=info msg="StartContainer for \"2c73f779db251d6a984350c3a656af75e9efc83efa5915a373513a39b087e005\" returns successfully" Oct 30 23:56:37.240799 kubelet[2323]: E1030 23:56:37.240374 2323 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230-2-4-n-ab7d00e960\" not found" node="ci-4230-2-4-n-ab7d00e960" Oct 30 23:56:37.245074 kubelet[2323]: E1030 23:56:37.245028 2323 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230-2-4-n-ab7d00e960\" not found" node="ci-4230-2-4-n-ab7d00e960" Oct 30 23:56:37.258239 kubelet[2323]: E1030 23:56:37.258167 2323 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230-2-4-n-ab7d00e960\" not found" node="ci-4230-2-4-n-ab7d00e960" Oct 30 23:56:38.253941 kubelet[2323]: E1030 23:56:38.253886 2323 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230-2-4-n-ab7d00e960\" not found" node="ci-4230-2-4-n-ab7d00e960" Oct 30 23:56:38.254428 kubelet[2323]: E1030 23:56:38.254360 2323 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230-2-4-n-ab7d00e960\" not found" node="ci-4230-2-4-n-ab7d00e960" Oct 30 23:56:38.407979 kubelet[2323]: I1030 23:56:38.407927 2323 kubelet_node_status.go:75] "Attempting to register node" node="ci-4230-2-4-n-ab7d00e960" Oct 30 23:56:40.036495 kubelet[2323]: E1030 23:56:40.034357 2323 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230-2-4-n-ab7d00e960\" not found" 
node="ci-4230-2-4-n-ab7d00e960" Oct 30 23:56:40.250045 kubelet[2323]: E1030 23:56:40.249956 2323 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4230-2-4-n-ab7d00e960\" not found" node="ci-4230-2-4-n-ab7d00e960" Oct 30 23:56:40.327098 kubelet[2323]: E1030 23:56:40.326009 2323 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-4230-2-4-n-ab7d00e960.18736a270ad7f375 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4230-2-4-n-ab7d00e960,UID:ci-4230-2-4-n-ab7d00e960,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4230-2-4-n-ab7d00e960,},FirstTimestamp:2025-10-30 23:56:35.150189429 +0000 UTC m=+0.613348337,LastTimestamp:2025-10-30 23:56:35.150189429 +0000 UTC m=+0.613348337,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4230-2-4-n-ab7d00e960,}" Oct 30 23:56:40.397731 kubelet[2323]: E1030 23:56:40.397465 2323 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-4230-2-4-n-ab7d00e960.18736a270cfca789 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4230-2-4-n-ab7d00e960,UID:ci-4230-2-4-n-ab7d00e960,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:ci-4230-2-4-n-ab7d00e960,},FirstTimestamp:2025-10-30 23:56:35.186149257 +0000 UTC m=+0.649308165,LastTimestamp:2025-10-30 23:56:35.186149257 +0000 UTC m=+0.649308165,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4230-2-4-n-ab7d00e960,}" Oct 30 23:56:40.455505 kubelet[2323]: I1030 23:56:40.455220 2323 kubelet_node_status.go:78] "Successfully registered node" node="ci-4230-2-4-n-ab7d00e960" Oct 30 23:56:40.455505 kubelet[2323]: E1030 23:56:40.455276 2323 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ci-4230-2-4-n-ab7d00e960\": node \"ci-4230-2-4-n-ab7d00e960\" not found" Oct 30 23:56:40.475301 kubelet[2323]: I1030 23:56:40.474957 2323 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4230-2-4-n-ab7d00e960" Oct 30 23:56:40.512574 kubelet[2323]: E1030 23:56:40.512529 2323 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4230-2-4-n-ab7d00e960\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4230-2-4-n-ab7d00e960" Oct 30 23:56:40.512574 kubelet[2323]: I1030 23:56:40.512568 2323 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4230-2-4-n-ab7d00e960" Oct 30 23:56:40.518697 kubelet[2323]: E1030 23:56:40.518649 2323 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4230-2-4-n-ab7d00e960\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4230-2-4-n-ab7d00e960" Oct 30 23:56:40.518697 kubelet[2323]: I1030 23:56:40.518692 2323 kubelet.go:3194] "Creating a mirror pod for static pod" 
pod="kube-system/kube-scheduler-ci-4230-2-4-n-ab7d00e960" Oct 30 23:56:40.524028 kubelet[2323]: E1030 23:56:40.523934 2323 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4230-2-4-n-ab7d00e960\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4230-2-4-n-ab7d00e960" Oct 30 23:56:41.149299 kubelet[2323]: I1030 23:56:41.149236 2323 apiserver.go:52] "Watching apiserver" Oct 30 23:56:41.172391 kubelet[2323]: I1030 23:56:41.172244 2323 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Oct 30 23:56:43.017825 systemd[1]: Reload requested from client PID 2595 ('systemctl') (unit session-7.scope)... Oct 30 23:56:43.017849 systemd[1]: Reloading... Oct 30 23:56:43.196572 zram_generator::config[2649]: No configuration found. Oct 30 23:56:43.325862 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Oct 30 23:56:43.448398 systemd[1]: Reloading finished in 429 ms. Oct 30 23:56:43.483720 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Oct 30 23:56:43.497150 systemd[1]: kubelet.service: Deactivated successfully. Oct 30 23:56:43.497810 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 30 23:56:43.498066 systemd[1]: kubelet.service: Consumed 1.221s CPU time, 130.1M memory peak. Oct 30 23:56:43.507857 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 30 23:56:43.683781 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 30 23:56:43.693285 (kubelet)[2685]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Oct 30 23:56:43.766665 kubelet[2685]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 30 23:56:43.768494 kubelet[2685]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Oct 30 23:56:43.768494 kubelet[2685]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 30 23:56:43.768494 kubelet[2685]: I1030 23:56:43.767420 2685 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Oct 30 23:56:43.782867 kubelet[2685]: I1030 23:56:43.782820 2685 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Oct 30 23:56:43.783800 kubelet[2685]: I1030 23:56:43.783746 2685 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Oct 30 23:56:43.784185 kubelet[2685]: I1030 23:56:43.784156 2685 server.go:954] "Client rotation is on, will bootstrap in background" Oct 30 23:56:43.786570 kubelet[2685]: I1030 23:56:43.786523 2685 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Oct 30 23:56:43.791214 kubelet[2685]: I1030 23:56:43.790978 2685 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Oct 30 23:56:43.797736 kubelet[2685]: E1030 23:56:43.797684 2685 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Oct 30 23:56:43.799615 kubelet[2685]: I1030 23:56:43.797950 2685 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Oct 30 23:56:43.802616 kubelet[2685]: I1030 23:56:43.802571 2685 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Oct 30 23:56:43.804870 kubelet[2685]: I1030 23:56:43.804812 2685 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Oct 30 23:56:43.806378 kubelet[2685]: I1030 23:56:43.806083 2685 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4230-2-4-n-ab7d00e960","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Oct 30 23:56:43.806897 kubelet[2685]: I1030 23:56:43.806633 2685 topology_manager.go:138] "Creating topology manager with none policy" Oct 30 23:56:43.807011 kubelet[2685]: I1030 23:56:43.806995 2685 container_manager_linux.go:304] "Creating device plugin manager" Oct 30 23:56:43.807154 kubelet[2685]: I1030 23:56:43.807143 2685 state_mem.go:36] "Initialized new in-memory state store" Oct 30 23:56:43.807526 kubelet[2685]: I1030 23:56:43.807489 2685 kubelet.go:446] "Attempting to sync node with API server" Oct 30 23:56:43.807645 kubelet[2685]: I1030 23:56:43.807632 2685 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Oct 30 23:56:43.807714 kubelet[2685]: I1030 23:56:43.807704 2685 kubelet.go:352] "Adding apiserver pod source" Oct 30 23:56:43.807774 kubelet[2685]: I1030 23:56:43.807764 2685 apiserver.go:42] "Waiting for node sync 
before watching apiserver pods" Oct 30 23:56:43.812494 kubelet[2685]: I1030 23:56:43.810992 2685 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Oct 30 23:56:43.812494 kubelet[2685]: I1030 23:56:43.811873 2685 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Oct 30 23:56:43.812763 kubelet[2685]: I1030 23:56:43.812717 2685 watchdog_linux.go:99] "Systemd watchdog is not enabled" Oct 30 23:56:43.812811 kubelet[2685]: I1030 23:56:43.812798 2685 server.go:1287] "Started kubelet" Oct 30 23:56:43.820472 kubelet[2685]: I1030 23:56:43.818113 2685 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Oct 30 23:56:43.834652 kubelet[2685]: I1030 23:56:43.834582 2685 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Oct 30 23:56:43.835979 kubelet[2685]: I1030 23:56:43.835924 2685 server.go:479] "Adding debug handlers to kubelet server" Oct 30 23:56:43.837962 kubelet[2685]: I1030 23:56:43.837247 2685 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Oct 30 23:56:43.838627 kubelet[2685]: I1030 23:56:43.837905 2685 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Oct 30 23:56:43.842989 kubelet[2685]: I1030 23:56:43.842172 2685 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Oct 30 23:56:43.846491 kubelet[2685]: I1030 23:56:43.844069 2685 volume_manager.go:297] "Starting Kubelet Volume Manager" Oct 30 23:56:43.846491 kubelet[2685]: E1030 23:56:43.844610 2685 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4230-2-4-n-ab7d00e960\" not found" Oct 30 23:56:43.850466 kubelet[2685]: I1030 23:56:43.849265 2685 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Oct 30 23:56:43.850466 kubelet[2685]: I1030 23:56:43.849630 2685 reconciler.go:26] "Reconciler: start to sync state" Oct 30 23:56:43.853462 kubelet[2685]: I1030 23:56:43.852470 2685 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Oct 30 23:56:43.856459 kubelet[2685]: I1030 23:56:43.854105 2685 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Oct 30 23:56:43.856459 kubelet[2685]: I1030 23:56:43.854157 2685 status_manager.go:227] "Starting to sync pod status with apiserver" Oct 30 23:56:43.856459 kubelet[2685]: I1030 23:56:43.854179 2685 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Oct 30 23:56:43.856459 kubelet[2685]: I1030 23:56:43.854187 2685 kubelet.go:2382] "Starting kubelet main sync loop" Oct 30 23:56:43.856459 kubelet[2685]: E1030 23:56:43.854239 2685 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Oct 30 23:56:43.868599 kubelet[2685]: I1030 23:56:43.868539 2685 factory.go:221] Registration of the systemd container factory successfully Oct 30 23:56:43.868783 kubelet[2685]: I1030 23:56:43.868731 2685 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Oct 30 23:56:43.873501 kubelet[2685]: E1030 23:56:43.872413 2685 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Oct 30 23:56:43.874068 kubelet[2685]: I1030 23:56:43.874023 2685 factory.go:221] Registration of the containerd container factory successfully Oct 30 23:56:43.951000 kubelet[2685]: I1030 23:56:43.950817 2685 cpu_manager.go:221] "Starting CPU manager" policy="none" Oct 30 23:56:43.951298 kubelet[2685]: I1030 23:56:43.951273 2685 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Oct 30 23:56:43.951425 kubelet[2685]: I1030 23:56:43.951410 2685 state_mem.go:36] "Initialized new in-memory state store" Oct 30 23:56:43.952128 kubelet[2685]: I1030 23:56:43.952001 2685 state_mem.go:88] "Updated default CPUSet" cpuSet="" Oct 30 23:56:43.952655 kubelet[2685]: I1030 23:56:43.952550 2685 state_mem.go:96] "Updated CPUSet assignments" assignments={} Oct 30 23:56:43.952793 kubelet[2685]: I1030 23:56:43.952776 2685 policy_none.go:49] "None policy: Start" Oct 30 23:56:43.952886 kubelet[2685]: I1030 23:56:43.952872 2685 memory_manager.go:186] "Starting memorymanager" policy="None" Oct 30 23:56:43.952996 kubelet[2685]: I1030 23:56:43.952980 2685 state_mem.go:35] "Initializing new in-memory state store" Oct 30 23:56:43.954684 kubelet[2685]: I1030 23:56:43.953297 2685 state_mem.go:75] "Updated machine memory state" Oct 30 23:56:43.954684 kubelet[2685]: E1030 23:56:43.954302 2685 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Oct 30 23:56:43.962133 kubelet[2685]: I1030 23:56:43.962083 2685 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Oct 30 23:56:43.962360 kubelet[2685]: I1030 23:56:43.962340 2685 eviction_manager.go:189] "Eviction manager: starting control loop" Oct 30 23:56:43.962403 kubelet[2685]: I1030 23:56:43.962361 2685 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Oct 30 23:56:43.963279 kubelet[2685]: I1030 23:56:43.963152 2685 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Oct 30 23:56:43.970551 kubelet[2685]: E1030 23:56:43.967991 2685 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Oct 30 23:56:44.017783 sudo[2720]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Oct 30 23:56:44.018169 sudo[2720]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Oct 30 23:56:44.078163 kubelet[2685]: I1030 23:56:44.077965 2685 kubelet_node_status.go:75] "Attempting to register node" node="ci-4230-2-4-n-ab7d00e960" Oct 30 23:56:44.107897 kubelet[2685]: I1030 23:56:44.107633 2685 kubelet_node_status.go:124] "Node was previously registered" node="ci-4230-2-4-n-ab7d00e960" Oct 30 23:56:44.107897 kubelet[2685]: I1030 23:56:44.107792 2685 kubelet_node_status.go:78] "Successfully registered node" node="ci-4230-2-4-n-ab7d00e960" Oct 30 23:56:44.159633 kubelet[2685]: I1030 23:56:44.158314 2685 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4230-2-4-n-ab7d00e960" Oct 30 23:56:44.159633 kubelet[2685]: I1030 23:56:44.158565 2685 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4230-2-4-n-ab7d00e960" Oct 30 23:56:44.159633 kubelet[2685]: I1030 23:56:44.159168 2685 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4230-2-4-n-ab7d00e960" Oct 30 23:56:44.252283 kubelet[2685]: I1030 23:56:44.251967 2685 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/58449a187481756fd356a40e22bdaf95-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4230-2-4-n-ab7d00e960\" (UID: \"58449a187481756fd356a40e22bdaf95\") " pod="kube-system/kube-controller-manager-ci-4230-2-4-n-ab7d00e960" Oct 30 23:56:44.253072 kubelet[2685]: I1030 23:56:44.253025 2685 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e8698ef9915cad84386c5a5e97817109-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4230-2-4-n-ab7d00e960\" (UID: \"e8698ef9915cad84386c5a5e97817109\") " pod="kube-system/kube-apiserver-ci-4230-2-4-n-ab7d00e960" Oct 30 23:56:44.253236 kubelet[2685]: I1030 23:56:44.253221 2685 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/58449a187481756fd356a40e22bdaf95-flexvolume-dir\") pod \"kube-controller-manager-ci-4230-2-4-n-ab7d00e960\" (UID: \"58449a187481756fd356a40e22bdaf95\") " pod="kube-system/kube-controller-manager-ci-4230-2-4-n-ab7d00e960" Oct 30 23:56:44.253343 kubelet[2685]: I1030 23:56:44.253330 2685 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/58449a187481756fd356a40e22bdaf95-k8s-certs\") pod \"kube-controller-manager-ci-4230-2-4-n-ab7d00e960\" (UID: \"58449a187481756fd356a40e22bdaf95\") " pod="kube-system/kube-controller-manager-ci-4230-2-4-n-ab7d00e960" Oct 30 23:56:44.253509 kubelet[2685]: I1030 23:56:44.253493 2685 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/58449a187481756fd356a40e22bdaf95-kubeconfig\") pod \"kube-controller-manager-ci-4230-2-4-n-ab7d00e960\" (UID: \"58449a187481756fd356a40e22bdaf95\") " pod="kube-system/kube-controller-manager-ci-4230-2-4-n-ab7d00e960" Oct 30 23:56:44.253648 kubelet[2685]: I1030 
23:56:44.253597 2685 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/dde51cac144a74479aaf11667219fa6d-kubeconfig\") pod \"kube-scheduler-ci-4230-2-4-n-ab7d00e960\" (UID: \"dde51cac144a74479aaf11667219fa6d\") " pod="kube-system/kube-scheduler-ci-4230-2-4-n-ab7d00e960" Oct 30 23:56:44.253648 kubelet[2685]: I1030 23:56:44.253621 2685 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e8698ef9915cad84386c5a5e97817109-ca-certs\") pod \"kube-apiserver-ci-4230-2-4-n-ab7d00e960\" (UID: \"e8698ef9915cad84386c5a5e97817109\") " pod="kube-system/kube-apiserver-ci-4230-2-4-n-ab7d00e960" Oct 30 23:56:44.253793 kubelet[2685]: I1030 23:56:44.253637 2685 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e8698ef9915cad84386c5a5e97817109-k8s-certs\") pod \"kube-apiserver-ci-4230-2-4-n-ab7d00e960\" (UID: \"e8698ef9915cad84386c5a5e97817109\") " pod="kube-system/kube-apiserver-ci-4230-2-4-n-ab7d00e960" Oct 30 23:56:44.253938 kubelet[2685]: I1030 23:56:44.253886 2685 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/58449a187481756fd356a40e22bdaf95-ca-certs\") pod \"kube-controller-manager-ci-4230-2-4-n-ab7d00e960\" (UID: \"58449a187481756fd356a40e22bdaf95\") " pod="kube-system/kube-controller-manager-ci-4230-2-4-n-ab7d00e960" Oct 30 23:56:44.567981 sudo[2720]: pam_unix(sudo:session): session closed for user root Oct 30 23:56:44.810512 kubelet[2685]: I1030 23:56:44.810250 2685 apiserver.go:52] "Watching apiserver" Oct 30 23:56:44.850488 kubelet[2685]: I1030 23:56:44.850287 2685 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Oct 30 23:56:44.919319 kubelet[2685]: I1030 23:56:44.919267 2685 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4230-2-4-n-ab7d00e960" Oct 30 23:56:44.940397 kubelet[2685]: E1030 23:56:44.939825 2685 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4230-2-4-n-ab7d00e960\" already exists" pod="kube-system/kube-scheduler-ci-4230-2-4-n-ab7d00e960" Oct 30 23:56:44.984598 kubelet[2685]: I1030 23:56:44.984127 2685 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4230-2-4-n-ab7d00e960" podStartSLOduration=0.984006536 podStartE2EDuration="984.006536ms" podCreationTimestamp="2025-10-30 23:56:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-30 23:56:44.982268831 +0000 UTC m=+1.282061767" watchObservedRunningTime="2025-10-30 23:56:44.984006536 +0000 UTC m=+1.283799432" Oct 30 23:56:44.986690 kubelet[2685]: I1030 23:56:44.986610 2685 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4230-2-4-n-ab7d00e960" podStartSLOduration=0.98658409 podStartE2EDuration="986.58409ms" podCreationTimestamp="2025-10-30 23:56:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-30 23:56:44.962258572 +0000 UTC m=+1.262051508" watchObservedRunningTime="2025-10-30 23:56:44.98658409 +0000 UTC m=+1.286377026" Oct 30 23:56:45.029358 
kubelet[2685]: I1030 23:56:45.029262 2685 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4230-2-4-n-ab7d00e960" podStartSLOduration=1.029228238 podStartE2EDuration="1.029228238s" podCreationTimestamp="2025-10-30 23:56:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-30 23:56:45.004483284 +0000 UTC m=+1.304276260" watchObservedRunningTime="2025-10-30 23:56:45.029228238 +0000 UTC m=+1.329021214" Oct 30 23:56:46.573229 sudo[1760]: pam_unix(sudo:session): session closed for user root Oct 30 23:56:46.728753 sshd[1759]: Connection closed by 139.178.89.65 port 41786 Oct 30 23:56:46.729743 sshd-session[1757]: pam_unix(sshd:session): session closed for user core Oct 30 23:56:46.736913 systemd[1]: sshd@9-91.99.146.238:22-139.178.89.65:41786.service: Deactivated successfully. Oct 30 23:56:46.742446 systemd[1]: session-7.scope: Deactivated successfully. Oct 30 23:56:46.743212 systemd[1]: session-7.scope: Consumed 8.059s CPU time, 262.2M memory peak. Oct 30 23:56:46.747081 systemd-logind[1480]: Session 7 logged out. Waiting for processes to exit. Oct 30 23:56:46.749315 systemd-logind[1480]: Removed session 7. Oct 30 23:56:47.109552 kubelet[2685]: I1030 23:56:47.108962 2685 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Oct 30 23:56:47.110394 containerd[1497]: time="2025-10-30T23:56:47.110164296Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Oct 30 23:56:47.111600 kubelet[2685]: I1030 23:56:47.110613 2685 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Oct 30 23:56:48.209501 systemd[1]: Created slice kubepods-besteffort-pod46b9380c_6b8a_48ef_97dc_c63a6529d8ca.slice - libcontainer container kubepods-besteffort-pod46b9380c_6b8a_48ef_97dc_c63a6529d8ca.slice. Oct 30 23:56:48.232833 systemd[1]: Created slice kubepods-burstable-podedd46b38_8dc5_483c_8162_68a7efe678ec.slice - libcontainer container kubepods-burstable-podedd46b38_8dc5_483c_8162_68a7efe678ec.slice. 
Oct 30 23:56:48.284560 kubelet[2685]: I1030 23:56:48.284411 2685 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hjp4t\" (UniqueName: \"kubernetes.io/projected/edd46b38-8dc5-483c-8162-68a7efe678ec-kube-api-access-hjp4t\") pod \"cilium-t5pws\" (UID: \"edd46b38-8dc5-483c-8162-68a7efe678ec\") " pod="kube-system/cilium-t5pws" Oct 30 23:56:48.284560 kubelet[2685]: I1030 23:56:48.284505 2685 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/edd46b38-8dc5-483c-8162-68a7efe678ec-bpf-maps\") pod \"cilium-t5pws\" (UID: \"edd46b38-8dc5-483c-8162-68a7efe678ec\") " pod="kube-system/cilium-t5pws" Oct 30 23:56:48.284560 kubelet[2685]: I1030 23:56:48.284535 2685 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/edd46b38-8dc5-483c-8162-68a7efe678ec-cilium-cgroup\") pod \"cilium-t5pws\" (UID: \"edd46b38-8dc5-483c-8162-68a7efe678ec\") " pod="kube-system/cilium-t5pws" Oct 30 23:56:48.284560 kubelet[2685]: I1030 23:56:48.284551 2685 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/edd46b38-8dc5-483c-8162-68a7efe678ec-cni-path\") pod \"cilium-t5pws\" (UID: \"edd46b38-8dc5-483c-8162-68a7efe678ec\") " pod="kube-system/cilium-t5pws" Oct 30 23:56:48.284560 kubelet[2685]: I1030 23:56:48.284568 2685 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/46b9380c-6b8a-48ef-97dc-c63a6529d8ca-xtables-lock\") pod \"kube-proxy-jv6f8\" (UID: \"46b9380c-6b8a-48ef-97dc-c63a6529d8ca\") " pod="kube-system/kube-proxy-jv6f8" Oct 30 23:56:48.285332 kubelet[2685]: I1030 23:56:48.284587 2685 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/46b9380c-6b8a-48ef-97dc-c63a6529d8ca-lib-modules\") pod \"kube-proxy-jv6f8\" (UID: \"46b9380c-6b8a-48ef-97dc-c63a6529d8ca\") " pod="kube-system/kube-proxy-jv6f8" Oct 30 23:56:48.285332 kubelet[2685]: I1030 23:56:48.284603 2685 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b4w7c\" (UniqueName: \"kubernetes.io/projected/46b9380c-6b8a-48ef-97dc-c63a6529d8ca-kube-api-access-b4w7c\") pod \"kube-proxy-jv6f8\" (UID: \"46b9380c-6b8a-48ef-97dc-c63a6529d8ca\") " pod="kube-system/kube-proxy-jv6f8" Oct 30 23:56:48.285332 kubelet[2685]: I1030 23:56:48.284622 2685 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/edd46b38-8dc5-483c-8162-68a7efe678ec-host-proc-sys-kernel\") pod \"cilium-t5pws\" (UID: \"edd46b38-8dc5-483c-8162-68a7efe678ec\") " pod="kube-system/cilium-t5pws" Oct 30 23:56:48.285332 kubelet[2685]: I1030 23:56:48.284638 2685 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/edd46b38-8dc5-483c-8162-68a7efe678ec-hubble-tls\") pod \"cilium-t5pws\" (UID: \"edd46b38-8dc5-483c-8162-68a7efe678ec\") " pod="kube-system/cilium-t5pws" Oct 30 23:56:48.285332 kubelet[2685]: I1030 23:56:48.284661 2685 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/edd46b38-8dc5-483c-8162-68a7efe678ec-etc-cni-netd\") pod \"cilium-t5pws\" (UID: \"edd46b38-8dc5-483c-8162-68a7efe678ec\") " pod="kube-system/cilium-t5pws" Oct 30 23:56:48.285546 kubelet[2685]: I1030 23:56:48.284681 2685 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/edd46b38-8dc5-483c-8162-68a7efe678ec-xtables-lock\") pod \"cilium-t5pws\" (UID: \"edd46b38-8dc5-483c-8162-68a7efe678ec\") " pod="kube-system/cilium-t5pws" Oct 30 23:56:48.285546 kubelet[2685]: I1030 23:56:48.284698 2685 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/edd46b38-8dc5-483c-8162-68a7efe678ec-host-proc-sys-net\") pod \"cilium-t5pws\" (UID: \"edd46b38-8dc5-483c-8162-68a7efe678ec\") " pod="kube-system/cilium-t5pws" Oct 30 23:56:48.285546 kubelet[2685]: I1030 23:56:48.284714 2685 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/edd46b38-8dc5-483c-8162-68a7efe678ec-hostproc\") pod \"cilium-t5pws\" (UID: \"edd46b38-8dc5-483c-8162-68a7efe678ec\") " pod="kube-system/cilium-t5pws" Oct 30 23:56:48.285546 kubelet[2685]: I1030 23:56:48.284729 2685 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/edd46b38-8dc5-483c-8162-68a7efe678ec-lib-modules\") pod \"cilium-t5pws\" (UID: \"edd46b38-8dc5-483c-8162-68a7efe678ec\") " pod="kube-system/cilium-t5pws" Oct 30 23:56:48.285546 kubelet[2685]: I1030 23:56:48.284744 2685 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/edd46b38-8dc5-483c-8162-68a7efe678ec-cilium-config-path\") pod \"cilium-t5pws\" (UID: \"edd46b38-8dc5-483c-8162-68a7efe678ec\") " pod="kube-system/cilium-t5pws" Oct 30 23:56:48.285546 kubelet[2685]: I1030 23:56:48.284759 2685 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/46b9380c-6b8a-48ef-97dc-c63a6529d8ca-kube-proxy\") pod \"kube-proxy-jv6f8\" (UID: \"46b9380c-6b8a-48ef-97dc-c63a6529d8ca\") " pod="kube-system/kube-proxy-jv6f8" Oct 30 23:56:48.285746 kubelet[2685]: I1030 23:56:48.284775 2685 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/edd46b38-8dc5-483c-8162-68a7efe678ec-cilium-run\") pod \"cilium-t5pws\" (UID: \"edd46b38-8dc5-483c-8162-68a7efe678ec\") " pod="kube-system/cilium-t5pws" Oct 30 23:56:48.285746 kubelet[2685]: I1030 23:56:48.284792 2685 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/edd46b38-8dc5-483c-8162-68a7efe678ec-clustermesh-secrets\") pod \"cilium-t5pws\" (UID: \"edd46b38-8dc5-483c-8162-68a7efe678ec\") " pod="kube-system/cilium-t5pws" Oct 30 23:56:48.335427 systemd[1]: Created slice kubepods-besteffort-pod408159b7_4f70_4ef6_9b26_9e1565e3a2ea.slice - libcontainer container kubepods-besteffort-pod408159b7_4f70_4ef6_9b26_9e1565e3a2ea.slice. 
Oct 30 23:56:48.385092 kubelet[2685]: I1030 23:56:48.385042 2685 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/408159b7-4f70-4ef6-9b26-9e1565e3a2ea-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-wd6jh\" (UID: \"408159b7-4f70-4ef6-9b26-9e1565e3a2ea\") " pod="kube-system/cilium-operator-6c4d7847fc-wd6jh" Oct 30 23:56:48.386476 kubelet[2685]: I1030 23:56:48.386130 2685 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nrb4w\" (UniqueName: \"kubernetes.io/projected/408159b7-4f70-4ef6-9b26-9e1565e3a2ea-kube-api-access-nrb4w\") pod \"cilium-operator-6c4d7847fc-wd6jh\" (UID: \"408159b7-4f70-4ef6-9b26-9e1565e3a2ea\") " pod="kube-system/cilium-operator-6c4d7847fc-wd6jh" Oct 30 23:56:48.524147 containerd[1497]: time="2025-10-30T23:56:48.522064103Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-jv6f8,Uid:46b9380c-6b8a-48ef-97dc-c63a6529d8ca,Namespace:kube-system,Attempt:0,}" Oct 30 23:56:48.543837 containerd[1497]: time="2025-10-30T23:56:48.543636523Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-t5pws,Uid:edd46b38-8dc5-483c-8162-68a7efe678ec,Namespace:kube-system,Attempt:0,}" Oct 30 23:56:48.576929 containerd[1497]: time="2025-10-30T23:56:48.576771350Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 30 23:56:48.576929 containerd[1497]: time="2025-10-30T23:56:48.576849640Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 30 23:56:48.576929 containerd[1497]: time="2025-10-30T23:56:48.576862721Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 30 23:56:48.577373 containerd[1497]: time="2025-10-30T23:56:48.577028782Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 30 23:56:48.592915 containerd[1497]: time="2025-10-30T23:56:48.592351500Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 30 23:56:48.592915 containerd[1497]: time="2025-10-30T23:56:48.592427829Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 30 23:56:48.592915 containerd[1497]: time="2025-10-30T23:56:48.592440151Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 30 23:56:48.592915 containerd[1497]: time="2025-10-30T23:56:48.592553565Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 30 23:56:48.605840 systemd[1]: Started cri-containerd-a21ce080acc07691416f343c054c5938dc757ef3840270ff084ae2e053a6b88f.scope - libcontainer container a21ce080acc07691416f343c054c5938dc757ef3840270ff084ae2e053a6b88f. Oct 30 23:56:48.630118 systemd[1]: Started cri-containerd-3ca0e67990709c9b7a89574ad8b7dd0e1d8abbcd893a1d35e6321bd15884e2f9.scope - libcontainer container 3ca0e67990709c9b7a89574ad8b7dd0e1d8abbcd893a1d35e6321bd15884e2f9. 
Oct 30 23:56:48.645828 containerd[1497]: time="2025-10-30T23:56:48.645430502Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-wd6jh,Uid:408159b7-4f70-4ef6-9b26-9e1565e3a2ea,Namespace:kube-system,Attempt:0,}" Oct 30 23:56:48.656317 containerd[1497]: time="2025-10-30T23:56:48.656253777Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-jv6f8,Uid:46b9380c-6b8a-48ef-97dc-c63a6529d8ca,Namespace:kube-system,Attempt:0,} returns sandbox id \"a21ce080acc07691416f343c054c5938dc757ef3840270ff084ae2e053a6b88f\"" Oct 30 23:56:48.667838 containerd[1497]: time="2025-10-30T23:56:48.667572633Z" level=info msg="CreateContainer within sandbox \"a21ce080acc07691416f343c054c5938dc757ef3840270ff084ae2e053a6b88f\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Oct 30 23:56:48.703437 containerd[1497]: time="2025-10-30T23:56:48.703351631Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-t5pws,Uid:edd46b38-8dc5-483c-8162-68a7efe678ec,Namespace:kube-system,Attempt:0,} returns sandbox id \"3ca0e67990709c9b7a89574ad8b7dd0e1d8abbcd893a1d35e6321bd15884e2f9\"" Oct 30 23:56:48.709940 containerd[1497]: time="2025-10-30T23:56:48.708516358Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Oct 30 23:56:48.717987 containerd[1497]: time="2025-10-30T23:56:48.717651781Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 30 23:56:48.717987 containerd[1497]: time="2025-10-30T23:56:48.717744472Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 30 23:56:48.717987 containerd[1497]: time="2025-10-30T23:56:48.717768555Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 30 23:56:48.717987 containerd[1497]: time="2025-10-30T23:56:48.717879569Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 30 23:56:48.725716 containerd[1497]: time="2025-10-30T23:56:48.725646501Z" level=info msg="CreateContainer within sandbox \"a21ce080acc07691416f343c054c5938dc757ef3840270ff084ae2e053a6b88f\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"d70b8defd3d960c51d8808108cc108dbedf70a6320d056e6bb32af439ccab5c8\"" Oct 30 23:56:48.729611 containerd[1497]: time="2025-10-30T23:56:48.729540669Z" level=info msg="StartContainer for \"d70b8defd3d960c51d8808108cc108dbedf70a6320d056e6bb32af439ccab5c8\"" Oct 30 23:56:48.754895 systemd[1]: Started cri-containerd-35d7ebff8c841970bd4ed4a079e1febefa412b7e92acba1e0e3f25084385d24e.scope - libcontainer container 35d7ebff8c841970bd4ed4a079e1febefa412b7e92acba1e0e3f25084385d24e. Oct 30 23:56:48.791638 systemd[1]: Started cri-containerd-d70b8defd3d960c51d8808108cc108dbedf70a6320d056e6bb32af439ccab5c8.scope - libcontainer container d70b8defd3d960c51d8808108cc108dbedf70a6320d056e6bb32af439ccab5c8. 
Oct 30 23:56:48.828877 containerd[1497]: time="2025-10-30T23:56:48.828826454Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-wd6jh,Uid:408159b7-4f70-4ef6-9b26-9e1565e3a2ea,Namespace:kube-system,Attempt:0,} returns sandbox id \"35d7ebff8c841970bd4ed4a079e1febefa412b7e92acba1e0e3f25084385d24e\"" Oct 30 23:56:48.861425 containerd[1497]: time="2025-10-30T23:56:48.861251912Z" level=info msg="StartContainer for \"d70b8defd3d960c51d8808108cc108dbedf70a6320d056e6bb32af439ccab5c8\" returns successfully" Oct 30 23:56:48.960528 kubelet[2685]: I1030 23:56:48.959758 2685 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-jv6f8" podStartSLOduration=0.959727876 podStartE2EDuration="959.727876ms" podCreationTimestamp="2025-10-30 23:56:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-30 23:56:48.959135562 +0000 UTC m=+5.258928538" watchObservedRunningTime="2025-10-30 23:56:48.959727876 +0000 UTC m=+5.259520812" Oct 30 23:56:53.085183 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1251997612.mount: Deactivated successfully. Oct 30 23:56:54.729544 containerd[1497]: time="2025-10-30T23:56:54.729416841Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 23:56:54.733325 containerd[1497]: time="2025-10-30T23:56:54.733241564Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710" Oct 30 23:56:54.735608 containerd[1497]: time="2025-10-30T23:56:54.735506579Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 23:56:54.737726 containerd[1497]: time="2025-10-30T23:56:54.737535932Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 6.028948206s" Oct 30 23:56:54.737726 containerd[1497]: time="2025-10-30T23:56:54.737596457Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Oct 30 23:56:54.742138 containerd[1497]: time="2025-10-30T23:56:54.741645962Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Oct 30 23:56:54.744550 containerd[1497]: time="2025-10-30T23:56:54.744499033Z" level=info msg="CreateContainer within sandbox \"3ca0e67990709c9b7a89574ad8b7dd0e1d8abbcd893a1d35e6321bd15884e2f9\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Oct 30 23:56:54.769121 containerd[1497]: time="2025-10-30T23:56:54.769052683Z" level=info msg="CreateContainer within sandbox \"3ca0e67990709c9b7a89574ad8b7dd0e1d8abbcd893a1d35e6321bd15884e2f9\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id 
\"c5fc189db80a9abfaa2e6854d15abd452fbf582b502c63c9a8552030337c4cd3\"" Oct 30 23:56:54.771239 containerd[1497]: time="2025-10-30T23:56:54.770082141Z" level=info msg="StartContainer for \"c5fc189db80a9abfaa2e6854d15abd452fbf582b502c63c9a8552030337c4cd3\"" Oct 30 23:56:54.811980 systemd[1]: Started cri-containerd-c5fc189db80a9abfaa2e6854d15abd452fbf582b502c63c9a8552030337c4cd3.scope - libcontainer container c5fc189db80a9abfaa2e6854d15abd452fbf582b502c63c9a8552030337c4cd3. Oct 30 23:56:54.854540 containerd[1497]: time="2025-10-30T23:56:54.853435973Z" level=info msg="StartContainer for \"c5fc189db80a9abfaa2e6854d15abd452fbf582b502c63c9a8552030337c4cd3\" returns successfully" Oct 30 23:56:54.871346 systemd[1]: cri-containerd-c5fc189db80a9abfaa2e6854d15abd452fbf582b502c63c9a8552030337c4cd3.scope: Deactivated successfully. Oct 30 23:56:54.999984 containerd[1497]: time="2025-10-30T23:56:54.999748941Z" level=info msg="shim disconnected" id=c5fc189db80a9abfaa2e6854d15abd452fbf582b502c63c9a8552030337c4cd3 namespace=k8s.io Oct 30 23:56:54.999984 containerd[1497]: time="2025-10-30T23:56:54.999827748Z" level=warning msg="cleaning up after shim disconnected" id=c5fc189db80a9abfaa2e6854d15abd452fbf582b502c63c9a8552030337c4cd3 namespace=k8s.io Oct 30 23:56:54.999984 containerd[1497]: time="2025-10-30T23:56:54.999837349Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 30 23:56:55.761100 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c5fc189db80a9abfaa2e6854d15abd452fbf582b502c63c9a8552030337c4cd3-rootfs.mount: Deactivated successfully. Oct 30 23:56:55.984063 containerd[1497]: time="2025-10-30T23:56:55.983385580Z" level=info msg="CreateContainer within sandbox \"3ca0e67990709c9b7a89574ad8b7dd0e1d8abbcd893a1d35e6321bd15884e2f9\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Oct 30 23:56:56.016801 containerd[1497]: time="2025-10-30T23:56:56.016437365Z" level=info msg="CreateContainer within sandbox \"3ca0e67990709c9b7a89574ad8b7dd0e1d8abbcd893a1d35e6321bd15884e2f9\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"4d7b55590408901de19b4a439fda5ed6e870eece4eee762a0c3e761e870b2cc2\"" Oct 30 23:56:56.018144 containerd[1497]: time="2025-10-30T23:56:56.018039705Z" level=info msg="StartContainer for \"4d7b55590408901de19b4a439fda5ed6e870eece4eee762a0c3e761e870b2cc2\"" Oct 30 23:56:56.068834 systemd[1]: Started cri-containerd-4d7b55590408901de19b4a439fda5ed6e870eece4eee762a0c3e761e870b2cc2.scope - libcontainer container 4d7b55590408901de19b4a439fda5ed6e870eece4eee762a0c3e761e870b2cc2. Oct 30 23:56:56.123635 containerd[1497]: time="2025-10-30T23:56:56.123520420Z" level=info msg="StartContainer for \"4d7b55590408901de19b4a439fda5ed6e870eece4eee762a0c3e761e870b2cc2\" returns successfully" Oct 30 23:56:56.148266 systemd[1]: systemd-sysctl.service: Deactivated successfully. Oct 30 23:56:56.148948 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Oct 30 23:56:56.150210 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Oct 30 23:56:56.161376 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Oct 30 23:56:56.166098 systemd[1]: cri-containerd-4d7b55590408901de19b4a439fda5ed6e870eece4eee762a0c3e761e870b2cc2.scope: Deactivated successfully. Oct 30 23:56:56.207935 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Oct 30 23:56:56.226358 containerd[1497]: time="2025-10-30T23:56:56.226264778Z" level=info msg="shim disconnected" id=4d7b55590408901de19b4a439fda5ed6e870eece4eee762a0c3e761e870b2cc2 namespace=k8s.io Oct 30 23:56:56.226890 containerd[1497]: time="2025-10-30T23:56:56.226851829Z" level=warning msg="cleaning up after shim disconnected" id=4d7b55590408901de19b4a439fda5ed6e870eece4eee762a0c3e761e870b2cc2 namespace=k8s.io Oct 30 23:56:56.227158 containerd[1497]: time="2025-10-30T23:56:56.227132213Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 30 23:56:56.725722 containerd[1497]: time="2025-10-30T23:56:56.725652953Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 23:56:56.726755 containerd[1497]: time="2025-10-30T23:56:56.726691564Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306" Oct 30 23:56:56.728732 containerd[1497]: time="2025-10-30T23:56:56.728680857Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 23:56:56.731066 containerd[1497]: time="2025-10-30T23:56:56.730245674Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 1.988528906s" Oct 30 23:56:56.731066 containerd[1497]: time="2025-10-30T23:56:56.730302079Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Oct 30 23:56:56.736223 containerd[1497]: time="2025-10-30T23:56:56.736141188Z" level=info msg="CreateContainer within sandbox \"35d7ebff8c841970bd4ed4a079e1febefa412b7e92acba1e0e3f25084385d24e\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Oct 30 23:56:56.761003 systemd[1]: run-containerd-runc-k8s.io-4d7b55590408901de19b4a439fda5ed6e870eece4eee762a0c3e761e870b2cc2-runc.f3fWy3.mount: Deactivated successfully. Oct 30 23:56:56.761641 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4d7b55590408901de19b4a439fda5ed6e870eece4eee762a0c3e761e870b2cc2-rootfs.mount: Deactivated successfully. Oct 30 23:56:56.776767 containerd[1497]: time="2025-10-30T23:56:56.776410538Z" level=info msg="CreateContainer within sandbox \"35d7ebff8c841970bd4ed4a079e1febefa412b7e92acba1e0e3f25084385d24e\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"92c244b1c1f51e9e87b77276810f2c9222125ac98aa46e3e543ac08dc24f4609\"" Oct 30 23:56:56.781704 containerd[1497]: time="2025-10-30T23:56:56.781142431Z" level=info msg="StartContainer for \"92c244b1c1f51e9e87b77276810f2c9222125ac98aa46e3e543ac08dc24f4609\"" Oct 30 23:56:56.822030 systemd[1]: run-containerd-runc-k8s.io-92c244b1c1f51e9e87b77276810f2c9222125ac98aa46e3e543ac08dc24f4609-runc.OVXnIF.mount: Deactivated successfully. 
Oct 30 23:56:56.834786 systemd[1]: Started cri-containerd-92c244b1c1f51e9e87b77276810f2c9222125ac98aa46e3e543ac08dc24f4609.scope - libcontainer container 92c244b1c1f51e9e87b77276810f2c9222125ac98aa46e3e543ac08dc24f4609. Oct 30 23:56:56.872333 containerd[1497]: time="2025-10-30T23:56:56.872153245Z" level=info msg="StartContainer for \"92c244b1c1f51e9e87b77276810f2c9222125ac98aa46e3e543ac08dc24f4609\" returns successfully" Oct 30 23:56:56.999437 containerd[1497]: time="2025-10-30T23:56:56.999175799Z" level=info msg="CreateContainer within sandbox \"3ca0e67990709c9b7a89574ad8b7dd0e1d8abbcd893a1d35e6321bd15884e2f9\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Oct 30 23:56:57.044274 containerd[1497]: time="2025-10-30T23:56:57.044103002Z" level=info msg="CreateContainer within sandbox \"3ca0e67990709c9b7a89574ad8b7dd0e1d8abbcd893a1d35e6321bd15884e2f9\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"dc8774a63ec812e8336d34fa63735ebed290ea430d0a5d8f0d43c2e0d3dc6c34\"" Oct 30 23:56:57.046895 containerd[1497]: time="2025-10-30T23:56:57.046261623Z" level=info msg="StartContainer for \"dc8774a63ec812e8336d34fa63735ebed290ea430d0a5d8f0d43c2e0d3dc6c34\"" Oct 30 23:56:57.116418 kubelet[2685]: I1030 23:56:57.116236 2685 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-wd6jh" podStartSLOduration=1.217181867 podStartE2EDuration="9.116197514s" podCreationTimestamp="2025-10-30 23:56:48 +0000 UTC" firstStartedPulling="2025-10-30 23:56:48.833133353 +0000 UTC m=+5.132926289" lastFinishedPulling="2025-10-30 23:56:56.732149 +0000 UTC m=+13.031941936" observedRunningTime="2025-10-30 23:56:57.027800638 +0000 UTC m=+13.327593654" watchObservedRunningTime="2025-10-30 23:56:57.116197514 +0000 UTC m=+13.415990450" Oct 30 23:56:57.122834 systemd[1]: Started cri-containerd-dc8774a63ec812e8336d34fa63735ebed290ea430d0a5d8f0d43c2e0d3dc6c34.scope - libcontainer container dc8774a63ec812e8336d34fa63735ebed290ea430d0a5d8f0d43c2e0d3dc6c34. Oct 30 23:56:57.180522 containerd[1497]: time="2025-10-30T23:56:57.179941047Z" level=info msg="StartContainer for \"dc8774a63ec812e8336d34fa63735ebed290ea430d0a5d8f0d43c2e0d3dc6c34\" returns successfully" Oct 30 23:56:57.199263 systemd[1]: cri-containerd-dc8774a63ec812e8336d34fa63735ebed290ea430d0a5d8f0d43c2e0d3dc6c34.scope: Deactivated successfully. 
Oct 30 23:56:57.281052 containerd[1497]: time="2025-10-30T23:56:57.280812007Z" level=info msg="shim disconnected" id=dc8774a63ec812e8336d34fa63735ebed290ea430d0a5d8f0d43c2e0d3dc6c34 namespace=k8s.io Oct 30 23:56:57.281052 containerd[1497]: time="2025-10-30T23:56:57.280914576Z" level=warning msg="cleaning up after shim disconnected" id=dc8774a63ec812e8336d34fa63735ebed290ea430d0a5d8f0d43c2e0d3dc6c34 namespace=k8s.io Oct 30 23:56:57.281052 containerd[1497]: time="2025-10-30T23:56:57.280926737Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 30 23:56:58.005158 containerd[1497]: time="2025-10-30T23:56:58.004906175Z" level=info msg="CreateContainer within sandbox \"3ca0e67990709c9b7a89574ad8b7dd0e1d8abbcd893a1d35e6321bd15884e2f9\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Oct 30 23:56:58.031949 containerd[1497]: time="2025-10-30T23:56:58.031767094Z" level=info msg="CreateContainer within sandbox \"3ca0e67990709c9b7a89574ad8b7dd0e1d8abbcd893a1d35e6321bd15884e2f9\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"a9c690bc7f1f0f1613753c923f0d089beb11971c95b40d6a22f92b7037d66ecb\"" Oct 30 23:56:58.033844 containerd[1497]: time="2025-10-30T23:56:58.033797057Z" level=info msg="StartContainer for \"a9c690bc7f1f0f1613753c923f0d089beb11971c95b40d6a22f92b7037d66ecb\"" Oct 30 23:56:58.088622 systemd[1]: Started cri-containerd-a9c690bc7f1f0f1613753c923f0d089beb11971c95b40d6a22f92b7037d66ecb.scope - libcontainer container a9c690bc7f1f0f1613753c923f0d089beb11971c95b40d6a22f92b7037d66ecb. Oct 30 23:56:58.128759 systemd[1]: cri-containerd-a9c690bc7f1f0f1613753c923f0d089beb11971c95b40d6a22f92b7037d66ecb.scope: Deactivated successfully. Oct 30 23:56:58.138608 containerd[1497]: time="2025-10-30T23:56:58.137932667Z" level=info msg="StartContainer for \"a9c690bc7f1f0f1613753c923f0d089beb11971c95b40d6a22f92b7037d66ecb\" returns successfully" Oct 30 23:56:58.178084 containerd[1497]: time="2025-10-30T23:56:58.177878038Z" level=info msg="shim disconnected" id=a9c690bc7f1f0f1613753c923f0d089beb11971c95b40d6a22f92b7037d66ecb namespace=k8s.io Oct 30 23:56:58.178084 containerd[1497]: time="2025-10-30T23:56:58.177939163Z" level=warning msg="cleaning up after shim disconnected" id=a9c690bc7f1f0f1613753c923f0d089beb11971c95b40d6a22f92b7037d66ecb namespace=k8s.io Oct 30 23:56:58.178084 containerd[1497]: time="2025-10-30T23:56:58.177948163Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 30 23:56:58.763538 systemd[1]: run-containerd-runc-k8s.io-a9c690bc7f1f0f1613753c923f0d089beb11971c95b40d6a22f92b7037d66ecb-runc.Htjb59.mount: Deactivated successfully. Oct 30 23:56:58.763662 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a9c690bc7f1f0f1613753c923f0d089beb11971c95b40d6a22f92b7037d66ecb-rootfs.mount: Deactivated successfully. 
Oct 30 23:56:59.019964 containerd[1497]: time="2025-10-30T23:56:59.019550628Z" level=info msg="CreateContainer within sandbox \"3ca0e67990709c9b7a89574ad8b7dd0e1d8abbcd893a1d35e6321bd15884e2f9\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Oct 30 23:56:59.089114 containerd[1497]: time="2025-10-30T23:56:59.088902388Z" level=info msg="CreateContainer within sandbox \"3ca0e67990709c9b7a89574ad8b7dd0e1d8abbcd893a1d35e6321bd15884e2f9\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"3ba2af3d8611de4a5604f48de32281f7baf74b9fe943b61d1d6c073c8bc87ed2\"" Oct 30 23:56:59.091213 containerd[1497]: time="2025-10-30T23:56:59.089694130Z" level=info msg="StartContainer for \"3ba2af3d8611de4a5604f48de32281f7baf74b9fe943b61d1d6c073c8bc87ed2\"" Oct 30 23:56:59.130208 systemd[1]: Started cri-containerd-3ba2af3d8611de4a5604f48de32281f7baf74b9fe943b61d1d6c073c8bc87ed2.scope - libcontainer container 3ba2af3d8611de4a5604f48de32281f7baf74b9fe943b61d1d6c073c8bc87ed2. Oct 30 23:56:59.191383 containerd[1497]: time="2025-10-30T23:56:59.191321944Z" level=info msg="StartContainer for \"3ba2af3d8611de4a5604f48de32281f7baf74b9fe943b61d1d6c073c8bc87ed2\" returns successfully" Oct 30 23:56:59.357295 kubelet[2685]: I1030 23:56:59.357125 2685 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Oct 30 23:56:59.431986 systemd[1]: Created slice kubepods-burstable-podc5711fac_89b5_4005_9e60_a61808a6c604.slice - libcontainer container kubepods-burstable-podc5711fac_89b5_4005_9e60_a61808a6c604.slice. Oct 30 23:56:59.443566 systemd[1]: Created slice kubepods-burstable-podc1cba14e_77d5_4360_9b46_f1e0cb9579a9.slice - libcontainer container kubepods-burstable-podc1cba14e_77d5_4360_9b46_f1e0cb9579a9.slice. Oct 30 23:56:59.474355 kubelet[2685]: I1030 23:56:59.474308 2685 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-45r9r\" (UniqueName: \"kubernetes.io/projected/c1cba14e-77d5-4360-9b46-f1e0cb9579a9-kube-api-access-45r9r\") pod \"coredns-668d6bf9bc-8rpff\" (UID: \"c1cba14e-77d5-4360-9b46-f1e0cb9579a9\") " pod="kube-system/coredns-668d6bf9bc-8rpff" Oct 30 23:56:59.474721 kubelet[2685]: I1030 23:56:59.474600 2685 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c1cba14e-77d5-4360-9b46-f1e0cb9579a9-config-volume\") pod \"coredns-668d6bf9bc-8rpff\" (UID: \"c1cba14e-77d5-4360-9b46-f1e0cb9579a9\") " pod="kube-system/coredns-668d6bf9bc-8rpff" Oct 30 23:56:59.474721 kubelet[2685]: I1030 23:56:59.474641 2685 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c5711fac-89b5-4005-9e60-a61808a6c604-config-volume\") pod \"coredns-668d6bf9bc-gdzrp\" (UID: \"c5711fac-89b5-4005-9e60-a61808a6c604\") " pod="kube-system/coredns-668d6bf9bc-gdzrp" Oct 30 23:56:59.474721 kubelet[2685]: I1030 23:56:59.474660 2685 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6xxfv\" (UniqueName: \"kubernetes.io/projected/c5711fac-89b5-4005-9e60-a61808a6c604-kube-api-access-6xxfv\") pod \"coredns-668d6bf9bc-gdzrp\" (UID: \"c5711fac-89b5-4005-9e60-a61808a6c604\") " pod="kube-system/coredns-668d6bf9bc-gdzrp" Oct 30 23:56:59.739739 containerd[1497]: time="2025-10-30T23:56:59.739633283Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-668d6bf9bc-gdzrp,Uid:c5711fac-89b5-4005-9e60-a61808a6c604,Namespace:kube-system,Attempt:0,}" Oct 30 23:56:59.753850 containerd[1497]: time="2025-10-30T23:56:59.752731455Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-8rpff,Uid:c1cba14e-77d5-4360-9b46-f1e0cb9579a9,Namespace:kube-system,Attempt:0,}" Oct 30 23:57:01.561932 systemd-networkd[1396]: cilium_host: Link UP Oct 30 23:57:01.563309 systemd-networkd[1396]: cilium_net: Link UP Oct 30 23:57:01.563955 systemd-networkd[1396]: cilium_net: Gained carrier Oct 30 23:57:01.564130 systemd-networkd[1396]: cilium_host: Gained carrier Oct 30 23:57:01.721556 systemd-networkd[1396]: cilium_vxlan: Link UP Oct 30 23:57:01.721569 systemd-networkd[1396]: cilium_vxlan: Gained carrier Oct 30 23:57:01.825665 systemd-networkd[1396]: cilium_host: Gained IPv6LL Oct 30 23:57:01.977104 systemd-networkd[1396]: cilium_net: Gained IPv6LL Oct 30 23:57:02.079743 kernel: NET: Registered PF_ALG protocol family Oct 30 23:57:02.928810 systemd-networkd[1396]: cilium_vxlan: Gained IPv6LL Oct 30 23:57:03.020289 systemd-networkd[1396]: lxc_health: Link UP Oct 30 23:57:03.036138 systemd-networkd[1396]: lxc_health: Gained carrier Oct 30 23:57:03.381593 kernel: eth0: renamed from tmp2d9a2 Oct 30 23:57:03.388774 kernel: eth0: renamed from tmp06bbc Oct 30 23:57:03.388422 systemd-networkd[1396]: lxcce40998d9dd1: Link UP Oct 30 23:57:03.404135 systemd-networkd[1396]: lxc8982ccf63428: Link UP Oct 30 23:57:03.406814 systemd-networkd[1396]: lxc8982ccf63428: Gained carrier Oct 30 23:57:03.408152 systemd-networkd[1396]: lxcce40998d9dd1: Gained carrier Oct 30 23:57:04.466580 systemd-networkd[1396]: lxc8982ccf63428: Gained IPv6LL Oct 30 23:57:04.576691 kubelet[2685]: I1030 23:57:04.576598 2685 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-t5pws" podStartSLOduration=10.544871734 podStartE2EDuration="16.576568587s" podCreationTimestamp="2025-10-30 23:56:48 +0000 UTC" firstStartedPulling="2025-10-30 23:56:48.707723458 +0000 UTC m=+5.007516394" lastFinishedPulling="2025-10-30 23:56:54.739420351 +0000 UTC m=+11.039213247" observedRunningTime="2025-10-30 23:57:00.051190923 +0000 UTC m=+16.350983899" watchObservedRunningTime="2025-10-30 23:57:04.576568587 +0000 UTC m=+20.876361563" Oct 30 23:57:04.784905 systemd-networkd[1396]: lxcce40998d9dd1: Gained IPv6LL Oct 30 23:57:04.976780 systemd-networkd[1396]: lxc_health: Gained IPv6LL Oct 30 23:57:07.222123 kubelet[2685]: I1030 23:57:07.220973 2685 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 30 23:57:08.544113 containerd[1497]: time="2025-10-30T23:57:08.541853047Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 30 23:57:08.544113 containerd[1497]: time="2025-10-30T23:57:08.543761595Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 30 23:57:08.544113 containerd[1497]: time="2025-10-30T23:57:08.543780516Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 30 23:57:08.544113 containerd[1497]: time="2025-10-30T23:57:08.543914204Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 30 23:57:08.590785 systemd[1]: Started cri-containerd-06bbcf0009698eb0e24cdb85c357d7a118cd132c577517381a879f4be67e105a.scope - libcontainer container 06bbcf0009698eb0e24cdb85c357d7a118cd132c577517381a879f4be67e105a. Oct 30 23:57:08.617161 containerd[1497]: time="2025-10-30T23:57:08.617023363Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 30 23:57:08.617161 containerd[1497]: time="2025-10-30T23:57:08.617100808Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 30 23:57:08.617161 containerd[1497]: time="2025-10-30T23:57:08.617117369Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 30 23:57:08.617462 containerd[1497]: time="2025-10-30T23:57:08.617261057Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 30 23:57:08.662871 systemd[1]: Started cri-containerd-2d9a29b142cb0fff7c43afb092b04467de806d66a15882dff320f1405a28836f.scope - libcontainer container 2d9a29b142cb0fff7c43afb092b04467de806d66a15882dff320f1405a28836f. Oct 30 23:57:08.693789 containerd[1497]: time="2025-10-30T23:57:08.693723847Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-gdzrp,Uid:c5711fac-89b5-4005-9e60-a61808a6c604,Namespace:kube-system,Attempt:0,} returns sandbox id \"06bbcf0009698eb0e24cdb85c357d7a118cd132c577517381a879f4be67e105a\"" Oct 30 23:57:08.702318 containerd[1497]: time="2025-10-30T23:57:08.702253492Z" level=info msg="CreateContainer within sandbox \"06bbcf0009698eb0e24cdb85c357d7a118cd132c577517381a879f4be67e105a\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Oct 30 23:57:08.734097 containerd[1497]: time="2025-10-30T23:57:08.733377663Z" level=info msg="CreateContainer within sandbox \"06bbcf0009698eb0e24cdb85c357d7a118cd132c577517381a879f4be67e105a\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"4381b6e524950d9f7ab44f1601019d8097ebf5fb523ab70318abc8ed41f10388\"" Oct 30 23:57:08.737341 containerd[1497]: time="2025-10-30T23:57:08.735115282Z" level=info msg="StartContainer for \"4381b6e524950d9f7ab44f1601019d8097ebf5fb523ab70318abc8ed41f10388\"" Oct 30 23:57:08.751285 containerd[1497]: time="2025-10-30T23:57:08.751122632Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-8rpff,Uid:c1cba14e-77d5-4360-9b46-f1e0cb9579a9,Namespace:kube-system,Attempt:0,} returns sandbox id \"2d9a29b142cb0fff7c43afb092b04467de806d66a15882dff320f1405a28836f\"" Oct 30 23:57:08.757834 containerd[1497]: time="2025-10-30T23:57:08.757756290Z" level=info msg="CreateContainer within sandbox \"2d9a29b142cb0fff7c43afb092b04467de806d66a15882dff320f1405a28836f\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Oct 30 23:57:08.797692 containerd[1497]: time="2025-10-30T23:57:08.794351772Z" level=info msg="CreateContainer within sandbox \"2d9a29b142cb0fff7c43afb092b04467de806d66a15882dff320f1405a28836f\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"6730f62b5bdeb5d69bc041031664da625cc720df4166a19844f60e512f5f1fb6\"" Oct 30 23:57:08.795939 systemd[1]: Started cri-containerd-4381b6e524950d9f7ab44f1601019d8097ebf5fb523ab70318abc8ed41f10388.scope - libcontainer container 
4381b6e524950d9f7ab44f1601019d8097ebf5fb523ab70318abc8ed41f10388. Oct 30 23:57:08.800158 containerd[1497]: time="2025-10-30T23:57:08.799498945Z" level=info msg="StartContainer for \"6730f62b5bdeb5d69bc041031664da625cc720df4166a19844f60e512f5f1fb6\"" Oct 30 23:57:08.853787 systemd[1]: Started cri-containerd-6730f62b5bdeb5d69bc041031664da625cc720df4166a19844f60e512f5f1fb6.scope - libcontainer container 6730f62b5bdeb5d69bc041031664da625cc720df4166a19844f60e512f5f1fb6. Oct 30 23:57:08.875830 containerd[1497]: time="2025-10-30T23:57:08.875760603Z" level=info msg="StartContainer for \"4381b6e524950d9f7ab44f1601019d8097ebf5fb523ab70318abc8ed41f10388\" returns successfully" Oct 30 23:57:08.922430 containerd[1497]: time="2025-10-30T23:57:08.922150122Z" level=info msg="StartContainer for \"6730f62b5bdeb5d69bc041031664da625cc720df4166a19844f60e512f5f1fb6\" returns successfully" Oct 30 23:57:09.082537 kubelet[2685]: I1030 23:57:09.081334 2685 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-8rpff" podStartSLOduration=21.081311326 podStartE2EDuration="21.081311326s" podCreationTimestamp="2025-10-30 23:56:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-30 23:57:09.076866161 +0000 UTC m=+25.376659137" watchObservedRunningTime="2025-10-30 23:57:09.081311326 +0000 UTC m=+25.381104262" Oct 30 23:58:50.757080 update_engine[1481]: I20251030 23:58:50.754863 1481 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Oct 30 23:58:50.757080 update_engine[1481]: I20251030 23:58:50.754914 1481 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Oct 30 23:58:50.757080 update_engine[1481]: I20251030 23:58:50.755174 1481 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Oct 30 23:58:50.760437 update_engine[1481]: I20251030 23:58:50.759297 1481 omaha_request_params.cc:62] Current group set to stable Oct 30 23:58:50.760437 update_engine[1481]: I20251030 23:58:50.759439 1481 update_attempter.cc:499] Already updated boot flags. Skipping. Oct 30 23:58:50.760437 update_engine[1481]: I20251030 23:58:50.759465 1481 update_attempter.cc:643] Scheduling an action processor start. 
Oct 30 23:58:50.760437 update_engine[1481]: I20251030 23:58:50.759486 1481 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Oct 30 23:58:50.760437 update_engine[1481]: I20251030 23:58:50.759527 1481 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Oct 30 23:58:50.760437 update_engine[1481]: I20251030 23:58:50.759600 1481 omaha_request_action.cc:271] Posting an Omaha request to disabled Oct 30 23:58:50.760437 update_engine[1481]: I20251030 23:58:50.759608 1481 omaha_request_action.cc:272] Request: Oct 30 23:58:50.760437 update_engine[1481]: Oct 30 23:58:50.760437 update_engine[1481]: Oct 30 23:58:50.760437 update_engine[1481]: Oct 30 23:58:50.760437 update_engine[1481]: Oct 30 23:58:50.760437 update_engine[1481]: Oct 30 23:58:50.760437 update_engine[1481]: Oct 30 23:58:50.760437 update_engine[1481]: Oct 30 23:58:50.760437 update_engine[1481]: Oct 30 23:58:50.760437 update_engine[1481]: I20251030 23:58:50.759613 1481 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Oct 30 23:58:50.761239 locksmithd[1524]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Oct 30 23:58:50.765616 update_engine[1481]: I20251030 23:58:50.765158 1481 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Oct 30 23:58:50.766787 update_engine[1481]: I20251030 23:58:50.766093 1481 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Oct 30 23:58:50.771012 update_engine[1481]: E20251030 23:58:50.769871 1481 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Oct 30 23:58:50.771012 update_engine[1481]: I20251030 23:58:50.770916 1481 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Oct 30 23:58:52.510255 systemd[1]: Started sshd@10-91.99.146.238:22-103.165.139.150:58884.service - OpenSSH per-connection server daemon (103.165.139.150:58884). Oct 30 23:58:53.610916 sshd[4085]: Invalid user lmh from 103.165.139.150 port 58884 Oct 30 23:58:53.816133 sshd[4085]: Received disconnect from 103.165.139.150 port 58884:11: Bye Bye [preauth] Oct 30 23:58:53.816133 sshd[4085]: Disconnected from invalid user lmh 103.165.139.150 port 58884 [preauth] Oct 30 23:58:53.819385 systemd[1]: sshd@10-91.99.146.238:22-103.165.139.150:58884.service: Deactivated successfully. Oct 30 23:59:00.666356 update_engine[1481]: I20251030 23:59:00.665654 1481 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Oct 30 23:59:00.666356 update_engine[1481]: I20251030 23:59:00.665982 1481 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Oct 30 23:59:00.666356 update_engine[1481]: I20251030 23:59:00.666266 1481 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Oct 30 23:59:00.667157 update_engine[1481]: E20251030 23:59:00.667121 1481 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Oct 30 23:59:00.667278 update_engine[1481]: I20251030 23:59:00.667259 1481 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Oct 30 23:59:07.197946 systemd[1]: Started sshd@11-91.99.146.238:22-139.178.89.65:51320.service - OpenSSH per-connection server daemon (139.178.89.65:51320). 
Oct 30 23:59:08.159413 sshd[4090]: Accepted publickey for core from 139.178.89.65 port 51320 ssh2: RSA SHA256:Oiivr4FYZNoFRNArKDf1mcLLlGhqoYWE2cfJZWdI7tQ Oct 30 23:59:08.162241 sshd-session[4090]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 30 23:59:08.173387 systemd-logind[1480]: New session 8 of user core. Oct 30 23:59:08.182752 systemd[1]: Started session-8.scope - Session 8 of User core. Oct 30 23:59:08.927192 sshd[4092]: Connection closed by 139.178.89.65 port 51320 Oct 30 23:59:08.928120 sshd-session[4090]: pam_unix(sshd:session): session closed for user core Oct 30 23:59:08.934288 systemd[1]: sshd@11-91.99.146.238:22-139.178.89.65:51320.service: Deactivated successfully. Oct 30 23:59:08.941068 systemd[1]: session-8.scope: Deactivated successfully. Oct 30 23:59:08.943054 systemd-logind[1480]: Session 8 logged out. Waiting for processes to exit. Oct 30 23:59:08.944270 systemd-logind[1480]: Removed session 8. Oct 30 23:59:10.659537 update_engine[1481]: I20251030 23:59:10.658802 1481 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Oct 30 23:59:10.659537 update_engine[1481]: I20251030 23:59:10.659265 1481 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Oct 30 23:59:10.661187 update_engine[1481]: I20251030 23:59:10.660795 1481 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Oct 30 23:59:10.661187 update_engine[1481]: E20251030 23:59:10.660979 1481 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Oct 30 23:59:10.661553 update_engine[1481]: I20251030 23:59:10.661054 1481 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Oct 30 23:59:14.101734 systemd[1]: Started sshd@12-91.99.146.238:22-139.178.89.65:51324.service - OpenSSH per-connection server daemon (139.178.89.65:51324). Oct 30 23:59:15.035607 sshd[4105]: Accepted publickey for core from 139.178.89.65 port 51324 ssh2: RSA SHA256:Oiivr4FYZNoFRNArKDf1mcLLlGhqoYWE2cfJZWdI7tQ Oct 30 23:59:15.038140 sshd-session[4105]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 30 23:59:15.045419 systemd-logind[1480]: New session 9 of user core. Oct 30 23:59:15.050688 systemd[1]: Started session-9.scope - Session 9 of User core. Oct 30 23:59:15.764008 sshd[4107]: Connection closed by 139.178.89.65 port 51324 Oct 30 23:59:15.764646 sshd-session[4105]: pam_unix(sshd:session): session closed for user core Oct 30 23:59:15.768941 systemd[1]: sshd@12-91.99.146.238:22-139.178.89.65:51324.service: Deactivated successfully. Oct 30 23:59:15.772008 systemd[1]: session-9.scope: Deactivated successfully. Oct 30 23:59:15.773481 systemd-logind[1480]: Session 9 logged out. Waiting for processes to exit. Oct 30 23:59:15.775742 systemd-logind[1480]: Removed session 9. Oct 30 23:59:20.662144 update_engine[1481]: I20251030 23:59:20.661612 1481 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Oct 30 23:59:20.662144 update_engine[1481]: I20251030 23:59:20.661854 1481 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Oct 30 23:59:20.662791 update_engine[1481]: I20251030 23:59:20.662734 1481 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Oct 30 23:59:20.663266 update_engine[1481]: E20251030 23:59:20.663031 1481 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Oct 30 23:59:20.663266 update_engine[1481]: I20251030 23:59:20.663097 1481 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Oct 30 23:59:20.663266 update_engine[1481]: I20251030 23:59:20.663105 1481 omaha_request_action.cc:617] Omaha request response: Oct 30 23:59:20.663266 update_engine[1481]: E20251030 23:59:20.663180 1481 omaha_request_action.cc:636] Omaha request network transfer failed. Oct 30 23:59:20.663266 update_engine[1481]: I20251030 23:59:20.663197 1481 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Oct 30 23:59:20.663266 update_engine[1481]: I20251030 23:59:20.663203 1481 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Oct 30 23:59:20.663266 update_engine[1481]: I20251030 23:59:20.663208 1481 update_attempter.cc:306] Processing Done. Oct 30 23:59:20.663266 update_engine[1481]: E20251030 23:59:20.663221 1481 update_attempter.cc:619] Update failed. Oct 30 23:59:20.663266 update_engine[1481]: I20251030 23:59:20.663226 1481 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Oct 30 23:59:20.663266 update_engine[1481]: I20251030 23:59:20.663231 1481 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Oct 30 23:59:20.664571 update_engine[1481]: I20251030 23:59:20.663236 1481 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. Oct 30 23:59:20.664571 update_engine[1481]: I20251030 23:59:20.664079 1481 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Oct 30 23:59:20.664571 update_engine[1481]: I20251030 23:59:20.664111 1481 omaha_request_action.cc:271] Posting an Omaha request to disabled Oct 30 23:59:20.664571 update_engine[1481]: I20251030 23:59:20.664117 1481 omaha_request_action.cc:272] Request: Oct 30 23:59:20.664571 update_engine[1481]: Oct 30 23:59:20.664571 update_engine[1481]: Oct 30 23:59:20.664571 update_engine[1481]: Oct 30 23:59:20.664571 update_engine[1481]: Oct 30 23:59:20.664571 update_engine[1481]: Oct 30 23:59:20.664571 update_engine[1481]: Oct 30 23:59:20.664571 update_engine[1481]: I20251030 23:59:20.664124 1481 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Oct 30 23:59:20.664571 update_engine[1481]: I20251030 23:59:20.664265 1481 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Oct 30 23:59:20.664571 update_engine[1481]: I20251030 23:59:20.664472 1481 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
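The update_engine entries above show an Omaha update check on an image whose update server is set to the placeholder "disabled": libcurl cannot resolve that host, retries roughly every ten seconds ("No HTTP response, retry 1/2/3"), and, as the entries that follow show, finally reports the failed response and schedules the next check for 45m49s later. A minimal sketch of that bounded-retry shape, with the attempt count and delay taken only from the timestamps above; it is not update_engine's actual code:

package main

// Illustrative only: a bounded retry loop in the spirit of the
// libcurl_http_fetcher messages above; "http://disabled/" mirrors the
// placeholder host in the log and is expected to fail name resolution.
import (
    "fmt"
    "net/http"
    "time"
)

func fetchWithRetry(url string, attempts int, delay time.Duration) error {
    var lastErr error
    for i := 1; i <= attempts; i++ {
        resp, err := http.Get(url)
        if err == nil {
            resp.Body.Close()
            fmt.Printf("attempt %d: HTTP %d\n", i, resp.StatusCode)
            return nil
        }
        lastErr = err
        fmt.Printf("attempt %d failed: %v\n", i, err) // "No HTTP response, retry N"
        if i < attempts {
            time.Sleep(delay)
        }
    }
    return fmt.Errorf("all %d attempts failed: %w", attempts, lastErr)
}

func main() {
    // Four attempts roughly ten seconds apart, matching the timestamps above.
    if err := fetchWithRetry("http://disabled/", 4, 10*time.Second); err != nil {
        fmt.Println("update check failed:", err) // update_engine reports an error event here
    }
}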
Oct 30 23:59:20.665224 update_engine[1481]: E20251030 23:59:20.665073 1481 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Oct 30 23:59:20.665224 update_engine[1481]: I20251030 23:59:20.665132 1481 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Oct 30 23:59:20.665224 update_engine[1481]: I20251030 23:59:20.665140 1481 omaha_request_action.cc:617] Omaha request response: Oct 30 23:59:20.665224 update_engine[1481]: I20251030 23:59:20.665146 1481 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Oct 30 23:59:20.665224 update_engine[1481]: I20251030 23:59:20.665151 1481 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Oct 30 23:59:20.665224 update_engine[1481]: I20251030 23:59:20.665155 1481 update_attempter.cc:306] Processing Done. Oct 30 23:59:20.665224 update_engine[1481]: I20251030 23:59:20.665162 1481 update_attempter.cc:310] Error event sent. Oct 30 23:59:20.665224 update_engine[1481]: I20251030 23:59:20.665170 1481 update_check_scheduler.cc:74] Next update check in 45m49s Oct 30 23:59:20.665411 locksmithd[1524]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Oct 30 23:59:20.665946 locksmithd[1524]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Oct 30 23:59:20.953105 systemd[1]: Started sshd@13-91.99.146.238:22-139.178.89.65:50912.service - OpenSSH per-connection server daemon (139.178.89.65:50912). Oct 30 23:59:21.919710 sshd[4122]: Accepted publickey for core from 139.178.89.65 port 50912 ssh2: RSA SHA256:Oiivr4FYZNoFRNArKDf1mcLLlGhqoYWE2cfJZWdI7tQ Oct 30 23:59:21.922826 sshd-session[4122]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 30 23:59:21.931442 systemd-logind[1480]: New session 10 of user core. Oct 30 23:59:21.937911 systemd[1]: Started session-10.scope - Session 10 of User core. Oct 30 23:59:22.667865 sshd[4124]: Connection closed by 139.178.89.65 port 50912 Oct 30 23:59:22.668439 sshd-session[4122]: pam_unix(sshd:session): session closed for user core Oct 30 23:59:22.674271 systemd-logind[1480]: Session 10 logged out. Waiting for processes to exit. Oct 30 23:59:22.675418 systemd[1]: sshd@13-91.99.146.238:22-139.178.89.65:50912.service: Deactivated successfully. Oct 30 23:59:22.679206 systemd[1]: session-10.scope: Deactivated successfully. Oct 30 23:59:22.680414 systemd-logind[1480]: Removed session 10. Oct 30 23:59:23.563858 systemd[1]: Started sshd@14-91.99.146.238:22-152.89.168.4:55132.service - OpenSSH per-connection server daemon (152.89.168.4:55132). Oct 30 23:59:24.034035 systemd[1]: Started sshd@15-91.99.146.238:22-83.118.24.18:36246.service - OpenSSH per-connection server daemon (83.118.24.18:36246). Oct 30 23:59:24.426332 sshd[4136]: Invalid user hadiysr from 152.89.168.4 port 55132 Oct 30 23:59:24.613975 sshd[4136]: Received disconnect from 152.89.168.4 port 55132:11: Bye Bye [preauth] Oct 30 23:59:24.613975 sshd[4136]: Disconnected from invalid user hadiysr 152.89.168.4 port 55132 [preauth] Oct 30 23:59:24.617188 systemd[1]: sshd@14-91.99.146.238:22-152.89.168.4:55132.service: Deactivated successfully. 
Oct 30 23:59:27.314587 sshd[4139]: Invalid user jstaff from 83.118.24.18 port 36246 Oct 30 23:59:27.504801 sshd[4139]: Received disconnect from 83.118.24.18 port 36246:11: Bye Bye [preauth] Oct 30 23:59:27.506405 sshd[4139]: Disconnected from invalid user jstaff 83.118.24.18 port 36246 [preauth] Oct 30 23:59:27.508285 systemd[1]: sshd@15-91.99.146.238:22-83.118.24.18:36246.service: Deactivated successfully. Oct 30 23:59:27.837001 systemd[1]: Started sshd@16-91.99.146.238:22-139.178.89.65:51576.service - OpenSSH per-connection server daemon (139.178.89.65:51576). Oct 30 23:59:28.788250 sshd[4146]: Accepted publickey for core from 139.178.89.65 port 51576 ssh2: RSA SHA256:Oiivr4FYZNoFRNArKDf1mcLLlGhqoYWE2cfJZWdI7tQ Oct 30 23:59:28.790540 sshd-session[4146]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 30 23:59:28.799196 systemd-logind[1480]: New session 11 of user core. Oct 30 23:59:28.806254 systemd[1]: Started session-11.scope - Session 11 of User core. Oct 30 23:59:29.520572 sshd[4148]: Connection closed by 139.178.89.65 port 51576 Oct 30 23:59:29.521369 sshd-session[4146]: pam_unix(sshd:session): session closed for user core Oct 30 23:59:29.525666 systemd[1]: sshd@16-91.99.146.238:22-139.178.89.65:51576.service: Deactivated successfully. Oct 30 23:59:29.528166 systemd[1]: session-11.scope: Deactivated successfully. Oct 30 23:59:29.531269 systemd-logind[1480]: Session 11 logged out. Waiting for processes to exit. Oct 30 23:59:29.532924 systemd-logind[1480]: Removed session 11. Oct 30 23:59:29.690822 systemd[1]: Started sshd@17-91.99.146.238:22-139.178.89.65:51586.service - OpenSSH per-connection server daemon (139.178.89.65:51586). Oct 30 23:59:30.646142 sshd[4161]: Accepted publickey for core from 139.178.89.65 port 51586 ssh2: RSA SHA256:Oiivr4FYZNoFRNArKDf1mcLLlGhqoYWE2cfJZWdI7tQ Oct 30 23:59:30.648891 sshd-session[4161]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 30 23:59:30.655179 systemd-logind[1480]: New session 12 of user core. Oct 30 23:59:30.658670 systemd[1]: Started session-12.scope - Session 12 of User core. Oct 30 23:59:31.420914 sshd[4163]: Connection closed by 139.178.89.65 port 51586 Oct 30 23:59:31.422187 sshd-session[4161]: pam_unix(sshd:session): session closed for user core Oct 30 23:59:31.428235 systemd[1]: sshd@17-91.99.146.238:22-139.178.89.65:51586.service: Deactivated successfully. Oct 30 23:59:31.430726 systemd[1]: session-12.scope: Deactivated successfully. Oct 30 23:59:31.431977 systemd-logind[1480]: Session 12 logged out. Waiting for processes to exit. Oct 30 23:59:31.433695 systemd-logind[1480]: Removed session 12. Oct 30 23:59:31.600011 systemd[1]: Started sshd@18-91.99.146.238:22-139.178.89.65:51594.service - OpenSSH per-connection server daemon (139.178.89.65:51594). Oct 30 23:59:32.547085 sshd[4173]: Accepted publickey for core from 139.178.89.65 port 51594 ssh2: RSA SHA256:Oiivr4FYZNoFRNArKDf1mcLLlGhqoYWE2cfJZWdI7tQ Oct 30 23:59:32.549032 sshd-session[4173]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 30 23:59:32.555967 systemd-logind[1480]: New session 13 of user core. Oct 30 23:59:32.566268 systemd[1]: Started session-13.scope - Session 13 of User core. Oct 30 23:59:33.293978 sshd[4175]: Connection closed by 139.178.89.65 port 51594 Oct 30 23:59:33.294749 sshd-session[4173]: pam_unix(sshd:session): session closed for user core Oct 30 23:59:33.299611 systemd[1]: sshd@18-91.99.146.238:22-139.178.89.65:51594.service: Deactivated successfully. 
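Mixed in with the legitimate core sessions above are pre-auth probes from scanners (invalid users lmh, hadiysr and jstaff from 103.165.139.150, 152.89.168.4 and 83.118.24.18). A small illustrative tally of such "Invalid user" attempts by source address, assuming journal text on stdin; the regex only matches the sshd message shape seen here:

package main

// Illustrative only: tally sshd "Invalid user" pre-auth attempts by source
// address from journal text on stdin.
import (
    "bufio"
    "fmt"
    "os"
    "regexp"
)

func main() {
    re := regexp.MustCompile(`Invalid user (\S+) from (\d+\.\d+\.\d+\.\d+) port \d+`)
    counts := map[string]int{}
    sc := bufio.NewScanner(os.Stdin)
    sc.Buffer(make([]byte, 0, 1<<20), 1<<20)
    for sc.Scan() {
        if m := re.FindStringSubmatch(sc.Text()); m != nil {
            counts[m[2]]++
            fmt.Printf("invalid user %q from %s\n", m[1], m[2])
        }
    }
    for ip, n := range counts {
        fmt.Printf("%s: %d attempt(s)\n", ip, n)
    }
}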
Oct 30 23:59:33.302902 systemd[1]: session-13.scope: Deactivated successfully. Oct 30 23:59:33.305514 systemd-logind[1480]: Session 13 logged out. Waiting for processes to exit. Oct 30 23:59:33.306790 systemd-logind[1480]: Removed session 13. Oct 30 23:59:38.468859 systemd[1]: Started sshd@19-91.99.146.238:22-139.178.89.65:48040.service - OpenSSH per-connection server daemon (139.178.89.65:48040). Oct 30 23:59:39.423515 sshd[4188]: Accepted publickey for core from 139.178.89.65 port 48040 ssh2: RSA SHA256:Oiivr4FYZNoFRNArKDf1mcLLlGhqoYWE2cfJZWdI7tQ Oct 30 23:59:39.425852 sshd-session[4188]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 30 23:59:39.435705 systemd-logind[1480]: New session 14 of user core. Oct 30 23:59:39.443856 systemd[1]: Started session-14.scope - Session 14 of User core. Oct 30 23:59:40.176117 sshd[4190]: Connection closed by 139.178.89.65 port 48040 Oct 30 23:59:40.177513 sshd-session[4188]: pam_unix(sshd:session): session closed for user core Oct 30 23:59:40.186246 systemd[1]: sshd@19-91.99.146.238:22-139.178.89.65:48040.service: Deactivated successfully. Oct 30 23:59:40.191751 systemd[1]: session-14.scope: Deactivated successfully. Oct 30 23:59:40.194878 systemd-logind[1480]: Session 14 logged out. Waiting for processes to exit. Oct 30 23:59:40.197983 systemd-logind[1480]: Removed session 14. Oct 30 23:59:40.357380 systemd[1]: Started sshd@20-91.99.146.238:22-139.178.89.65:48044.service - OpenSSH per-connection server daemon (139.178.89.65:48044). Oct 30 23:59:41.337046 sshd[4202]: Accepted publickey for core from 139.178.89.65 port 48044 ssh2: RSA SHA256:Oiivr4FYZNoFRNArKDf1mcLLlGhqoYWE2cfJZWdI7tQ Oct 30 23:59:41.339657 sshd-session[4202]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 30 23:59:41.345791 systemd-logind[1480]: New session 15 of user core. Oct 30 23:59:41.358753 systemd[1]: Started session-15.scope - Session 15 of User core. Oct 30 23:59:42.161118 sshd[4204]: Connection closed by 139.178.89.65 port 48044 Oct 30 23:59:42.162153 sshd-session[4202]: pam_unix(sshd:session): session closed for user core Oct 30 23:59:42.167300 systemd-logind[1480]: Session 15 logged out. Waiting for processes to exit. Oct 30 23:59:42.167743 systemd[1]: sshd@20-91.99.146.238:22-139.178.89.65:48044.service: Deactivated successfully. Oct 30 23:59:42.171081 systemd[1]: session-15.scope: Deactivated successfully. Oct 30 23:59:42.175615 systemd-logind[1480]: Removed session 15. Oct 30 23:59:42.327909 systemd[1]: Started sshd@21-91.99.146.238:22-139.178.89.65:48052.service - OpenSSH per-connection server daemon (139.178.89.65:48052). Oct 30 23:59:43.273024 sshd[4214]: Accepted publickey for core from 139.178.89.65 port 48052 ssh2: RSA SHA256:Oiivr4FYZNoFRNArKDf1mcLLlGhqoYWE2cfJZWdI7tQ Oct 30 23:59:43.276669 sshd-session[4214]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 30 23:59:43.286338 systemd-logind[1480]: New session 16 of user core. Oct 30 23:59:43.294873 systemd[1]: Started session-16.scope - Session 16 of User core. Oct 30 23:59:44.643681 sshd[4216]: Connection closed by 139.178.89.65 port 48052 Oct 30 23:59:44.644756 sshd-session[4214]: pam_unix(sshd:session): session closed for user core Oct 30 23:59:44.649778 systemd[1]: sshd@21-91.99.146.238:22-139.178.89.65:48052.service: Deactivated successfully. Oct 30 23:59:44.653010 systemd[1]: session-16.scope: Deactivated successfully. Oct 30 23:59:44.654653 systemd-logind[1480]: Session 16 logged out. 
Waiting for processes to exit. Oct 30 23:59:44.656364 systemd-logind[1480]: Removed session 16. Oct 30 23:59:44.814869 systemd[1]: Started sshd@22-91.99.146.238:22-139.178.89.65:48066.service - OpenSSH per-connection server daemon (139.178.89.65:48066). Oct 30 23:59:45.775904 sshd[4235]: Accepted publickey for core from 139.178.89.65 port 48066 ssh2: RSA SHA256:Oiivr4FYZNoFRNArKDf1mcLLlGhqoYWE2cfJZWdI7tQ Oct 30 23:59:45.778728 sshd-session[4235]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 30 23:59:45.784781 systemd-logind[1480]: New session 17 of user core. Oct 30 23:59:45.791820 systemd[1]: Started session-17.scope - Session 17 of User core. Oct 30 23:59:46.640741 sshd[4237]: Connection closed by 139.178.89.65 port 48066 Oct 30 23:59:46.641375 sshd-session[4235]: pam_unix(sshd:session): session closed for user core Oct 30 23:59:46.646760 systemd-logind[1480]: Session 17 logged out. Waiting for processes to exit. Oct 30 23:59:46.647783 systemd[1]: sshd@22-91.99.146.238:22-139.178.89.65:48066.service: Deactivated successfully. Oct 30 23:59:46.650431 systemd[1]: session-17.scope: Deactivated successfully. Oct 30 23:59:46.652497 systemd-logind[1480]: Removed session 17. Oct 30 23:59:46.813776 systemd[1]: Started sshd@23-91.99.146.238:22-139.178.89.65:37242.service - OpenSSH per-connection server daemon (139.178.89.65:37242). Oct 30 23:59:47.777054 sshd[4247]: Accepted publickey for core from 139.178.89.65 port 37242 ssh2: RSA SHA256:Oiivr4FYZNoFRNArKDf1mcLLlGhqoYWE2cfJZWdI7tQ Oct 30 23:59:47.778965 sshd-session[4247]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 30 23:59:47.784524 systemd-logind[1480]: New session 18 of user core. Oct 30 23:59:47.789759 systemd[1]: Started session-18.scope - Session 18 of User core. Oct 30 23:59:48.515526 sshd[4249]: Connection closed by 139.178.89.65 port 37242 Oct 30 23:59:48.516326 sshd-session[4247]: pam_unix(sshd:session): session closed for user core Oct 30 23:59:48.521557 systemd[1]: sshd@23-91.99.146.238:22-139.178.89.65:37242.service: Deactivated successfully. Oct 30 23:59:48.525652 systemd[1]: session-18.scope: Deactivated successfully. Oct 30 23:59:48.527844 systemd-logind[1480]: Session 18 logged out. Waiting for processes to exit. Oct 30 23:59:48.529172 systemd-logind[1480]: Removed session 18. Oct 30 23:59:53.687337 systemd[1]: Started sshd@24-91.99.146.238:22-139.178.89.65:37258.service - OpenSSH per-connection server daemon (139.178.89.65:37258). Oct 30 23:59:54.631624 sshd[4266]: Accepted publickey for core from 139.178.89.65 port 37258 ssh2: RSA SHA256:Oiivr4FYZNoFRNArKDf1mcLLlGhqoYWE2cfJZWdI7tQ Oct 30 23:59:54.633982 sshd-session[4266]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 30 23:59:54.640179 systemd-logind[1480]: New session 19 of user core. Oct 30 23:59:54.643682 systemd[1]: Started session-19.scope - Session 19 of User core. Oct 30 23:59:55.354120 sshd[4268]: Connection closed by 139.178.89.65 port 37258 Oct 30 23:59:55.354762 sshd-session[4266]: pam_unix(sshd:session): session closed for user core Oct 30 23:59:55.363138 systemd[1]: sshd@24-91.99.146.238:22-139.178.89.65:37258.service: Deactivated successfully. Oct 30 23:59:55.363476 systemd-logind[1480]: Session 19 logged out. Waiting for processes to exit. Oct 30 23:59:55.366080 systemd[1]: session-19.scope: Deactivated successfully. Oct 30 23:59:55.368680 systemd-logind[1480]: Removed session 19. 
Oct 31 00:00:00.528899 systemd[1]: Started logrotate.service - Rotate and Compress System Logs. Oct 31 00:00:00.531782 systemd[1]: Started sshd@25-91.99.146.238:22-139.178.89.65:37964.service - OpenSSH per-connection server daemon (139.178.89.65:37964). Oct 31 00:00:00.537132 systemd[1]: logrotate.service: Deactivated successfully. Oct 31 00:00:01.500879 sshd[4281]: Accepted publickey for core from 139.178.89.65 port 37964 ssh2: RSA SHA256:Oiivr4FYZNoFRNArKDf1mcLLlGhqoYWE2cfJZWdI7tQ Oct 31 00:00:01.502551 sshd-session[4281]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 31 00:00:01.511469 systemd-logind[1480]: New session 20 of user core. Oct 31 00:00:01.515672 systemd[1]: Started session-20.scope - Session 20 of User core. Oct 31 00:00:02.250068 sshd[4284]: Connection closed by 139.178.89.65 port 37964 Oct 31 00:00:02.251616 sshd-session[4281]: pam_unix(sshd:session): session closed for user core Oct 31 00:00:02.257731 systemd-logind[1480]: Session 20 logged out. Waiting for processes to exit. Oct 31 00:00:02.259047 systemd[1]: sshd@25-91.99.146.238:22-139.178.89.65:37964.service: Deactivated successfully. Oct 31 00:00:02.263205 systemd[1]: session-20.scope: Deactivated successfully. Oct 31 00:00:02.266359 systemd-logind[1480]: Removed session 20. Oct 31 00:00:02.421191 systemd[1]: Started sshd@26-91.99.146.238:22-139.178.89.65:37978.service - OpenSSH per-connection server daemon (139.178.89.65:37978). Oct 31 00:00:03.365432 sshd[4296]: Accepted publickey for core from 139.178.89.65 port 37978 ssh2: RSA SHA256:Oiivr4FYZNoFRNArKDf1mcLLlGhqoYWE2cfJZWdI7tQ Oct 31 00:00:03.367387 sshd-session[4296]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 31 00:00:03.376030 systemd-logind[1480]: New session 21 of user core. Oct 31 00:00:03.379263 systemd[1]: Started session-21.scope - Session 21 of User core. Oct 31 00:00:06.138325 kubelet[2685]: I1031 00:00:06.138239 2685 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-gdzrp" podStartSLOduration=198.138221612 podStartE2EDuration="3m18.138221612s" podCreationTimestamp="2025-10-30 23:56:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-30 23:57:09.143874945 +0000 UTC m=+25.443667921" watchObservedRunningTime="2025-10-31 00:00:06.138221612 +0000 UTC m=+202.438014548" Oct 31 00:00:06.169299 containerd[1497]: time="2025-10-31T00:00:06.168611670Z" level=info msg="StopContainer for \"92c244b1c1f51e9e87b77276810f2c9222125ac98aa46e3e543ac08dc24f4609\" with timeout 30 (s)" Oct 31 00:00:06.175091 containerd[1497]: time="2025-10-31T00:00:06.174694577Z" level=info msg="Stop container \"92c244b1c1f51e9e87b77276810f2c9222125ac98aa46e3e543ac08dc24f4609\" with signal terminated" Oct 31 00:00:06.191680 systemd[1]: cri-containerd-92c244b1c1f51e9e87b77276810f2c9222125ac98aa46e3e543ac08dc24f4609.scope: Deactivated successfully. 
Oct 31 00:00:06.196109 containerd[1497]: time="2025-10-31T00:00:06.195934854Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Oct 31 00:00:06.206160 containerd[1497]: time="2025-10-31T00:00:06.206104367Z" level=info msg="StopContainer for \"3ba2af3d8611de4a5604f48de32281f7baf74b9fe943b61d1d6c073c8bc87ed2\" with timeout 2 (s)" Oct 31 00:00:06.206556 containerd[1497]: time="2025-10-31T00:00:06.206535372Z" level=info msg="Stop container \"3ba2af3d8611de4a5604f48de32281f7baf74b9fe943b61d1d6c073c8bc87ed2\" with signal terminated" Oct 31 00:00:06.219718 systemd-networkd[1396]: lxc_health: Link DOWN Oct 31 00:00:06.219727 systemd-networkd[1396]: lxc_health: Lost carrier Oct 31 00:00:06.234153 systemd[1]: cri-containerd-3ba2af3d8611de4a5604f48de32281f7baf74b9fe943b61d1d6c073c8bc87ed2.scope: Deactivated successfully. Oct 31 00:00:06.235315 systemd[1]: cri-containerd-3ba2af3d8611de4a5604f48de32281f7baf74b9fe943b61d1d6c073c8bc87ed2.scope: Consumed 9.251s CPU time, 124.9M memory peak, 136K read from disk, 12.9M written to disk. Oct 31 00:00:06.251153 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-92c244b1c1f51e9e87b77276810f2c9222125ac98aa46e3e543ac08dc24f4609-rootfs.mount: Deactivated successfully. Oct 31 00:00:06.262970 containerd[1497]: time="2025-10-31T00:00:06.262732037Z" level=info msg="shim disconnected" id=92c244b1c1f51e9e87b77276810f2c9222125ac98aa46e3e543ac08dc24f4609 namespace=k8s.io Oct 31 00:00:06.262970 containerd[1497]: time="2025-10-31T00:00:06.262935439Z" level=warning msg="cleaning up after shim disconnected" id=92c244b1c1f51e9e87b77276810f2c9222125ac98aa46e3e543ac08dc24f4609 namespace=k8s.io Oct 31 00:00:06.262970 containerd[1497]: time="2025-10-31T00:00:06.262945879Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 31 00:00:06.279340 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3ba2af3d8611de4a5604f48de32281f7baf74b9fe943b61d1d6c073c8bc87ed2-rootfs.mount: Deactivated successfully. 
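The "no network config found in /etc/cni/net.d" error above appears as soon as Cilium's 05-cilium.conf is removed during teardown, because the directory no longer holds any loadable CNI configuration. A hedged sketch of that kind of check, simply listing candidate config files in the directory; the accepted extensions are an assumption based on common CNI behaviour, not lifted from containerd's source:

package main

// Illustrative only: list loadable CNI config files the way containerd's
// "no network config found in /etc/cni/net.d" message implies.
import (
    "fmt"
    "os"
    "path/filepath"
)

func main() {
    dir := "/etc/cni/net.d"
    entries, err := os.ReadDir(dir)
    if err != nil {
        fmt.Println("cannot read", dir+":", err)
        return
    }
    var confs []string
    for _, e := range entries {
        switch filepath.Ext(e.Name()) {
        case ".conf", ".conflist", ".json": // assumed extensions
            confs = append(confs, e.Name())
        }
    }
    if len(confs) == 0 {
        fmt.Println("no network config found in", dir) // what containerd reports above
        return
    }
    fmt.Println("CNI configs:", confs)
}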
Oct 31 00:00:06.288329 containerd[1497]: time="2025-10-31T00:00:06.288216960Z" level=info msg="shim disconnected" id=3ba2af3d8611de4a5604f48de32281f7baf74b9fe943b61d1d6c073c8bc87ed2 namespace=k8s.io Oct 31 00:00:06.288798 containerd[1497]: time="2025-10-31T00:00:06.288270441Z" level=warning msg="cleaning up after shim disconnected" id=3ba2af3d8611de4a5604f48de32281f7baf74b9fe943b61d1d6c073c8bc87ed2 namespace=k8s.io Oct 31 00:00:06.288798 containerd[1497]: time="2025-10-31T00:00:06.288711726Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 31 00:00:06.292629 containerd[1497]: time="2025-10-31T00:00:06.292567649Z" level=warning msg="cleanup warnings time=\"2025-10-31T00:00:06Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Oct 31 00:00:06.297207 containerd[1497]: time="2025-10-31T00:00:06.296914737Z" level=info msg="StopContainer for \"92c244b1c1f51e9e87b77276810f2c9222125ac98aa46e3e543ac08dc24f4609\" returns successfully" Oct 31 00:00:06.300488 containerd[1497]: time="2025-10-31T00:00:06.297758346Z" level=info msg="StopPodSandbox for \"35d7ebff8c841970bd4ed4a079e1febefa412b7e92acba1e0e3f25084385d24e\"" Oct 31 00:00:06.300488 containerd[1497]: time="2025-10-31T00:00:06.297826507Z" level=info msg="Container to stop \"92c244b1c1f51e9e87b77276810f2c9222125ac98aa46e3e543ac08dc24f4609\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Oct 31 00:00:06.302977 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-35d7ebff8c841970bd4ed4a079e1febefa412b7e92acba1e0e3f25084385d24e-shm.mount: Deactivated successfully. Oct 31 00:00:06.312855 systemd[1]: cri-containerd-35d7ebff8c841970bd4ed4a079e1febefa412b7e92acba1e0e3f25084385d24e.scope: Deactivated successfully. 
Oct 31 00:00:06.317230 containerd[1497]: time="2025-10-31T00:00:06.317081961Z" level=warning msg="cleanup warnings time=\"2025-10-31T00:00:06Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Oct 31 00:00:06.326405 containerd[1497]: time="2025-10-31T00:00:06.326348344Z" level=info msg="StopContainer for \"3ba2af3d8611de4a5604f48de32281f7baf74b9fe943b61d1d6c073c8bc87ed2\" returns successfully" Oct 31 00:00:06.328240 containerd[1497]: time="2025-10-31T00:00:06.327978003Z" level=info msg="StopPodSandbox for \"3ca0e67990709c9b7a89574ad8b7dd0e1d8abbcd893a1d35e6321bd15884e2f9\"" Oct 31 00:00:06.328240 containerd[1497]: time="2025-10-31T00:00:06.328030723Z" level=info msg="Container to stop \"c5fc189db80a9abfaa2e6854d15abd452fbf582b502c63c9a8552030337c4cd3\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Oct 31 00:00:06.328240 containerd[1497]: time="2025-10-31T00:00:06.328041763Z" level=info msg="Container to stop \"dc8774a63ec812e8336d34fa63735ebed290ea430d0a5d8f0d43c2e0d3dc6c34\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Oct 31 00:00:06.328240 containerd[1497]: time="2025-10-31T00:00:06.328050643Z" level=info msg="Container to stop \"3ba2af3d8611de4a5604f48de32281f7baf74b9fe943b61d1d6c073c8bc87ed2\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Oct 31 00:00:06.328931 containerd[1497]: time="2025-10-31T00:00:06.328519609Z" level=info msg="Container to stop \"4d7b55590408901de19b4a439fda5ed6e870eece4eee762a0c3e761e870b2cc2\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Oct 31 00:00:06.329276 containerd[1497]: time="2025-10-31T00:00:06.328573849Z" level=info msg="Container to stop \"a9c690bc7f1f0f1613753c923f0d089beb11971c95b40d6a22f92b7037d66ecb\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Oct 31 00:00:06.336678 systemd[1]: cri-containerd-3ca0e67990709c9b7a89574ad8b7dd0e1d8abbcd893a1d35e6321bd15884e2f9.scope: Deactivated successfully. 
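The StopContainer entries above describe the usual stop sequence: send the termination signal, wait up to the container's stop timeout (2 s in one case above, 30 s in the other), then force-kill if it is still running; containers already in CONTAINER_EXITED are skipped. A minimal sketch of that SIGTERM / wait-with-timeout / SIGKILL pattern against an ordinary child process, not a real container:

package main

// Illustrative only: the SIGTERM / wait-with-timeout / SIGKILL pattern that
// the StopContainer entries above describe ("sleep 60" is a stand-in
// workload, not a container).
import (
    "fmt"
    "os/exec"
    "syscall"
    "time"
)

func main() {
    cmd := exec.Command("sleep", "60")
    if err := cmd.Start(); err != nil {
        fmt.Println("start:", err)
        return
    }

    done := make(chan error, 1)
    go func() { done <- cmd.Wait() }()

    _ = cmd.Process.Signal(syscall.SIGTERM) // "Stop container ... with signal terminated"
    select {
    case err := <-done:
        fmt.Println("exited after SIGTERM:", err)
    case <-time.After(2 * time.Second): // "with timeout 2 (s)"
        _ = cmd.Process.Kill()
        fmt.Println("killed after timeout:", <-done)
    }
}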
Oct 31 00:00:06.357481 containerd[1497]: time="2025-10-31T00:00:06.356398119Z" level=info msg="shim disconnected" id=35d7ebff8c841970bd4ed4a079e1febefa412b7e92acba1e0e3f25084385d24e namespace=k8s.io Oct 31 00:00:06.357481 containerd[1497]: time="2025-10-31T00:00:06.356499480Z" level=warning msg="cleaning up after shim disconnected" id=35d7ebff8c841970bd4ed4a079e1febefa412b7e92acba1e0e3f25084385d24e namespace=k8s.io Oct 31 00:00:06.357481 containerd[1497]: time="2025-10-31T00:00:06.356510640Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 31 00:00:06.374544 containerd[1497]: time="2025-10-31T00:00:06.374429479Z" level=info msg="TearDown network for sandbox \"35d7ebff8c841970bd4ed4a079e1febefa412b7e92acba1e0e3f25084385d24e\" successfully" Oct 31 00:00:06.374544 containerd[1497]: time="2025-10-31T00:00:06.374532000Z" level=info msg="StopPodSandbox for \"35d7ebff8c841970bd4ed4a079e1febefa412b7e92acba1e0e3f25084385d24e\" returns successfully" Oct 31 00:00:06.384354 containerd[1497]: time="2025-10-31T00:00:06.383954545Z" level=info msg="shim disconnected" id=3ca0e67990709c9b7a89574ad8b7dd0e1d8abbcd893a1d35e6321bd15884e2f9 namespace=k8s.io Oct 31 00:00:06.386362 containerd[1497]: time="2025-10-31T00:00:06.385990728Z" level=warning msg="cleaning up after shim disconnected" id=3ca0e67990709c9b7a89574ad8b7dd0e1d8abbcd893a1d35e6321bd15884e2f9 namespace=k8s.io Oct 31 00:00:06.386362 containerd[1497]: time="2025-10-31T00:00:06.386034728Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 31 00:00:06.406101 containerd[1497]: time="2025-10-31T00:00:06.405929390Z" level=info msg="TearDown network for sandbox \"3ca0e67990709c9b7a89574ad8b7dd0e1d8abbcd893a1d35e6321bd15884e2f9\" successfully" Oct 31 00:00:06.406101 containerd[1497]: time="2025-10-31T00:00:06.405992230Z" level=info msg="StopPodSandbox for \"3ca0e67990709c9b7a89574ad8b7dd0e1d8abbcd893a1d35e6321bd15884e2f9\" returns successfully" Oct 31 00:00:06.496287 kubelet[2685]: I1031 00:00:06.495772 2685 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hjp4t\" (UniqueName: \"kubernetes.io/projected/edd46b38-8dc5-483c-8162-68a7efe678ec-kube-api-access-hjp4t\") pod \"edd46b38-8dc5-483c-8162-68a7efe678ec\" (UID: \"edd46b38-8dc5-483c-8162-68a7efe678ec\") " Oct 31 00:00:06.496287 kubelet[2685]: I1031 00:00:06.495821 2685 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/edd46b38-8dc5-483c-8162-68a7efe678ec-etc-cni-netd\") pod \"edd46b38-8dc5-483c-8162-68a7efe678ec\" (UID: \"edd46b38-8dc5-483c-8162-68a7efe678ec\") " Oct 31 00:00:06.496287 kubelet[2685]: I1031 00:00:06.495841 2685 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/edd46b38-8dc5-483c-8162-68a7efe678ec-hostproc\") pod \"edd46b38-8dc5-483c-8162-68a7efe678ec\" (UID: \"edd46b38-8dc5-483c-8162-68a7efe678ec\") " Oct 31 00:00:06.496287 kubelet[2685]: I1031 00:00:06.495860 2685 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/408159b7-4f70-4ef6-9b26-9e1565e3a2ea-cilium-config-path\") pod \"408159b7-4f70-4ef6-9b26-9e1565e3a2ea\" (UID: \"408159b7-4f70-4ef6-9b26-9e1565e3a2ea\") " Oct 31 00:00:06.496287 kubelet[2685]: I1031 00:00:06.495877 2685 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nrb4w\" (UniqueName: 
\"kubernetes.io/projected/408159b7-4f70-4ef6-9b26-9e1565e3a2ea-kube-api-access-nrb4w\") pod \"408159b7-4f70-4ef6-9b26-9e1565e3a2ea\" (UID: \"408159b7-4f70-4ef6-9b26-9e1565e3a2ea\") " Oct 31 00:00:06.496287 kubelet[2685]: I1031 00:00:06.495897 2685 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/edd46b38-8dc5-483c-8162-68a7efe678ec-host-proc-sys-kernel\") pod \"edd46b38-8dc5-483c-8162-68a7efe678ec\" (UID: \"edd46b38-8dc5-483c-8162-68a7efe678ec\") " Oct 31 00:00:06.496590 kubelet[2685]: I1031 00:00:06.495912 2685 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/edd46b38-8dc5-483c-8162-68a7efe678ec-hubble-tls\") pod \"edd46b38-8dc5-483c-8162-68a7efe678ec\" (UID: \"edd46b38-8dc5-483c-8162-68a7efe678ec\") " Oct 31 00:00:06.496590 kubelet[2685]: I1031 00:00:06.495933 2685 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/edd46b38-8dc5-483c-8162-68a7efe678ec-cilium-config-path\") pod \"edd46b38-8dc5-483c-8162-68a7efe678ec\" (UID: \"edd46b38-8dc5-483c-8162-68a7efe678ec\") " Oct 31 00:00:06.496590 kubelet[2685]: I1031 00:00:06.495948 2685 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/edd46b38-8dc5-483c-8162-68a7efe678ec-xtables-lock\") pod \"edd46b38-8dc5-483c-8162-68a7efe678ec\" (UID: \"edd46b38-8dc5-483c-8162-68a7efe678ec\") " Oct 31 00:00:06.496590 kubelet[2685]: I1031 00:00:06.495962 2685 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/edd46b38-8dc5-483c-8162-68a7efe678ec-host-proc-sys-net\") pod \"edd46b38-8dc5-483c-8162-68a7efe678ec\" (UID: \"edd46b38-8dc5-483c-8162-68a7efe678ec\") " Oct 31 00:00:06.496590 kubelet[2685]: I1031 00:00:06.495977 2685 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/edd46b38-8dc5-483c-8162-68a7efe678ec-cni-path\") pod \"edd46b38-8dc5-483c-8162-68a7efe678ec\" (UID: \"edd46b38-8dc5-483c-8162-68a7efe678ec\") " Oct 31 00:00:06.496590 kubelet[2685]: I1031 00:00:06.495996 2685 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/edd46b38-8dc5-483c-8162-68a7efe678ec-lib-modules\") pod \"edd46b38-8dc5-483c-8162-68a7efe678ec\" (UID: \"edd46b38-8dc5-483c-8162-68a7efe678ec\") " Oct 31 00:00:06.496717 kubelet[2685]: I1031 00:00:06.496013 2685 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/edd46b38-8dc5-483c-8162-68a7efe678ec-cilium-run\") pod \"edd46b38-8dc5-483c-8162-68a7efe678ec\" (UID: \"edd46b38-8dc5-483c-8162-68a7efe678ec\") " Oct 31 00:00:06.496717 kubelet[2685]: I1031 00:00:06.496034 2685 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/edd46b38-8dc5-483c-8162-68a7efe678ec-clustermesh-secrets\") pod \"edd46b38-8dc5-483c-8162-68a7efe678ec\" (UID: \"edd46b38-8dc5-483c-8162-68a7efe678ec\") " Oct 31 00:00:06.496717 kubelet[2685]: I1031 00:00:06.496052 2685 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: 
\"kubernetes.io/host-path/edd46b38-8dc5-483c-8162-68a7efe678ec-bpf-maps\") pod \"edd46b38-8dc5-483c-8162-68a7efe678ec\" (UID: \"edd46b38-8dc5-483c-8162-68a7efe678ec\") " Oct 31 00:00:06.496717 kubelet[2685]: I1031 00:00:06.496067 2685 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/edd46b38-8dc5-483c-8162-68a7efe678ec-cilium-cgroup\") pod \"edd46b38-8dc5-483c-8162-68a7efe678ec\" (UID: \"edd46b38-8dc5-483c-8162-68a7efe678ec\") " Oct 31 00:00:06.496717 kubelet[2685]: I1031 00:00:06.496128 2685 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/edd46b38-8dc5-483c-8162-68a7efe678ec-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "edd46b38-8dc5-483c-8162-68a7efe678ec" (UID: "edd46b38-8dc5-483c-8162-68a7efe678ec"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Oct 31 00:00:06.496717 kubelet[2685]: I1031 00:00:06.496166 2685 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/edd46b38-8dc5-483c-8162-68a7efe678ec-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "edd46b38-8dc5-483c-8162-68a7efe678ec" (UID: "edd46b38-8dc5-483c-8162-68a7efe678ec"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Oct 31 00:00:06.496904 kubelet[2685]: I1031 00:00:06.496181 2685 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/edd46b38-8dc5-483c-8162-68a7efe678ec-hostproc" (OuterVolumeSpecName: "hostproc") pod "edd46b38-8dc5-483c-8162-68a7efe678ec" (UID: "edd46b38-8dc5-483c-8162-68a7efe678ec"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Oct 31 00:00:06.498837 kubelet[2685]: I1031 00:00:06.498508 2685 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/edd46b38-8dc5-483c-8162-68a7efe678ec-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "edd46b38-8dc5-483c-8162-68a7efe678ec" (UID: "edd46b38-8dc5-483c-8162-68a7efe678ec"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Oct 31 00:00:06.498976 kubelet[2685]: I1031 00:00:06.498902 2685 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/edd46b38-8dc5-483c-8162-68a7efe678ec-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "edd46b38-8dc5-483c-8162-68a7efe678ec" (UID: "edd46b38-8dc5-483c-8162-68a7efe678ec"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Oct 31 00:00:06.500262 kubelet[2685]: I1031 00:00:06.499808 2685 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/edd46b38-8dc5-483c-8162-68a7efe678ec-cni-path" (OuterVolumeSpecName: "cni-path") pod "edd46b38-8dc5-483c-8162-68a7efe678ec" (UID: "edd46b38-8dc5-483c-8162-68a7efe678ec"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Oct 31 00:00:06.500262 kubelet[2685]: I1031 00:00:06.499853 2685 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/edd46b38-8dc5-483c-8162-68a7efe678ec-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "edd46b38-8dc5-483c-8162-68a7efe678ec" (UID: "edd46b38-8dc5-483c-8162-68a7efe678ec"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Oct 31 00:00:06.500262 kubelet[2685]: I1031 00:00:06.499869 2685 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/edd46b38-8dc5-483c-8162-68a7efe678ec-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "edd46b38-8dc5-483c-8162-68a7efe678ec" (UID: "edd46b38-8dc5-483c-8162-68a7efe678ec"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Oct 31 00:00:06.502795 kubelet[2685]: I1031 00:00:06.502726 2685 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/edd46b38-8dc5-483c-8162-68a7efe678ec-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "edd46b38-8dc5-483c-8162-68a7efe678ec" (UID: "edd46b38-8dc5-483c-8162-68a7efe678ec"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Oct 31 00:00:06.507362 kubelet[2685]: I1031 00:00:06.505895 2685 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/edd46b38-8dc5-483c-8162-68a7efe678ec-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "edd46b38-8dc5-483c-8162-68a7efe678ec" (UID: "edd46b38-8dc5-483c-8162-68a7efe678ec"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Oct 31 00:00:06.507362 kubelet[2685]: I1031 00:00:06.507168 2685 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/408159b7-4f70-4ef6-9b26-9e1565e3a2ea-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "408159b7-4f70-4ef6-9b26-9e1565e3a2ea" (UID: "408159b7-4f70-4ef6-9b26-9e1565e3a2ea"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Oct 31 00:00:06.507362 kubelet[2685]: I1031 00:00:06.507314 2685 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/edd46b38-8dc5-483c-8162-68a7efe678ec-kube-api-access-hjp4t" (OuterVolumeSpecName: "kube-api-access-hjp4t") pod "edd46b38-8dc5-483c-8162-68a7efe678ec" (UID: "edd46b38-8dc5-483c-8162-68a7efe678ec"). InnerVolumeSpecName "kube-api-access-hjp4t". PluginName "kubernetes.io/projected", VolumeGIDValue "" Oct 31 00:00:06.508100 kubelet[2685]: I1031 00:00:06.508063 2685 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/edd46b38-8dc5-483c-8162-68a7efe678ec-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "edd46b38-8dc5-483c-8162-68a7efe678ec" (UID: "edd46b38-8dc5-483c-8162-68a7efe678ec"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Oct 31 00:00:06.510573 kubelet[2685]: I1031 00:00:06.510527 2685 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/edd46b38-8dc5-483c-8162-68a7efe678ec-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "edd46b38-8dc5-483c-8162-68a7efe678ec" (UID: "edd46b38-8dc5-483c-8162-68a7efe678ec"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Oct 31 00:00:06.512235 kubelet[2685]: I1031 00:00:06.512163 2685 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/408159b7-4f70-4ef6-9b26-9e1565e3a2ea-kube-api-access-nrb4w" (OuterVolumeSpecName: "kube-api-access-nrb4w") pod "408159b7-4f70-4ef6-9b26-9e1565e3a2ea" (UID: "408159b7-4f70-4ef6-9b26-9e1565e3a2ea"). InnerVolumeSpecName "kube-api-access-nrb4w". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Oct 31 00:00:06.512914 kubelet[2685]: I1031 00:00:06.512848 2685 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/edd46b38-8dc5-483c-8162-68a7efe678ec-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "edd46b38-8dc5-483c-8162-68a7efe678ec" (UID: "edd46b38-8dc5-483c-8162-68a7efe678ec"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Oct 31 00:00:06.597439 kubelet[2685]: I1031 00:00:06.597088 2685 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/edd46b38-8dc5-483c-8162-68a7efe678ec-cilium-config-path\") on node \"ci-4230-2-4-n-ab7d00e960\" DevicePath \"\"" Oct 31 00:00:06.597439 kubelet[2685]: I1031 00:00:06.597150 2685 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/edd46b38-8dc5-483c-8162-68a7efe678ec-xtables-lock\") on node \"ci-4230-2-4-n-ab7d00e960\" DevicePath \"\"" Oct 31 00:00:06.597439 kubelet[2685]: I1031 00:00:06.597168 2685 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/edd46b38-8dc5-483c-8162-68a7efe678ec-host-proc-sys-net\") on node \"ci-4230-2-4-n-ab7d00e960\" DevicePath \"\"" Oct 31 00:00:06.597439 kubelet[2685]: I1031 00:00:06.597183 2685 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/edd46b38-8dc5-483c-8162-68a7efe678ec-clustermesh-secrets\") on node \"ci-4230-2-4-n-ab7d00e960\" DevicePath \"\"" Oct 31 00:00:06.597439 kubelet[2685]: I1031 00:00:06.597200 2685 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/edd46b38-8dc5-483c-8162-68a7efe678ec-cni-path\") on node \"ci-4230-2-4-n-ab7d00e960\" DevicePath \"\"" Oct 31 00:00:06.597439 kubelet[2685]: I1031 00:00:06.597215 2685 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/edd46b38-8dc5-483c-8162-68a7efe678ec-lib-modules\") on node \"ci-4230-2-4-n-ab7d00e960\" DevicePath \"\"" Oct 31 00:00:06.597439 kubelet[2685]: I1031 00:00:06.597262 2685 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/edd46b38-8dc5-483c-8162-68a7efe678ec-cilium-run\") on node \"ci-4230-2-4-n-ab7d00e960\" DevicePath \"\"" Oct 31 00:00:06.597439 kubelet[2685]: I1031 00:00:06.597278 2685 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/edd46b38-8dc5-483c-8162-68a7efe678ec-bpf-maps\") on node \"ci-4230-2-4-n-ab7d00e960\" DevicePath \"\"" Oct 31 00:00:06.597993 kubelet[2685]: I1031 00:00:06.597293 2685 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/edd46b38-8dc5-483c-8162-68a7efe678ec-cilium-cgroup\") on node \"ci-4230-2-4-n-ab7d00e960\" DevicePath \"\"" Oct 31 00:00:06.597993 kubelet[2685]: I1031 00:00:06.597308 2685 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-hjp4t\" (UniqueName: \"kubernetes.io/projected/edd46b38-8dc5-483c-8162-68a7efe678ec-kube-api-access-hjp4t\") on node \"ci-4230-2-4-n-ab7d00e960\" DevicePath \"\"" Oct 31 00:00:06.597993 kubelet[2685]: I1031 00:00:06.597322 2685 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: 
\"kubernetes.io/host-path/edd46b38-8dc5-483c-8162-68a7efe678ec-etc-cni-netd\") on node \"ci-4230-2-4-n-ab7d00e960\" DevicePath \"\"" Oct 31 00:00:06.597993 kubelet[2685]: I1031 00:00:06.597337 2685 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/edd46b38-8dc5-483c-8162-68a7efe678ec-host-proc-sys-kernel\") on node \"ci-4230-2-4-n-ab7d00e960\" DevicePath \"\"" Oct 31 00:00:06.597993 kubelet[2685]: I1031 00:00:06.597353 2685 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/edd46b38-8dc5-483c-8162-68a7efe678ec-hubble-tls\") on node \"ci-4230-2-4-n-ab7d00e960\" DevicePath \"\"" Oct 31 00:00:06.597993 kubelet[2685]: I1031 00:00:06.597368 2685 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/edd46b38-8dc5-483c-8162-68a7efe678ec-hostproc\") on node \"ci-4230-2-4-n-ab7d00e960\" DevicePath \"\"" Oct 31 00:00:06.597993 kubelet[2685]: I1031 00:00:06.597382 2685 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/408159b7-4f70-4ef6-9b26-9e1565e3a2ea-cilium-config-path\") on node \"ci-4230-2-4-n-ab7d00e960\" DevicePath \"\"" Oct 31 00:00:06.597993 kubelet[2685]: I1031 00:00:06.597398 2685 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-nrb4w\" (UniqueName: \"kubernetes.io/projected/408159b7-4f70-4ef6-9b26-9e1565e3a2ea-kube-api-access-nrb4w\") on node \"ci-4230-2-4-n-ab7d00e960\" DevicePath \"\"" Oct 31 00:00:06.612477 kubelet[2685]: I1031 00:00:06.610444 2685 scope.go:117] "RemoveContainer" containerID="92c244b1c1f51e9e87b77276810f2c9222125ac98aa46e3e543ac08dc24f4609" Oct 31 00:00:06.616800 containerd[1497]: time="2025-10-31T00:00:06.616657774Z" level=info msg="RemoveContainer for \"92c244b1c1f51e9e87b77276810f2c9222125ac98aa46e3e543ac08dc24f4609\"" Oct 31 00:00:06.618434 systemd[1]: Removed slice kubepods-besteffort-pod408159b7_4f70_4ef6_9b26_9e1565e3a2ea.slice - libcontainer container kubepods-besteffort-pod408159b7_4f70_4ef6_9b26_9e1565e3a2ea.slice. 
Oct 31 00:00:06.629187 containerd[1497]: time="2025-10-31T00:00:06.629135113Z" level=info msg="RemoveContainer for \"92c244b1c1f51e9e87b77276810f2c9222125ac98aa46e3e543ac08dc24f4609\" returns successfully" Oct 31 00:00:06.629748 kubelet[2685]: I1031 00:00:06.629697 2685 scope.go:117] "RemoveContainer" containerID="92c244b1c1f51e9e87b77276810f2c9222125ac98aa46e3e543ac08dc24f4609" Oct 31 00:00:06.630136 containerd[1497]: time="2025-10-31T00:00:06.630084803Z" level=error msg="ContainerStatus for \"92c244b1c1f51e9e87b77276810f2c9222125ac98aa46e3e543ac08dc24f4609\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"92c244b1c1f51e9e87b77276810f2c9222125ac98aa46e3e543ac08dc24f4609\": not found" Oct 31 00:00:06.631564 kubelet[2685]: E1031 00:00:06.631226 2685 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"92c244b1c1f51e9e87b77276810f2c9222125ac98aa46e3e543ac08dc24f4609\": not found" containerID="92c244b1c1f51e9e87b77276810f2c9222125ac98aa46e3e543ac08dc24f4609" Oct 31 00:00:06.632361 kubelet[2685]: I1031 00:00:06.631722 2685 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"92c244b1c1f51e9e87b77276810f2c9222125ac98aa46e3e543ac08dc24f4609"} err="failed to get container status \"92c244b1c1f51e9e87b77276810f2c9222125ac98aa46e3e543ac08dc24f4609\": rpc error: code = NotFound desc = an error occurred when try to find container \"92c244b1c1f51e9e87b77276810f2c9222125ac98aa46e3e543ac08dc24f4609\": not found" Oct 31 00:00:06.632361 kubelet[2685]: I1031 00:00:06.632130 2685 scope.go:117] "RemoveContainer" containerID="3ba2af3d8611de4a5604f48de32281f7baf74b9fe943b61d1d6c073c8bc87ed2" Oct 31 00:00:06.636534 containerd[1497]: time="2025-10-31T00:00:06.635331582Z" level=info msg="RemoveContainer for \"3ba2af3d8611de4a5604f48de32281f7baf74b9fe943b61d1d6c073c8bc87ed2\"" Oct 31 00:00:06.637690 systemd[1]: Removed slice kubepods-burstable-podedd46b38_8dc5_483c_8162_68a7efe678ec.slice - libcontainer container kubepods-burstable-podedd46b38_8dc5_483c_8162_68a7efe678ec.slice. Oct 31 00:00:06.637822 systemd[1]: kubepods-burstable-podedd46b38_8dc5_483c_8162_68a7efe678ec.slice: Consumed 9.370s CPU time, 125.3M memory peak, 136K read from disk, 12.9M written to disk. 
Oct 31 00:00:06.644965 containerd[1497]: time="2025-10-31T00:00:06.644551964Z" level=info msg="RemoveContainer for \"3ba2af3d8611de4a5604f48de32281f7baf74b9fe943b61d1d6c073c8bc87ed2\" returns successfully" Oct 31 00:00:06.645373 kubelet[2685]: I1031 00:00:06.645350 2685 scope.go:117] "RemoveContainer" containerID="a9c690bc7f1f0f1613753c923f0d089beb11971c95b40d6a22f92b7037d66ecb" Oct 31 00:00:06.654820 containerd[1497]: time="2025-10-31T00:00:06.653834748Z" level=info msg="RemoveContainer for \"a9c690bc7f1f0f1613753c923f0d089beb11971c95b40d6a22f92b7037d66ecb\"" Oct 31 00:00:06.665021 containerd[1497]: time="2025-10-31T00:00:06.660953267Z" level=info msg="RemoveContainer for \"a9c690bc7f1f0f1613753c923f0d089beb11971c95b40d6a22f92b7037d66ecb\" returns successfully" Oct 31 00:00:06.667186 kubelet[2685]: I1031 00:00:06.667147 2685 scope.go:117] "RemoveContainer" containerID="dc8774a63ec812e8336d34fa63735ebed290ea430d0a5d8f0d43c2e0d3dc6c34" Oct 31 00:00:06.671492 containerd[1497]: time="2025-10-31T00:00:06.671382743Z" level=info msg="RemoveContainer for \"dc8774a63ec812e8336d34fa63735ebed290ea430d0a5d8f0d43c2e0d3dc6c34\"" Oct 31 00:00:06.677462 containerd[1497]: time="2025-10-31T00:00:06.677280728Z" level=info msg="RemoveContainer for \"dc8774a63ec812e8336d34fa63735ebed290ea430d0a5d8f0d43c2e0d3dc6c34\" returns successfully" Oct 31 00:00:06.678728 kubelet[2685]: I1031 00:00:06.678699 2685 scope.go:117] "RemoveContainer" containerID="4d7b55590408901de19b4a439fda5ed6e870eece4eee762a0c3e761e870b2cc2" Oct 31 00:00:06.680494 containerd[1497]: time="2025-10-31T00:00:06.680438643Z" level=info msg="RemoveContainer for \"4d7b55590408901de19b4a439fda5ed6e870eece4eee762a0c3e761e870b2cc2\"" Oct 31 00:00:06.688657 containerd[1497]: time="2025-10-31T00:00:06.688613094Z" level=info msg="RemoveContainer for \"4d7b55590408901de19b4a439fda5ed6e870eece4eee762a0c3e761e870b2cc2\" returns successfully" Oct 31 00:00:06.689077 kubelet[2685]: I1031 00:00:06.688978 2685 scope.go:117] "RemoveContainer" containerID="c5fc189db80a9abfaa2e6854d15abd452fbf582b502c63c9a8552030337c4cd3" Oct 31 00:00:06.695837 containerd[1497]: time="2025-10-31T00:00:06.695774174Z" level=info msg="RemoveContainer for \"c5fc189db80a9abfaa2e6854d15abd452fbf582b502c63c9a8552030337c4cd3\"" Oct 31 00:00:06.701152 containerd[1497]: time="2025-10-31T00:00:06.701092593Z" level=info msg="RemoveContainer for \"c5fc189db80a9abfaa2e6854d15abd452fbf582b502c63c9a8552030337c4cd3\" returns successfully" Oct 31 00:00:06.702082 kubelet[2685]: I1031 00:00:06.701741 2685 scope.go:117] "RemoveContainer" containerID="3ba2af3d8611de4a5604f48de32281f7baf74b9fe943b61d1d6c073c8bc87ed2" Oct 31 00:00:06.702383 containerd[1497]: time="2025-10-31T00:00:06.702318007Z" level=error msg="ContainerStatus for \"3ba2af3d8611de4a5604f48de32281f7baf74b9fe943b61d1d6c073c8bc87ed2\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"3ba2af3d8611de4a5604f48de32281f7baf74b9fe943b61d1d6c073c8bc87ed2\": not found" Oct 31 00:00:06.702714 kubelet[2685]: E1031 00:00:06.702681 2685 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"3ba2af3d8611de4a5604f48de32281f7baf74b9fe943b61d1d6c073c8bc87ed2\": not found" containerID="3ba2af3d8611de4a5604f48de32281f7baf74b9fe943b61d1d6c073c8bc87ed2" Oct 31 00:00:06.702810 kubelet[2685]: I1031 00:00:06.702727 2685 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"containerd","ID":"3ba2af3d8611de4a5604f48de32281f7baf74b9fe943b61d1d6c073c8bc87ed2"} err="failed to get container status \"3ba2af3d8611de4a5604f48de32281f7baf74b9fe943b61d1d6c073c8bc87ed2\": rpc error: code = NotFound desc = an error occurred when try to find container \"3ba2af3d8611de4a5604f48de32281f7baf74b9fe943b61d1d6c073c8bc87ed2\": not found" Oct 31 00:00:06.702810 kubelet[2685]: I1031 00:00:06.702752 2685 scope.go:117] "RemoveContainer" containerID="a9c690bc7f1f0f1613753c923f0d089beb11971c95b40d6a22f92b7037d66ecb" Oct 31 00:00:06.703226 containerd[1497]: time="2025-10-31T00:00:06.703186577Z" level=error msg="ContainerStatus for \"a9c690bc7f1f0f1613753c923f0d089beb11971c95b40d6a22f92b7037d66ecb\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a9c690bc7f1f0f1613753c923f0d089beb11971c95b40d6a22f92b7037d66ecb\": not found" Oct 31 00:00:06.703423 kubelet[2685]: E1031 00:00:06.703393 2685 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a9c690bc7f1f0f1613753c923f0d089beb11971c95b40d6a22f92b7037d66ecb\": not found" containerID="a9c690bc7f1f0f1613753c923f0d089beb11971c95b40d6a22f92b7037d66ecb" Oct 31 00:00:06.703525 kubelet[2685]: I1031 00:00:06.703431 2685 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a9c690bc7f1f0f1613753c923f0d089beb11971c95b40d6a22f92b7037d66ecb"} err="failed to get container status \"a9c690bc7f1f0f1613753c923f0d089beb11971c95b40d6a22f92b7037d66ecb\": rpc error: code = NotFound desc = an error occurred when try to find container \"a9c690bc7f1f0f1613753c923f0d089beb11971c95b40d6a22f92b7037d66ecb\": not found" Oct 31 00:00:06.703525 kubelet[2685]: I1031 00:00:06.703482 2685 scope.go:117] "RemoveContainer" containerID="dc8774a63ec812e8336d34fa63735ebed290ea430d0a5d8f0d43c2e0d3dc6c34" Oct 31 00:00:06.703749 containerd[1497]: time="2025-10-31T00:00:06.703714302Z" level=error msg="ContainerStatus for \"dc8774a63ec812e8336d34fa63735ebed290ea430d0a5d8f0d43c2e0d3dc6c34\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"dc8774a63ec812e8336d34fa63735ebed290ea430d0a5d8f0d43c2e0d3dc6c34\": not found" Oct 31 00:00:06.704041 kubelet[2685]: E1031 00:00:06.703898 2685 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"dc8774a63ec812e8336d34fa63735ebed290ea430d0a5d8f0d43c2e0d3dc6c34\": not found" containerID="dc8774a63ec812e8336d34fa63735ebed290ea430d0a5d8f0d43c2e0d3dc6c34" Oct 31 00:00:06.704041 kubelet[2685]: I1031 00:00:06.703932 2685 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"dc8774a63ec812e8336d34fa63735ebed290ea430d0a5d8f0d43c2e0d3dc6c34"} err="failed to get container status \"dc8774a63ec812e8336d34fa63735ebed290ea430d0a5d8f0d43c2e0d3dc6c34\": rpc error: code = NotFound desc = an error occurred when try to find container \"dc8774a63ec812e8336d34fa63735ebed290ea430d0a5d8f0d43c2e0d3dc6c34\": not found" Oct 31 00:00:06.704041 kubelet[2685]: I1031 00:00:06.703954 2685 scope.go:117] "RemoveContainer" containerID="4d7b55590408901de19b4a439fda5ed6e870eece4eee762a0c3e761e870b2cc2" Oct 31 00:00:06.704144 containerd[1497]: time="2025-10-31T00:00:06.704118147Z" level=error msg="ContainerStatus for \"4d7b55590408901de19b4a439fda5ed6e870eece4eee762a0c3e761e870b2cc2\" failed" error="rpc 
error: code = NotFound desc = an error occurred when try to find container \"4d7b55590408901de19b4a439fda5ed6e870eece4eee762a0c3e761e870b2cc2\": not found" Oct 31 00:00:06.704445 kubelet[2685]: E1031 00:00:06.704316 2685 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"4d7b55590408901de19b4a439fda5ed6e870eece4eee762a0c3e761e870b2cc2\": not found" containerID="4d7b55590408901de19b4a439fda5ed6e870eece4eee762a0c3e761e870b2cc2" Oct 31 00:00:06.704445 kubelet[2685]: I1031 00:00:06.704342 2685 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"4d7b55590408901de19b4a439fda5ed6e870eece4eee762a0c3e761e870b2cc2"} err="failed to get container status \"4d7b55590408901de19b4a439fda5ed6e870eece4eee762a0c3e761e870b2cc2\": rpc error: code = NotFound desc = an error occurred when try to find container \"4d7b55590408901de19b4a439fda5ed6e870eece4eee762a0c3e761e870b2cc2\": not found" Oct 31 00:00:06.704445 kubelet[2685]: I1031 00:00:06.704358 2685 scope.go:117] "RemoveContainer" containerID="c5fc189db80a9abfaa2e6854d15abd452fbf582b502c63c9a8552030337c4cd3" Oct 31 00:00:06.704613 containerd[1497]: time="2025-10-31T00:00:06.704580152Z" level=error msg="ContainerStatus for \"c5fc189db80a9abfaa2e6854d15abd452fbf582b502c63c9a8552030337c4cd3\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c5fc189db80a9abfaa2e6854d15abd452fbf582b502c63c9a8552030337c4cd3\": not found" Oct 31 00:00:06.704869 kubelet[2685]: E1031 00:00:06.704720 2685 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c5fc189db80a9abfaa2e6854d15abd452fbf582b502c63c9a8552030337c4cd3\": not found" containerID="c5fc189db80a9abfaa2e6854d15abd452fbf582b502c63c9a8552030337c4cd3" Oct 31 00:00:06.704869 kubelet[2685]: I1031 00:00:06.704748 2685 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c5fc189db80a9abfaa2e6854d15abd452fbf582b502c63c9a8552030337c4cd3"} err="failed to get container status \"c5fc189db80a9abfaa2e6854d15abd452fbf582b502c63c9a8552030337c4cd3\": rpc error: code = NotFound desc = an error occurred when try to find container \"c5fc189db80a9abfaa2e6854d15abd452fbf582b502c63c9a8552030337c4cd3\": not found" Oct 31 00:00:07.168272 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-35d7ebff8c841970bd4ed4a079e1febefa412b7e92acba1e0e3f25084385d24e-rootfs.mount: Deactivated successfully. Oct 31 00:00:07.168785 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3ca0e67990709c9b7a89574ad8b7dd0e1d8abbcd893a1d35e6321bd15884e2f9-rootfs.mount: Deactivated successfully. Oct 31 00:00:07.169114 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-3ca0e67990709c9b7a89574ad8b7dd0e1d8abbcd893a1d35e6321bd15884e2f9-shm.mount: Deactivated successfully. Oct 31 00:00:07.169400 systemd[1]: var-lib-kubelet-pods-408159b7\x2d4f70\x2d4ef6\x2d9b26\x2d9e1565e3a2ea-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dnrb4w.mount: Deactivated successfully. Oct 31 00:00:07.169573 systemd[1]: var-lib-kubelet-pods-edd46b38\x2d8dc5\x2d483c\x2d8162\x2d68a7efe678ec-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dhjp4t.mount: Deactivated successfully. 
Oct 31 00:00:07.169691 systemd[1]: var-lib-kubelet-pods-edd46b38\x2d8dc5\x2d483c\x2d8162\x2d68a7efe678ec-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Oct 31 00:00:07.169880 systemd[1]: var-lib-kubelet-pods-edd46b38\x2d8dc5\x2d483c\x2d8162\x2d68a7efe678ec-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Oct 31 00:00:07.859377 kubelet[2685]: I1031 00:00:07.859295 2685 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="408159b7-4f70-4ef6-9b26-9e1565e3a2ea" path="/var/lib/kubelet/pods/408159b7-4f70-4ef6-9b26-9e1565e3a2ea/volumes" Oct 31 00:00:07.860206 kubelet[2685]: I1031 00:00:07.860136 2685 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="edd46b38-8dc5-483c-8162-68a7efe678ec" path="/var/lib/kubelet/pods/edd46b38-8dc5-483c-8162-68a7efe678ec/volumes" Oct 31 00:00:08.213151 sshd[4298]: Connection closed by 139.178.89.65 port 37978 Oct 31 00:00:08.214085 sshd-session[4296]: pam_unix(sshd:session): session closed for user core Oct 31 00:00:08.219432 systemd-logind[1480]: Session 21 logged out. Waiting for processes to exit. Oct 31 00:00:08.219903 systemd[1]: sshd@26-91.99.146.238:22-139.178.89.65:37978.service: Deactivated successfully. Oct 31 00:00:08.224009 systemd[1]: session-21.scope: Deactivated successfully. Oct 31 00:00:08.224274 systemd[1]: session-21.scope: Consumed 1.618s CPU time, 23.5M memory peak. Oct 31 00:00:08.225970 systemd-logind[1480]: Removed session 21. Oct 31 00:00:08.385872 systemd[1]: Started sshd@27-91.99.146.238:22-139.178.89.65:32926.service - OpenSSH per-connection server daemon (139.178.89.65:32926). Oct 31 00:00:09.041044 kubelet[2685]: E1031 00:00:09.040965 2685 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 31 00:00:09.353096 sshd[4460]: Accepted publickey for core from 139.178.89.65 port 32926 ssh2: RSA SHA256:Oiivr4FYZNoFRNArKDf1mcLLlGhqoYWE2cfJZWdI7tQ Oct 31 00:00:09.354989 sshd-session[4460]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 31 00:00:09.362905 systemd-logind[1480]: New session 22 of user core. Oct 31 00:00:09.376719 systemd[1]: Started session-22.scope - Session 22 of User core. Oct 31 00:00:10.917759 kubelet[2685]: I1031 00:00:10.917103 2685 memory_manager.go:355] "RemoveStaleState removing state" podUID="408159b7-4f70-4ef6-9b26-9e1565e3a2ea" containerName="cilium-operator" Oct 31 00:00:10.917759 kubelet[2685]: I1031 00:00:10.917137 2685 memory_manager.go:355] "RemoveStaleState removing state" podUID="edd46b38-8dc5-483c-8162-68a7efe678ec" containerName="cilium-agent" Oct 31 00:00:10.929343 systemd[1]: Created slice kubepods-burstable-podc8f0b66d_9b88_4c6f_b7d9_cbc7f7a1b095.slice - libcontainer container kubepods-burstable-podc8f0b66d_9b88_4c6f_b7d9_cbc7f7a1b095.slice. 
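[Editor's note] The stretch above shows the usual teardown pattern: kubelet unmounts the pod's volumes, asks the runtime to remove each container, and then tolerates `NotFound` when a later `ContainerStatus` query races with the removal. The following is a minimal, hedged sketch (not kubelet's own code) of that "treat NotFound as already deleted" check over the CRI API; the containerd socket path is an assumption for a typical install and the container ID is copied from the log.

```go
package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/credentials/insecure"
	"google.golang.org/grpc/status"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// Container ID copied from the log above; the socket path is an assumption
	// for a stock containerd setup and may differ on this node.
	const id = "92c244b1c1f51e9e87b77276810f2c9222125ac98aa46e3e543ac08dc24f4609"

	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	rt := runtimeapi.NewRuntimeServiceClient(conn)
	_, err = rt.ContainerStatus(ctx, &runtimeapi.ContainerStatusRequest{ContainerId: id})
	if status.Code(err) == codes.NotFound {
		// This is the outcome the "DeleteContainer returned error ... not found"
		// lines record: the container is already gone, so nothing remains to do.
		fmt.Println("container already removed:", id)
		return
	}
	fmt.Println("status call result:", err)
}
```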
Oct 31 00:00:10.955191 kubelet[2685]: W1031 00:00:10.954963 2685 reflector.go:569] object-"kube-system"/"cilium-clustermesh": failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:ci-4230-2-4-n-ab7d00e960" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4230-2-4-n-ab7d00e960' and this object Oct 31 00:00:10.955191 kubelet[2685]: E1031 00:00:10.955016 2685 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"cilium-clustermesh\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"cilium-clustermesh\" is forbidden: User \"system:node:ci-4230-2-4-n-ab7d00e960\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4230-2-4-n-ab7d00e960' and this object" logger="UnhandledError" Oct 31 00:00:10.955191 kubelet[2685]: W1031 00:00:10.955066 2685 reflector.go:569] object-"kube-system"/"cilium-ipsec-keys": failed to list *v1.Secret: secrets "cilium-ipsec-keys" is forbidden: User "system:node:ci-4230-2-4-n-ab7d00e960" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4230-2-4-n-ab7d00e960' and this object Oct 31 00:00:10.955191 kubelet[2685]: E1031 00:00:10.955078 2685 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"cilium-ipsec-keys\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"cilium-ipsec-keys\" is forbidden: User \"system:node:ci-4230-2-4-n-ab7d00e960\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4230-2-4-n-ab7d00e960' and this object" logger="UnhandledError" Oct 31 00:00:10.955191 kubelet[2685]: W1031 00:00:10.955109 2685 reflector.go:569] object-"kube-system"/"hubble-server-certs": failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:ci-4230-2-4-n-ab7d00e960" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4230-2-4-n-ab7d00e960' and this object Oct 31 00:00:10.955463 kubelet[2685]: E1031 00:00:10.955120 2685 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"hubble-server-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"hubble-server-certs\" is forbidden: User \"system:node:ci-4230-2-4-n-ab7d00e960\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4230-2-4-n-ab7d00e960' and this object" logger="UnhandledError" Oct 31 00:00:10.955463 kubelet[2685]: W1031 00:00:10.955156 2685 reflector.go:569] object-"kube-system"/"cilium-config": failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:ci-4230-2-4-n-ab7d00e960" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4230-2-4-n-ab7d00e960' and this object Oct 31 00:00:10.955463 kubelet[2685]: E1031 00:00:10.955165 2685 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"cilium-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"cilium-config\" is forbidden: User \"system:node:ci-4230-2-4-n-ab7d00e960\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4230-2-4-n-ab7d00e960' and this object" logger="UnhandledError" Oct 31 
00:00:10.955652 kubelet[2685]: I1031 00:00:10.955599 2685 status_manager.go:890] "Failed to get status for pod" podUID="c8f0b66d-9b88-4c6f-b7d9-cbc7f7a1b095" pod="kube-system/cilium-58p2n" err="pods \"cilium-58p2n\" is forbidden: User \"system:node:ci-4230-2-4-n-ab7d00e960\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4230-2-4-n-ab7d00e960' and this object" Oct 31 00:00:11.028155 kubelet[2685]: I1031 00:00:11.027700 2685 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c8f0b66d-9b88-4c6f-b7d9-cbc7f7a1b095-host-proc-sys-net\") pod \"cilium-58p2n\" (UID: \"c8f0b66d-9b88-4c6f-b7d9-cbc7f7a1b095\") " pod="kube-system/cilium-58p2n" Oct 31 00:00:11.028155 kubelet[2685]: I1031 00:00:11.027761 2685 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c8f0b66d-9b88-4c6f-b7d9-cbc7f7a1b095-xtables-lock\") pod \"cilium-58p2n\" (UID: \"c8f0b66d-9b88-4c6f-b7d9-cbc7f7a1b095\") " pod="kube-system/cilium-58p2n" Oct 31 00:00:11.028155 kubelet[2685]: I1031 00:00:11.027782 2685 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c8f0b66d-9b88-4c6f-b7d9-cbc7f7a1b095-cilium-run\") pod \"cilium-58p2n\" (UID: \"c8f0b66d-9b88-4c6f-b7d9-cbc7f7a1b095\") " pod="kube-system/cilium-58p2n" Oct 31 00:00:11.028155 kubelet[2685]: I1031 00:00:11.027798 2685 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c8f0b66d-9b88-4c6f-b7d9-cbc7f7a1b095-hostproc\") pod \"cilium-58p2n\" (UID: \"c8f0b66d-9b88-4c6f-b7d9-cbc7f7a1b095\") " pod="kube-system/cilium-58p2n" Oct 31 00:00:11.028155 kubelet[2685]: I1031 00:00:11.027813 2685 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c8f0b66d-9b88-4c6f-b7d9-cbc7f7a1b095-cilium-cgroup\") pod \"cilium-58p2n\" (UID: \"c8f0b66d-9b88-4c6f-b7d9-cbc7f7a1b095\") " pod="kube-system/cilium-58p2n" Oct 31 00:00:11.028155 kubelet[2685]: I1031 00:00:11.027827 2685 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c8f0b66d-9b88-4c6f-b7d9-cbc7f7a1b095-cni-path\") pod \"cilium-58p2n\" (UID: \"c8f0b66d-9b88-4c6f-b7d9-cbc7f7a1b095\") " pod="kube-system/cilium-58p2n" Oct 31 00:00:11.028587 kubelet[2685]: I1031 00:00:11.027844 2685 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c8f0b66d-9b88-4c6f-b7d9-cbc7f7a1b095-bpf-maps\") pod \"cilium-58p2n\" (UID: \"c8f0b66d-9b88-4c6f-b7d9-cbc7f7a1b095\") " pod="kube-system/cilium-58p2n" Oct 31 00:00:11.028587 kubelet[2685]: I1031 00:00:11.027860 2685 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c8f0b66d-9b88-4c6f-b7d9-cbc7f7a1b095-clustermesh-secrets\") pod \"cilium-58p2n\" (UID: \"c8f0b66d-9b88-4c6f-b7d9-cbc7f7a1b095\") " pod="kube-system/cilium-58p2n" Oct 31 00:00:11.028587 kubelet[2685]: I1031 00:00:11.027885 2685 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c8f0b66d-9b88-4c6f-b7d9-cbc7f7a1b095-hubble-tls\") pod \"cilium-58p2n\" (UID: \"c8f0b66d-9b88-4c6f-b7d9-cbc7f7a1b095\") " pod="kube-system/cilium-58p2n" Oct 31 00:00:11.028587 kubelet[2685]: I1031 00:00:11.027903 2685 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c8f0b66d-9b88-4c6f-b7d9-cbc7f7a1b095-lib-modules\") pod \"cilium-58p2n\" (UID: \"c8f0b66d-9b88-4c6f-b7d9-cbc7f7a1b095\") " pod="kube-system/cilium-58p2n" Oct 31 00:00:11.028587 kubelet[2685]: I1031 00:00:11.027926 2685 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/c8f0b66d-9b88-4c6f-b7d9-cbc7f7a1b095-cilium-ipsec-secrets\") pod \"cilium-58p2n\" (UID: \"c8f0b66d-9b88-4c6f-b7d9-cbc7f7a1b095\") " pod="kube-system/cilium-58p2n" Oct 31 00:00:11.028587 kubelet[2685]: I1031 00:00:11.027945 2685 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c8f0b66d-9b88-4c6f-b7d9-cbc7f7a1b095-host-proc-sys-kernel\") pod \"cilium-58p2n\" (UID: \"c8f0b66d-9b88-4c6f-b7d9-cbc7f7a1b095\") " pod="kube-system/cilium-58p2n" Oct 31 00:00:11.028738 kubelet[2685]: I1031 00:00:11.027961 2685 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c8f0b66d-9b88-4c6f-b7d9-cbc7f7a1b095-etc-cni-netd\") pod \"cilium-58p2n\" (UID: \"c8f0b66d-9b88-4c6f-b7d9-cbc7f7a1b095\") " pod="kube-system/cilium-58p2n" Oct 31 00:00:11.028738 kubelet[2685]: I1031 00:00:11.027978 2685 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c8f0b66d-9b88-4c6f-b7d9-cbc7f7a1b095-cilium-config-path\") pod \"cilium-58p2n\" (UID: \"c8f0b66d-9b88-4c6f-b7d9-cbc7f7a1b095\") " pod="kube-system/cilium-58p2n" Oct 31 00:00:11.028738 kubelet[2685]: I1031 00:00:11.027996 2685 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-96xvw\" (UniqueName: \"kubernetes.io/projected/c8f0b66d-9b88-4c6f-b7d9-cbc7f7a1b095-kube-api-access-96xvw\") pod \"cilium-58p2n\" (UID: \"c8f0b66d-9b88-4c6f-b7d9-cbc7f7a1b095\") " pod="kube-system/cilium-58p2n" Oct 31 00:00:11.070765 sshd[4463]: Connection closed by 139.178.89.65 port 32926 Oct 31 00:00:11.071779 sshd-session[4460]: pam_unix(sshd:session): session closed for user core Oct 31 00:00:11.077554 systemd[1]: sshd@27-91.99.146.238:22-139.178.89.65:32926.service: Deactivated successfully. Oct 31 00:00:11.080817 systemd[1]: session-22.scope: Deactivated successfully. Oct 31 00:00:11.082258 systemd-logind[1480]: Session 22 logged out. Waiting for processes to exit. Oct 31 00:00:11.084126 systemd-logind[1480]: Removed session 22. Oct 31 00:00:11.240949 systemd[1]: Started sshd@28-91.99.146.238:22-139.178.89.65:32940.service - OpenSSH per-connection server daemon (139.178.89.65:32940). 
Oct 31 00:00:12.130948 kubelet[2685]: E1031 00:00:12.130516 2685 projected.go:263] Couldn't get secret kube-system/hubble-server-certs: failed to sync secret cache: timed out waiting for the condition Oct 31 00:00:12.130948 kubelet[2685]: E1031 00:00:12.130552 2685 projected.go:194] Error preparing data for projected volume hubble-tls for pod kube-system/cilium-58p2n: failed to sync secret cache: timed out waiting for the condition Oct 31 00:00:12.130948 kubelet[2685]: E1031 00:00:12.130634 2685 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c8f0b66d-9b88-4c6f-b7d9-cbc7f7a1b095-hubble-tls podName:c8f0b66d-9b88-4c6f-b7d9-cbc7f7a1b095 nodeName:}" failed. No retries permitted until 2025-10-31 00:00:12.630607425 +0000 UTC m=+208.930400321 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "hubble-tls" (UniqueName: "kubernetes.io/projected/c8f0b66d-9b88-4c6f-b7d9-cbc7f7a1b095-hubble-tls") pod "cilium-58p2n" (UID: "c8f0b66d-9b88-4c6f-b7d9-cbc7f7a1b095") : failed to sync secret cache: timed out waiting for the condition Oct 31 00:00:12.181748 sshd[4476]: Accepted publickey for core from 139.178.89.65 port 32940 ssh2: RSA SHA256:Oiivr4FYZNoFRNArKDf1mcLLlGhqoYWE2cfJZWdI7tQ Oct 31 00:00:12.184167 sshd-session[4476]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 31 00:00:12.191421 systemd-logind[1480]: New session 23 of user core. Oct 31 00:00:12.198810 systemd[1]: Started session-23.scope - Session 23 of User core. Oct 31 00:00:12.739804 containerd[1497]: time="2025-10-31T00:00:12.739068866Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-58p2n,Uid:c8f0b66d-9b88-4c6f-b7d9-cbc7f7a1b095,Namespace:kube-system,Attempt:0,}" Oct 31 00:00:12.785490 containerd[1497]: time="2025-10-31T00:00:12.784370875Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 31 00:00:12.785490 containerd[1497]: time="2025-10-31T00:00:12.784471876Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 31 00:00:12.785490 containerd[1497]: time="2025-10-31T00:00:12.784487356Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 31 00:00:12.785490 containerd[1497]: time="2025-10-31T00:00:12.784580557Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 31 00:00:12.817082 systemd[1]: Started cri-containerd-c3b8f319f67269ac0892bde81828d4bccbc091df2064030b489f615166b54af5.scope - libcontainer container c3b8f319f67269ac0892bde81828d4bccbc091df2064030b489f615166b54af5. Oct 31 00:00:12.827669 sshd[4480]: Connection closed by 139.178.89.65 port 32940 Oct 31 00:00:12.828434 sshd-session[4476]: pam_unix(sshd:session): session closed for user core Oct 31 00:00:12.832635 systemd[1]: sshd@28-91.99.146.238:22-139.178.89.65:32940.service: Deactivated successfully. Oct 31 00:00:12.836380 systemd[1]: session-23.scope: Deactivated successfully. Oct 31 00:00:12.842697 systemd-logind[1480]: Session 23 logged out. Waiting for processes to exit. Oct 31 00:00:12.845271 systemd-logind[1480]: Removed session 23. 
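[Editor's note] The MountVolume.SetUp failure at the start of this stretch ("No retries permitted until ... durationBeforeRetry 500ms") is the volume manager backing off and retrying until the secret cache syncs. A minimal sketch of that retry shape, using the apimachinery backoff helper, is below; the 500 ms starting delay matches the log, while the factor and step count are illustrative assumptions rather than kubelet's exact tuning.

```go
package main

import (
	"errors"
	"fmt"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

// mountHubbleTLS stands in for the real MountVolume.SetUp call; it fails until
// the (simulated) secret cache has synced, like the hubble-tls mount above.
func mountHubbleTLS(cacheSynced func() bool) error {
	if !cacheSynced() {
		return errors.New("failed to sync secret cache: timed out waiting for the condition")
	}
	return nil
}

func main() {
	start := time.Now()
	cacheSynced := func() bool { return time.Since(start) > 1200*time.Millisecond }

	// First retry no sooner than 500ms, then doubling; Factor and Steps are
	// assumptions chosen only to make the sketch terminate.
	backoff := wait.Backoff{Duration: 500 * time.Millisecond, Factor: 2.0, Steps: 5}

	err := wait.ExponentialBackoff(backoff, func() (bool, error) {
		if err := mountHubbleTLS(cacheSynced); err != nil {
			fmt.Println("mount failed, will retry:", err)
			return false, nil // not done, not fatal: retry after the backoff delay
		}
		return true, nil
	})
	if err != nil {
		fmt.Println("gave up:", err)
		return
	}
	fmt.Println("hubble-tls mounted after", time.Since(start).Round(time.Millisecond))
}
```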
Oct 31 00:00:12.850671 containerd[1497]: time="2025-10-31T00:00:12.850512345Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-58p2n,Uid:c8f0b66d-9b88-4c6f-b7d9-cbc7f7a1b095,Namespace:kube-system,Attempt:0,} returns sandbox id \"c3b8f319f67269ac0892bde81828d4bccbc091df2064030b489f615166b54af5\"" Oct 31 00:00:12.855264 containerd[1497]: time="2025-10-31T00:00:12.855122803Z" level=info msg="CreateContainer within sandbox \"c3b8f319f67269ac0892bde81828d4bccbc091df2064030b489f615166b54af5\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Oct 31 00:00:12.868925 containerd[1497]: time="2025-10-31T00:00:12.868782015Z" level=info msg="CreateContainer within sandbox \"c3b8f319f67269ac0892bde81828d4bccbc091df2064030b489f615166b54af5\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"47ed3525674c2002f43bacdd8577c8aa546ff1b0988ced2da82819072123843d\"" Oct 31 00:00:12.872190 containerd[1497]: time="2025-10-31T00:00:12.870648798Z" level=info msg="StartContainer for \"47ed3525674c2002f43bacdd8577c8aa546ff1b0988ced2da82819072123843d\"" Oct 31 00:00:12.902806 systemd[1]: Started cri-containerd-47ed3525674c2002f43bacdd8577c8aa546ff1b0988ced2da82819072123843d.scope - libcontainer container 47ed3525674c2002f43bacdd8577c8aa546ff1b0988ced2da82819072123843d. Oct 31 00:00:12.934503 containerd[1497]: time="2025-10-31T00:00:12.934404599Z" level=info msg="StartContainer for \"47ed3525674c2002f43bacdd8577c8aa546ff1b0988ced2da82819072123843d\" returns successfully" Oct 31 00:00:12.947911 systemd[1]: cri-containerd-47ed3525674c2002f43bacdd8577c8aa546ff1b0988ced2da82819072123843d.scope: Deactivated successfully. Oct 31 00:00:12.993920 containerd[1497]: time="2025-10-31T00:00:12.993663463Z" level=info msg="shim disconnected" id=47ed3525674c2002f43bacdd8577c8aa546ff1b0988ced2da82819072123843d namespace=k8s.io Oct 31 00:00:12.993920 containerd[1497]: time="2025-10-31T00:00:12.993822025Z" level=warning msg="cleaning up after shim disconnected" id=47ed3525674c2002f43bacdd8577c8aa546ff1b0988ced2da82819072123843d namespace=k8s.io Oct 31 00:00:12.993920 containerd[1497]: time="2025-10-31T00:00:12.993832865Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 31 00:00:12.996627 systemd[1]: Started sshd@29-91.99.146.238:22-139.178.89.65:32946.service - OpenSSH per-connection server daemon (139.178.89.65:32946). Oct 31 00:00:13.009843 containerd[1497]: time="2025-10-31T00:00:13.009782667Z" level=warning msg="cleanup warnings time=\"2025-10-31T00:00:13Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Oct 31 00:00:13.675663 containerd[1497]: time="2025-10-31T00:00:13.675434816Z" level=info msg="CreateContainer within sandbox \"c3b8f319f67269ac0892bde81828d4bccbc091df2064030b489f615166b54af5\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Oct 31 00:00:13.695636 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3668291375.mount: Deactivated successfully. 
Oct 31 00:00:13.709616 containerd[1497]: time="2025-10-31T00:00:13.707900471Z" level=info msg="CreateContainer within sandbox \"c3b8f319f67269ac0892bde81828d4bccbc091df2064030b489f615166b54af5\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"8bacb5b8dea336bc0aad57dd9a36752537f4374e03a9e86935d43c3959c2d154\"" Oct 31 00:00:13.711183 containerd[1497]: time="2025-10-31T00:00:13.710980231Z" level=info msg="StartContainer for \"8bacb5b8dea336bc0aad57dd9a36752537f4374e03a9e86935d43c3959c2d154\"" Oct 31 00:00:13.750704 systemd[1]: Started cri-containerd-8bacb5b8dea336bc0aad57dd9a36752537f4374e03a9e86935d43c3959c2d154.scope - libcontainer container 8bacb5b8dea336bc0aad57dd9a36752537f4374e03a9e86935d43c3959c2d154. Oct 31 00:00:13.785213 containerd[1497]: time="2025-10-31T00:00:13.785077498Z" level=info msg="StartContainer for \"8bacb5b8dea336bc0aad57dd9a36752537f4374e03a9e86935d43c3959c2d154\" returns successfully" Oct 31 00:00:13.794538 systemd[1]: cri-containerd-8bacb5b8dea336bc0aad57dd9a36752537f4374e03a9e86935d43c3959c2d154.scope: Deactivated successfully. Oct 31 00:00:13.826627 containerd[1497]: time="2025-10-31T00:00:13.826382746Z" level=info msg="shim disconnected" id=8bacb5b8dea336bc0aad57dd9a36752537f4374e03a9e86935d43c3959c2d154 namespace=k8s.io Oct 31 00:00:13.826627 containerd[1497]: time="2025-10-31T00:00:13.826440546Z" level=warning msg="cleaning up after shim disconnected" id=8bacb5b8dea336bc0aad57dd9a36752537f4374e03a9e86935d43c3959c2d154 namespace=k8s.io Oct 31 00:00:13.826627 containerd[1497]: time="2025-10-31T00:00:13.826464427Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 31 00:00:13.934267 sshd[4578]: Accepted publickey for core from 139.178.89.65 port 32946 ssh2: RSA SHA256:Oiivr4FYZNoFRNArKDf1mcLLlGhqoYWE2cfJZWdI7tQ Oct 31 00:00:13.937192 sshd-session[4578]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 31 00:00:13.946198 systemd-logind[1480]: New session 24 of user core. Oct 31 00:00:13.951691 systemd[1]: Started session-24.scope - Session 24 of User core. Oct 31 00:00:14.042990 kubelet[2685]: E1031 00:00:14.042619 2685 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 31 00:00:14.149185 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8bacb5b8dea336bc0aad57dd9a36752537f4374e03a9e86935d43c3959c2d154-rootfs.mount: Deactivated successfully. Oct 31 00:00:14.682552 containerd[1497]: time="2025-10-31T00:00:14.681334826Z" level=info msg="CreateContainer within sandbox \"c3b8f319f67269ac0892bde81828d4bccbc091df2064030b489f615166b54af5\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Oct 31 00:00:14.714554 containerd[1497]: time="2025-10-31T00:00:14.714506257Z" level=info msg="CreateContainer within sandbox \"c3b8f319f67269ac0892bde81828d4bccbc091df2064030b489f615166b54af5\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"0d0402b0ac5e9224835fd99cba3fc84d790ac255ed7db64e2446d23232d5148c\"" Oct 31 00:00:14.717170 containerd[1497]: time="2025-10-31T00:00:14.715326308Z" level=info msg="StartContainer for \"0d0402b0ac5e9224835fd99cba3fc84d790ac255ed7db64e2446d23232d5148c\"" Oct 31 00:00:14.755774 systemd[1]: Started cri-containerd-0d0402b0ac5e9224835fd99cba3fc84d790ac255ed7db64e2446d23232d5148c.scope - libcontainer container 0d0402b0ac5e9224835fd99cba3fc84d790ac255ed7db64e2446d23232d5148c. 
Oct 31 00:00:14.790388 containerd[1497]: time="2025-10-31T00:00:14.789873557Z" level=info msg="StartContainer for \"0d0402b0ac5e9224835fd99cba3fc84d790ac255ed7db64e2446d23232d5148c\" returns successfully" Oct 31 00:00:14.793499 systemd[1]: cri-containerd-0d0402b0ac5e9224835fd99cba3fc84d790ac255ed7db64e2446d23232d5148c.scope: Deactivated successfully. Oct 31 00:00:14.827725 containerd[1497]: time="2025-10-31T00:00:14.827649449Z" level=info msg="shim disconnected" id=0d0402b0ac5e9224835fd99cba3fc84d790ac255ed7db64e2446d23232d5148c namespace=k8s.io Oct 31 00:00:14.828429 containerd[1497]: time="2025-10-31T00:00:14.828214496Z" level=warning msg="cleaning up after shim disconnected" id=0d0402b0ac5e9224835fd99cba3fc84d790ac255ed7db64e2446d23232d5148c namespace=k8s.io Oct 31 00:00:14.828429 containerd[1497]: time="2025-10-31T00:00:14.828240576Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 31 00:00:15.148899 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0d0402b0ac5e9224835fd99cba3fc84d790ac255ed7db64e2446d23232d5148c-rootfs.mount: Deactivated successfully. Oct 31 00:00:15.695362 containerd[1497]: time="2025-10-31T00:00:15.695205923Z" level=info msg="CreateContainer within sandbox \"c3b8f319f67269ac0892bde81828d4bccbc091df2064030b489f615166b54af5\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Oct 31 00:00:15.742965 containerd[1497]: time="2025-10-31T00:00:15.742803673Z" level=info msg="CreateContainer within sandbox \"c3b8f319f67269ac0892bde81828d4bccbc091df2064030b489f615166b54af5\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"1d100e9b9532a12131cd5651c612f3cf64a0a9c6e3abc8a39a6d7ae5608821d7\"" Oct 31 00:00:15.744054 containerd[1497]: time="2025-10-31T00:00:15.744011689Z" level=info msg="StartContainer for \"1d100e9b9532a12131cd5651c612f3cf64a0a9c6e3abc8a39a6d7ae5608821d7\"" Oct 31 00:00:15.786877 systemd[1]: Started cri-containerd-1d100e9b9532a12131cd5651c612f3cf64a0a9c6e3abc8a39a6d7ae5608821d7.scope - libcontainer container 1d100e9b9532a12131cd5651c612f3cf64a0a9c6e3abc8a39a6d7ae5608821d7. Oct 31 00:00:15.822217 systemd[1]: cri-containerd-1d100e9b9532a12131cd5651c612f3cf64a0a9c6e3abc8a39a6d7ae5608821d7.scope: Deactivated successfully. Oct 31 00:00:15.825595 containerd[1497]: time="2025-10-31T00:00:15.825238843Z" level=info msg="StartContainer for \"1d100e9b9532a12131cd5651c612f3cf64a0a9c6e3abc8a39a6d7ae5608821d7\" returns successfully" Oct 31 00:00:15.850366 containerd[1497]: time="2025-10-31T00:00:15.850301094Z" level=info msg="shim disconnected" id=1d100e9b9532a12131cd5651c612f3cf64a0a9c6e3abc8a39a6d7ae5608821d7 namespace=k8s.io Oct 31 00:00:15.850726 containerd[1497]: time="2025-10-31T00:00:15.850482057Z" level=warning msg="cleaning up after shim disconnected" id=1d100e9b9532a12131cd5651c612f3cf64a0a9c6e3abc8a39a6d7ae5608821d7 namespace=k8s.io Oct 31 00:00:15.850726 containerd[1497]: time="2025-10-31T00:00:15.850495537Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 31 00:00:16.148435 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1d100e9b9532a12131cd5651c612f3cf64a0a9c6e3abc8a39a6d7ae5608821d7-rootfs.mount: Deactivated successfully. 
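[Editor's note] The repeated CreateContainer / StartContainer / "scope: Deactivated" / "shim disconnected" cycles above are the cilium pod's init containers (mount-cgroup, apply-sysctl-overwrites, mount-bpf-fs, clean-cilium-state) each running to completion in order before cilium-agent starts. A hedged client-go sketch that observes the same ordering from the API side is shown below; the pod and namespace names come from the log, the kubeconfig path is an assumption.

```go
package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Kubeconfig path is an assumption for illustration.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/admin.conf")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	for {
		pod, err := cs.CoreV1().Pods("kube-system").Get(
			context.Background(), "cilium-58p2n", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		// Init containers run strictly in order; each Terminated entry here
		// corresponds to one create/start/deactivate cycle in the log above.
		for _, st := range pod.Status.InitContainerStatuses {
			switch {
			case st.State.Terminated != nil:
				fmt.Printf("%-25s finished (exit %d)\n", st.Name, st.State.Terminated.ExitCode)
			case st.State.Running != nil:
				fmt.Printf("%-25s running\n", st.Name)
			default:
				fmt.Printf("%-25s waiting\n", st.Name)
			}
		}
		if pod.Status.Phase == "Running" {
			return
		}
		time.Sleep(2 * time.Second)
	}
}
```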
Oct 31 00:00:16.695813 containerd[1497]: time="2025-10-31T00:00:16.695759064Z" level=info msg="CreateContainer within sandbox \"c3b8f319f67269ac0892bde81828d4bccbc091df2064030b489f615166b54af5\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Oct 31 00:00:16.728486 containerd[1497]: time="2025-10-31T00:00:16.728408703Z" level=info msg="CreateContainer within sandbox \"c3b8f319f67269ac0892bde81828d4bccbc091df2064030b489f615166b54af5\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"aac1284fdceec97840ec72094f391b79f4cb57eb5a748548a9afdb31c127c94b\"" Oct 31 00:00:16.731931 containerd[1497]: time="2025-10-31T00:00:16.730863656Z" level=info msg="StartContainer for \"aac1284fdceec97840ec72094f391b79f4cb57eb5a748548a9afdb31c127c94b\"" Oct 31 00:00:16.773691 systemd[1]: Started cri-containerd-aac1284fdceec97840ec72094f391b79f4cb57eb5a748548a9afdb31c127c94b.scope - libcontainer container aac1284fdceec97840ec72094f391b79f4cb57eb5a748548a9afdb31c127c94b. Oct 31 00:00:16.817958 containerd[1497]: time="2025-10-31T00:00:16.817902946Z" level=info msg="StartContainer for \"aac1284fdceec97840ec72094f391b79f4cb57eb5a748548a9afdb31c127c94b\" returns successfully" Oct 31 00:00:17.136512 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce)) Oct 31 00:00:17.149716 systemd[1]: run-containerd-runc-k8s.io-aac1284fdceec97840ec72094f391b79f4cb57eb5a748548a9afdb31c127c94b-runc.bpuCSB.mount: Deactivated successfully. Oct 31 00:00:18.054966 kubelet[2685]: I1031 00:00:18.054770 2685 setters.go:602] "Node became not ready" node="ci-4230-2-4-n-ab7d00e960" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-31T00:00:18Z","lastTransitionTime":"2025-10-31T00:00:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Oct 31 00:00:20.278656 systemd-networkd[1396]: lxc_health: Link UP Oct 31 00:00:20.290737 systemd-networkd[1396]: lxc_health: Gained carrier Oct 31 00:00:20.810489 kubelet[2685]: I1031 00:00:20.808897 2685 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-58p2n" podStartSLOduration=10.808878952 podStartE2EDuration="10.808878952s" podCreationTimestamp="2025-10-31 00:00:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-31 00:00:17.721840887 +0000 UTC m=+214.021633823" watchObservedRunningTime="2025-10-31 00:00:20.808878952 +0000 UTC m=+217.108671888" Oct 31 00:00:21.522174 systemd-networkd[1396]: lxc_health: Gained IPv6LL Oct 31 00:00:25.504892 systemd[1]: Started sshd@30-91.99.146.238:22-103.165.139.150:37138.service - OpenSSH per-connection server daemon (103.165.139.150:37138). Oct 31 00:00:25.509764 sshd[4653]: Connection closed by 139.178.89.65 port 32946 Oct 31 00:00:25.510292 sshd-session[4578]: pam_unix(sshd:session): session closed for user core Oct 31 00:00:25.515414 systemd[1]: sshd@29-91.99.146.238:22-139.178.89.65:32946.service: Deactivated successfully. Oct 31 00:00:25.520683 systemd[1]: session-24.scope: Deactivated successfully. Oct 31 00:00:25.522946 systemd-logind[1480]: Session 24 logged out. Waiting for processes to exit. Oct 31 00:00:25.524750 systemd-logind[1480]: Removed session 24. 
Oct 31 00:00:26.447775 sshd[5423]: Invalid user space from 103.165.139.150 port 37138 Oct 31 00:00:26.627678 sshd[5423]: Received disconnect from 103.165.139.150 port 37138:11: Bye Bye [preauth] Oct 31 00:00:26.627678 sshd[5423]: Disconnected from invalid user space 103.165.139.150 port 37138 [preauth] Oct 31 00:00:26.631368 systemd[1]: sshd@30-91.99.146.238:22-103.165.139.150:37138.service: Deactivated successfully. Oct 31 00:00:43.888999 containerd[1497]: time="2025-10-31T00:00:43.888931405Z" level=info msg="StopPodSandbox for \"3ca0e67990709c9b7a89574ad8b7dd0e1d8abbcd893a1d35e6321bd15884e2f9\"" Oct 31 00:00:43.889423 containerd[1497]: time="2025-10-31T00:00:43.889095568Z" level=info msg="TearDown network for sandbox \"3ca0e67990709c9b7a89574ad8b7dd0e1d8abbcd893a1d35e6321bd15884e2f9\" successfully" Oct 31 00:00:43.889423 containerd[1497]: time="2025-10-31T00:00:43.889118288Z" level=info msg="StopPodSandbox for \"3ca0e67990709c9b7a89574ad8b7dd0e1d8abbcd893a1d35e6321bd15884e2f9\" returns successfully" Oct 31 00:00:43.891867 containerd[1497]: time="2025-10-31T00:00:43.891730536Z" level=info msg="RemovePodSandbox for \"3ca0e67990709c9b7a89574ad8b7dd0e1d8abbcd893a1d35e6321bd15884e2f9\"" Oct 31 00:00:43.891867 containerd[1497]: time="2025-10-31T00:00:43.891806937Z" level=info msg="Forcibly stopping sandbox \"3ca0e67990709c9b7a89574ad8b7dd0e1d8abbcd893a1d35e6321bd15884e2f9\"" Oct 31 00:00:43.892346 containerd[1497]: time="2025-10-31T00:00:43.891922699Z" level=info msg="TearDown network for sandbox \"3ca0e67990709c9b7a89574ad8b7dd0e1d8abbcd893a1d35e6321bd15884e2f9\" successfully" Oct 31 00:00:43.896732 containerd[1497]: time="2025-10-31T00:00:43.896669385Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3ca0e67990709c9b7a89574ad8b7dd0e1d8abbcd893a1d35e6321bd15884e2f9\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Oct 31 00:00:43.897427 containerd[1497]: time="2025-10-31T00:00:43.896745387Z" level=info msg="RemovePodSandbox \"3ca0e67990709c9b7a89574ad8b7dd0e1d8abbcd893a1d35e6321bd15884e2f9\" returns successfully" Oct 31 00:00:43.897427 containerd[1497]: time="2025-10-31T00:00:43.897282076Z" level=info msg="StopPodSandbox for \"35d7ebff8c841970bd4ed4a079e1febefa412b7e92acba1e0e3f25084385d24e\"" Oct 31 00:00:43.897427 containerd[1497]: time="2025-10-31T00:00:43.897361078Z" level=info msg="TearDown network for sandbox \"35d7ebff8c841970bd4ed4a079e1febefa412b7e92acba1e0e3f25084385d24e\" successfully" Oct 31 00:00:43.897427 containerd[1497]: time="2025-10-31T00:00:43.897371718Z" level=info msg="StopPodSandbox for \"35d7ebff8c841970bd4ed4a079e1febefa412b7e92acba1e0e3f25084385d24e\" returns successfully" Oct 31 00:00:43.898106 containerd[1497]: time="2025-10-31T00:00:43.898070891Z" level=info msg="RemovePodSandbox for \"35d7ebff8c841970bd4ed4a079e1febefa412b7e92acba1e0e3f25084385d24e\"" Oct 31 00:00:43.898257 containerd[1497]: time="2025-10-31T00:00:43.898185453Z" level=info msg="Forcibly stopping sandbox \"35d7ebff8c841970bd4ed4a079e1febefa412b7e92acba1e0e3f25084385d24e\"" Oct 31 00:00:43.898369 containerd[1497]: time="2025-10-31T00:00:43.898242574Z" level=info msg="TearDown network for sandbox \"35d7ebff8c841970bd4ed4a079e1febefa412b7e92acba1e0e3f25084385d24e\" successfully" Oct 31 00:00:43.902080 containerd[1497]: time="2025-10-31T00:00:43.902040723Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"35d7ebff8c841970bd4ed4a079e1febefa412b7e92acba1e0e3f25084385d24e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Oct 31 00:00:43.902384 containerd[1497]: time="2025-10-31T00:00:43.902246966Z" level=info msg="RemovePodSandbox \"35d7ebff8c841970bd4ed4a079e1febefa412b7e92acba1e0e3f25084385d24e\" returns successfully" Oct 31 00:00:49.533955 systemd[1]: Started sshd@31-91.99.146.238:22-152.89.168.4:58834.service - OpenSSH per-connection server daemon (152.89.168.4:58834). Oct 31 00:00:50.150405 sshd[5438]: Invalid user heitor from 152.89.168.4 port 58834 Oct 31 00:00:50.725134 sshd[5438]: Received disconnect from 152.89.168.4 port 58834:11: Bye Bye [preauth] Oct 31 00:00:50.725134 sshd[5438]: Disconnected from invalid user heitor 152.89.168.4 port 58834 [preauth] Oct 31 00:00:50.728150 systemd[1]: sshd@31-91.99.146.238:22-152.89.168.4:58834.service: Deactivated successfully. Oct 31 00:00:57.593588 systemd[1]: cri-containerd-3155dd1748cb140880c350125fd9f2396fc80d0780ab5ecf3b855641d1465a20.scope: Deactivated successfully. Oct 31 00:00:57.594128 systemd[1]: cri-containerd-3155dd1748cb140880c350125fd9f2396fc80d0780ab5ecf3b855641d1465a20.scope: Consumed 5.991s CPU time, 57.4M memory peak. Oct 31 00:00:57.620381 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3155dd1748cb140880c350125fd9f2396fc80d0780ab5ecf3b855641d1465a20-rootfs.mount: Deactivated successfully. 
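[Editor's note] The StopPodSandbox / "Forcibly stopping" / RemovePodSandbox sequence above is kubelet's periodic cleanup of the sandboxes left over from the deleted cilium and cilium-operator pods; containerd warns that it can no longer find a status for them, yet the removal still returns successfully. A hedged sketch of that idempotent stop-then-remove over CRI follows; the sandbox ID is copied from the log and the socket path is an assumption.

```go
package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// Sandbox ID copied from the log above; socket path is an assumption.
	const sandboxID = "3ca0e67990709c9b7a89574ad8b7dd0e1d8abbcd893a1d35e6321bd15884e2f9"

	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()

	rt := runtimeapi.NewRuntimeServiceClient(conn)

	// Stop is idempotent: for a sandbox that is already gone the network
	// teardown is a no-op, matching the "TearDown network ... successfully" lines.
	if _, err := rt.StopPodSandbox(ctx, &runtimeapi.StopPodSandboxRequest{PodSandboxId: sandboxID}); err != nil {
		fmt.Println("stop:", err)
	}
	// Remove deletes the sandbox record even when no status can be found for it,
	// which is what the "Forcibly stopping sandbox" path relies on.
	if _, err := rt.RemovePodSandbox(ctx, &runtimeapi.RemovePodSandboxRequest{PodSandboxId: sandboxID}); err != nil {
		fmt.Println("remove:", err)
	}
	fmt.Println("sandbox removal requested:", sandboxID)
}
```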
Oct 31 00:00:57.626933 containerd[1497]: time="2025-10-31T00:00:57.626853249Z" level=info msg="shim disconnected" id=3155dd1748cb140880c350125fd9f2396fc80d0780ab5ecf3b855641d1465a20 namespace=k8s.io Oct 31 00:00:57.627524 containerd[1497]: time="2025-10-31T00:00:57.626975212Z" level=warning msg="cleaning up after shim disconnected" id=3155dd1748cb140880c350125fd9f2396fc80d0780ab5ecf3b855641d1465a20 namespace=k8s.io Oct 31 00:00:57.627524 containerd[1497]: time="2025-10-31T00:00:57.626990092Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 31 00:00:57.808127 kubelet[2685]: I1031 00:00:57.807293 2685 scope.go:117] "RemoveContainer" containerID="3155dd1748cb140880c350125fd9f2396fc80d0780ab5ecf3b855641d1465a20" Oct 31 00:00:57.811422 containerd[1497]: time="2025-10-31T00:00:57.811354360Z" level=info msg="CreateContainer within sandbox \"0d17389d53f711ab80574bd516a0b6b6b5d0eed4825d890163e69e39ae9167c5\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Oct 31 00:00:57.827352 containerd[1497]: time="2025-10-31T00:00:57.827214116Z" level=info msg="CreateContainer within sandbox \"0d17389d53f711ab80574bd516a0b6b6b5d0eed4825d890163e69e39ae9167c5\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"0057a2ad0d3c813614e0915782d545c180b3266013830749a127d7779a24d3d6\"" Oct 31 00:00:57.828029 containerd[1497]: time="2025-10-31T00:00:57.827988571Z" level=info msg="StartContainer for \"0057a2ad0d3c813614e0915782d545c180b3266013830749a127d7779a24d3d6\"" Oct 31 00:00:57.862768 systemd[1]: Started cri-containerd-0057a2ad0d3c813614e0915782d545c180b3266013830749a127d7779a24d3d6.scope - libcontainer container 0057a2ad0d3c813614e0915782d545c180b3266013830749a127d7779a24d3d6. Oct 31 00:00:57.906763 containerd[1497]: time="2025-10-31T00:00:57.906706097Z" level=info msg="StartContainer for \"0057a2ad0d3c813614e0915782d545c180b3266013830749a127d7779a24d3d6\" returns successfully" Oct 31 00:00:58.004264 kubelet[2685]: E1031 00:00:58.004026 2685 controller.go:195] "Failed to update lease" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:58262->10.0.0.2:2379: read: connection timed out" Oct 31 00:00:58.011284 systemd[1]: cri-containerd-2c73f779db251d6a984350c3a656af75e9efc83efa5915a373513a39b087e005.scope: Deactivated successfully. Oct 31 00:00:58.012153 systemd[1]: cri-containerd-2c73f779db251d6a984350c3a656af75e9efc83efa5915a373513a39b087e005.scope: Consumed 4.640s CPU time, 22.4M memory peak. Oct 31 00:00:58.050878 containerd[1497]: time="2025-10-31T00:00:58.050810450Z" level=info msg="shim disconnected" id=2c73f779db251d6a984350c3a656af75e9efc83efa5915a373513a39b087e005 namespace=k8s.io Oct 31 00:00:58.051088 containerd[1497]: time="2025-10-31T00:00:58.050927412Z" level=warning msg="cleaning up after shim disconnected" id=2c73f779db251d6a984350c3a656af75e9efc83efa5915a373513a39b087e005 namespace=k8s.io Oct 31 00:00:58.051088 containerd[1497]: time="2025-10-31T00:00:58.050967773Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 31 00:00:58.622050 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2c73f779db251d6a984350c3a656af75e9efc83efa5915a373513a39b087e005-rootfs.mount: Deactivated successfully. 
Oct 31 00:00:58.812933 kubelet[2685]: I1031 00:00:58.812898 2685 scope.go:117] "RemoveContainer" containerID="2c73f779db251d6a984350c3a656af75e9efc83efa5915a373513a39b087e005" Oct 31 00:00:58.815508 containerd[1497]: time="2025-10-31T00:00:58.815443628Z" level=info msg="CreateContainer within sandbox \"f5f7892d8486ed11644ace8b3c132814afe1085275b49f5711ee45657cbfca9a\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}" Oct 31 00:00:58.837767 containerd[1497]: time="2025-10-31T00:00:58.837716154Z" level=info msg="CreateContainer within sandbox \"f5f7892d8486ed11644ace8b3c132814afe1085275b49f5711ee45657cbfca9a\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"8e164f02f35d05666db2921a9f5f8453a6b5cdb3109556214fbdc0dcc1a366df\"" Oct 31 00:00:58.838254 containerd[1497]: time="2025-10-31T00:00:58.838227244Z" level=info msg="StartContainer for \"8e164f02f35d05666db2921a9f5f8453a6b5cdb3109556214fbdc0dcc1a366df\"" Oct 31 00:00:58.877704 systemd[1]: Started cri-containerd-8e164f02f35d05666db2921a9f5f8453a6b5cdb3109556214fbdc0dcc1a366df.scope - libcontainer container 8e164f02f35d05666db2921a9f5f8453a6b5cdb3109556214fbdc0dcc1a366df. Oct 31 00:00:58.920906 containerd[1497]: time="2025-10-31T00:00:58.920841697Z" level=info msg="StartContainer for \"8e164f02f35d05666db2921a9f5f8453a6b5cdb3109556214fbdc0dcc1a366df\" returns successfully" Oct 31 00:01:01.843789 kubelet[2685]: E1031 00:01:01.843360 2685 event.go:359] "Server rejected event (will not retry!)" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:58054->10.0.0.2:2379: read: connection timed out" event="&Event{ObjectMeta:{kube-apiserver-ci-4230-2-4-n-ab7d00e960.18736a62b372f7a5 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-ci-4230-2-4-n-ab7d00e960,UID:e8698ef9915cad84386c5a5e97817109,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Readiness probe failed: HTTP probe failed with statuscode: 500,Source:EventSource{Component:kubelet,Host:ci-4230-2-4-n-ab7d00e960,},FirstTimestamp:2025-10-31 00:00:51.381991333 +0000 UTC m=+247.681784309,LastTimestamp:2025-10-31 00:00:51.381991333 +0000 UTC m=+247.681784309,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4230-2-4-n-ab7d00e960,}"
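[Editor's note] The final stretch records kube-controller-manager and kube-scheduler being restarted as Attempt:1 after etcd reads to 10.0.0.2:2379 time out, alongside "Failed to update lease" and a rejected kubelet Event. A hedged client-go sketch for confirming those restarts and the stale node lease from the API side is below; the node name follows the log, while the kubeconfig path is an assumption.

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Node name from the log; kubeconfig path is an assumption.
	const node = "ci-4230-2-4-n-ab7d00e960"

	cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/admin.conf")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.Background()

	// Restart counts: after the shim-disconnected / CreateContainer Attempt:1
	// cycles above, both static pods should report one restart each.
	for _, name := range []string{"kube-controller-manager-" + node, "kube-scheduler-" + node} {
		pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			fmt.Println(name, err)
			continue
		}
		for _, st := range pod.Status.ContainerStatuses {
			fmt.Printf("%s restarts=%d\n", name, st.RestartCount)
		}
	}

	// Node lease: renewTime goes stale while "Failed to update lease" persists.
	lease, err := cs.CoordinationV1().Leases("kube-node-lease").Get(ctx, node, metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("lease last renewed:", lease.Spec.RenewTime)
}
```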