Apr 30 12:58:32.873855 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Apr 30 12:58:32.873895 kernel: Linux version 6.6.88-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.43 p3) 2.43.1) #1 SMP PREEMPT Tue Apr 29 22:28:35 -00 2025
Apr 30 12:58:32.873908 kernel: KASLR enabled
Apr 30 12:58:32.873914 kernel: efi: EFI v2.7 by Ubuntu distribution of EDK II
Apr 30 12:58:32.873919 kernel: efi: SMBIOS 3.0=0x139ed0000 MEMATTR=0x1390b8118 ACPI 2.0=0x136760018 RNG=0x13676e918 MEMRESERVE=0x136b41218
Apr 30 12:58:32.873925 kernel: random: crng init done
Apr 30 12:58:32.873932 kernel: secureboot: Secure boot disabled
Apr 30 12:58:32.873937 kernel: ACPI: Early table checksum verification disabled
Apr 30 12:58:32.873943 kernel: ACPI: RSDP 0x0000000136760018 000024 (v02 BOCHS )
Apr 30 12:58:32.873951 kernel: ACPI: XSDT 0x000000013676FE98 00006C (v01 BOCHS BXPC 00000001 01000013)
Apr 30 12:58:32.873957 kernel: ACPI: FACP 0x000000013676FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Apr 30 12:58:32.873962 kernel: ACPI: DSDT 0x0000000136767518 001468 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Apr 30 12:58:32.873968 kernel: ACPI: APIC 0x000000013676FC18 000108 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Apr 30 12:58:32.873974 kernel: ACPI: PPTT 0x000000013676FD98 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Apr 30 12:58:32.873981 kernel: ACPI: GTDT 0x000000013676D898 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Apr 30 12:58:32.873989 kernel: ACPI: MCFG 0x000000013676FF98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 30 12:58:32.873995 kernel: ACPI: SPCR 0x000000013676E818 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Apr 30 12:58:32.874001 kernel: ACPI: DBG2 0x000000013676E898 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Apr 30 12:58:32.874007 kernel: ACPI: IORT 0x000000013676E418 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Apr 30 12:58:32.874035 kernel: ACPI: BGRT 0x000000013676E798 000038 (v01 INTEL EDK2 00000002 01000013)
Apr 30 12:58:32.874043 kernel: ACPI: SPCR: console: pl011,mmio32,0x9000000,9600
Apr 30 12:58:32.874049 kernel: NUMA: Failed to initialise from firmware
Apr 30 12:58:32.874055 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x0000000139ffffff]
Apr 30 12:58:32.874061 kernel: NUMA: NODE_DATA [mem 0x13966f800-0x139674fff]
Apr 30 12:58:32.874067 kernel: Zone ranges:
Apr 30 12:58:32.874075 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff]
Apr 30 12:58:32.874081 kernel: DMA32 empty
Apr 30 12:58:32.874087 kernel: Normal [mem 0x0000000100000000-0x0000000139ffffff]
Apr 30 12:58:32.874093 kernel: Movable zone start for each node
Apr 30 12:58:32.874099 kernel: Early memory node ranges
Apr 30 12:58:32.874106 kernel: node 0: [mem 0x0000000040000000-0x000000013666ffff]
Apr 30 12:58:32.874112 kernel: node 0: [mem 0x0000000136670000-0x000000013667ffff]
Apr 30 12:58:32.874118 kernel: node 0: [mem 0x0000000136680000-0x000000013676ffff]
Apr 30 12:58:32.874124 kernel: node 0: [mem 0x0000000136770000-0x0000000136b3ffff]
Apr 30 12:58:32.874130 kernel: node 0: [mem 0x0000000136b40000-0x0000000139e1ffff]
Apr 30 12:58:32.874136 kernel: node 0: [mem 0x0000000139e20000-0x0000000139eaffff]
Apr 30 12:58:32.874142 kernel: node 0: [mem 0x0000000139eb0000-0x0000000139ebffff]
Apr 30 12:58:32.874150 kernel: node 0: [mem 0x0000000139ec0000-0x0000000139fdffff]
Apr 30 12:58:32.874156 kernel: node 0: [mem 0x0000000139fe0000-0x0000000139ffffff]
Apr 30 12:58:32.874163 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x0000000139ffffff]
Apr 30 12:58:32.874172 kernel: On node 0, zone Normal: 24576 pages in unavailable ranges
Apr 30 12:58:32.874181 kernel: psci: probing for conduit method from ACPI.
Apr 30 12:58:32.874189 kernel: psci: PSCIv1.1 detected in firmware.
Apr 30 12:58:32.874197 kernel: psci: Using standard PSCI v0.2 function IDs
Apr 30 12:58:32.874203 kernel: psci: Trusted OS migration not required
Apr 30 12:58:32.874212 kernel: psci: SMC Calling Convention v1.1
Apr 30 12:58:32.874219 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Apr 30 12:58:32.874225 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Apr 30 12:58:32.874232 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Apr 30 12:58:32.874238 kernel: pcpu-alloc: [0] 0 [0] 1
Apr 30 12:58:32.874245 kernel: Detected PIPT I-cache on CPU0
Apr 30 12:58:32.874251 kernel: CPU features: detected: GIC system register CPU interface
Apr 30 12:58:32.874257 kernel: CPU features: detected: Hardware dirty bit management
Apr 30 12:58:32.874265 kernel: CPU features: detected: Spectre-v4
Apr 30 12:58:32.874272 kernel: CPU features: detected: Spectre-BHB
Apr 30 12:58:32.874278 kernel: CPU features: kernel page table isolation forced ON by KASLR
Apr 30 12:58:32.874285 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Apr 30 12:58:32.874291 kernel: CPU features: detected: ARM erratum 1418040
Apr 30 12:58:32.874298 kernel: CPU features: detected: SSBS not fully self-synchronizing
Apr 30 12:58:32.874304 kernel: alternatives: applying boot alternatives
Apr 30 12:58:32.874312 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=984055eb0c340c9cf0fb51b368030ed72e75b7f2e065edc13766888ef0b42074
Apr 30 12:58:32.874319 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Apr 30 12:58:32.874325 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Apr 30 12:58:32.874332 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Apr 30 12:58:32.874340 kernel: Fallback order for Node 0: 0
Apr 30 12:58:32.874346 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1008000
Apr 30 12:58:32.874353 kernel: Policy zone: Normal
Apr 30 12:58:32.874359 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Apr 30 12:58:32.874366 kernel: software IO TLB: area num 2.
Apr 30 12:58:32.874372 kernel: software IO TLB: mapped [mem 0x00000000fbfff000-0x00000000fffff000] (64MB)
Apr 30 12:58:32.874379 kernel: Memory: 3883832K/4096000K available (10368K kernel code, 2186K rwdata, 8100K rodata, 38336K init, 897K bss, 212168K reserved, 0K cma-reserved)
Apr 30 12:58:32.874386 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Apr 30 12:58:32.874392 kernel: rcu: Preemptible hierarchical RCU implementation.
Apr 30 12:58:32.874400 kernel: rcu: RCU event tracing is enabled.
Apr 30 12:58:32.874406 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Apr 30 12:58:32.874413 kernel: Trampoline variant of Tasks RCU enabled.
Apr 30 12:58:32.874421 kernel: Tracing variant of Tasks RCU enabled.
Apr 30 12:58:32.874428 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Apr 30 12:58:32.874434 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Apr 30 12:58:32.874441 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Apr 30 12:58:32.874447 kernel: GICv3: 256 SPIs implemented
Apr 30 12:58:32.874454 kernel: GICv3: 0 Extended SPIs implemented
Apr 30 12:58:32.874460 kernel: Root IRQ handler: gic_handle_irq
Apr 30 12:58:32.874467 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Apr 30 12:58:32.874473 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Apr 30 12:58:32.874480 kernel: ITS [mem 0x08080000-0x0809ffff]
Apr 30 12:58:32.874486 kernel: ITS@0x0000000008080000: allocated 8192 Devices @1000c0000 (indirect, esz 8, psz 64K, shr 1)
Apr 30 12:58:32.874495 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @1000d0000 (flat, esz 8, psz 64K, shr 1)
Apr 30 12:58:32.874501 kernel: GICv3: using LPI property table @0x00000001000e0000
Apr 30 12:58:32.874508 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000001000f0000
Apr 30 12:58:32.874514 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Apr 30 12:58:32.874521 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Apr 30 12:58:32.874528 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Apr 30 12:58:32.874535 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Apr 30 12:58:32.874541 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Apr 30 12:58:32.874548 kernel: Console: colour dummy device 80x25
Apr 30 12:58:32.874555 kernel: ACPI: Core revision 20230628
Apr 30 12:58:32.874562 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Apr 30 12:58:32.874570 kernel: pid_max: default: 32768 minimum: 301
Apr 30 12:58:32.874577 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Apr 30 12:58:32.874584 kernel: landlock: Up and running.
Apr 30 12:58:32.874590 kernel: SELinux: Initializing.
Apr 30 12:58:32.874597 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 30 12:58:32.874604 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 30 12:58:32.874610 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 30 12:58:32.874617 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 30 12:58:32.874624 kernel: rcu: Hierarchical SRCU implementation.
Apr 30 12:58:32.874632 kernel: rcu: Max phase no-delay instances is 400.
Apr 30 12:58:32.874639 kernel: Platform MSI: ITS@0x8080000 domain created
Apr 30 12:58:32.874646 kernel: PCI/MSI: ITS@0x8080000 domain created
Apr 30 12:58:32.874653 kernel: Remapping and enabling EFI services.
Apr 30 12:58:32.874659 kernel: smp: Bringing up secondary CPUs ...
Apr 30 12:58:32.874666 kernel: Detected PIPT I-cache on CPU1
Apr 30 12:58:32.874672 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Apr 30 12:58:32.874679 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000100100000
Apr 30 12:58:32.874686 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Apr 30 12:58:32.874694 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Apr 30 12:58:32.874701 kernel: smp: Brought up 1 node, 2 CPUs
Apr 30 12:58:32.874712 kernel: SMP: Total of 2 processors activated.
Apr 30 12:58:32.874721 kernel: CPU features: detected: 32-bit EL0 Support
Apr 30 12:58:32.874728 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Apr 30 12:58:32.874735 kernel: CPU features: detected: Common not Private translations
Apr 30 12:58:32.874742 kernel: CPU features: detected: CRC32 instructions
Apr 30 12:58:32.874749 kernel: CPU features: detected: Enhanced Virtualization Traps
Apr 30 12:58:32.874756 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Apr 30 12:58:32.874764 kernel: CPU features: detected: LSE atomic instructions
Apr 30 12:58:32.874772 kernel: CPU features: detected: Privileged Access Never
Apr 30 12:58:32.874779 kernel: CPU features: detected: RAS Extension Support
Apr 30 12:58:32.874786 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Apr 30 12:58:32.874792 kernel: CPU: All CPU(s) started at EL1
Apr 30 12:58:32.874800 kernel: alternatives: applying system-wide alternatives
Apr 30 12:58:32.874807 kernel: devtmpfs: initialized
Apr 30 12:58:32.874814 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Apr 30 12:58:32.874823 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Apr 30 12:58:32.874830 kernel: pinctrl core: initialized pinctrl subsystem
Apr 30 12:58:32.874837 kernel: SMBIOS 3.0.0 present.
Apr 30 12:58:32.874844 kernel: DMI: Hetzner vServer/KVM Virtual Machine, BIOS 20171111 11/11/2017
Apr 30 12:58:32.874851 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Apr 30 12:58:32.874858 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Apr 30 12:58:32.874865 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Apr 30 12:58:32.874872 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Apr 30 12:58:32.874886 kernel: audit: initializing netlink subsys (disabled)
Apr 30 12:58:32.874896 kernel: audit: type=2000 audit(0.012:1): state=initialized audit_enabled=0 res=1
Apr 30 12:58:32.874903 kernel: thermal_sys: Registered thermal governor 'step_wise'
Apr 30 12:58:32.874910 kernel: cpuidle: using governor menu
Apr 30 12:58:32.874917 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Apr 30 12:58:32.874924 kernel: ASID allocator initialised with 32768 entries
Apr 30 12:58:32.874931 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Apr 30 12:58:32.874939 kernel: Serial: AMBA PL011 UART driver
Apr 30 12:58:32.874946 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Apr 30 12:58:32.874953 kernel: Modules: 0 pages in range for non-PLT usage
Apr 30 12:58:32.874961 kernel: Modules: 509264 pages in range for PLT usage
Apr 30 12:58:32.874968 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Apr 30 12:58:32.874975 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Apr 30 12:58:32.874982 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Apr 30 12:58:32.874989 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Apr 30 12:58:32.874996 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Apr 30 12:58:32.875003 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Apr 30 12:58:32.875010 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Apr 30 12:58:32.876612 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Apr 30 12:58:32.876629 kernel: ACPI: Added _OSI(Module Device)
Apr 30 12:58:32.876637 kernel: ACPI: Added _OSI(Processor Device)
Apr 30 12:58:32.876645 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Apr 30 12:58:32.876652 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Apr 30 12:58:32.876659 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Apr 30 12:58:32.876666 kernel: ACPI: Interpreter enabled
Apr 30 12:58:32.876676 kernel: ACPI: Using GIC for interrupt routing
Apr 30 12:58:32.876684 kernel: ACPI: MCFG table detected, 1 entries
Apr 30 12:58:32.876693 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Apr 30 12:58:32.876702 kernel: printk: console [ttyAMA0] enabled
Apr 30 12:58:32.876710 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Apr 30 12:58:32.876858 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Apr 30 12:58:32.876958 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Apr 30 12:58:32.877082 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Apr 30 12:58:32.877159 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Apr 30 12:58:32.877225 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Apr 30 12:58:32.877238 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Apr 30 12:58:32.877245 kernel: PCI host bridge to bus 0000:00
Apr 30 12:58:32.877320 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Apr 30 12:58:32.877379 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Apr 30 12:58:32.877436 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Apr 30 12:58:32.877492 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Apr 30 12:58:32.877576 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Apr 30 12:58:32.877655 kernel: pci 0000:00:01.0: [1af4:1050] type 00 class 0x038000
Apr 30 12:58:32.877721 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x11289000-0x11289fff]
Apr 30 12:58:32.877800 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000600000-0x8000603fff 64bit pref]
Apr 30 12:58:32.877945 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400
Apr 30 12:58:32.878065 kernel: pci 0000:00:02.0: reg 0x10: [mem 0x11288000-0x11288fff]
Apr 30 12:58:32.878151 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400
Apr 30 12:58:32.878227 kernel: pci 0000:00:02.1: reg 0x10: [mem 0x11287000-0x11287fff]
Apr 30 12:58:32.878301 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400
Apr 30 12:58:32.878369 kernel: pci 0000:00:02.2: reg 0x10: [mem 0x11286000-0x11286fff]
Apr 30 12:58:32.878441 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400
Apr 30 12:58:32.878508 kernel: pci 0000:00:02.3: reg 0x10: [mem 0x11285000-0x11285fff]
Apr 30 12:58:32.878586 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400
Apr 30 12:58:32.878670 kernel: pci 0000:00:02.4: reg 0x10: [mem 0x11284000-0x11284fff]
Apr 30 12:58:32.878756 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400
Apr 30 12:58:32.878828 kernel: pci 0000:00:02.5: reg 0x10: [mem 0x11283000-0x11283fff]
Apr 30 12:58:32.878927 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400
Apr 30 12:58:32.878999 kernel: pci 0000:00:02.6: reg 0x10: [mem 0x11282000-0x11282fff]
Apr 30 12:58:32.881205 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400
Apr 30 12:58:32.881295 kernel: pci 0000:00:02.7: reg 0x10: [mem 0x11281000-0x11281fff]
Apr 30 12:58:32.881380 kernel: pci 0000:00:03.0: [1b36:000c] type 01 class 0x060400
Apr 30 12:58:32.881449 kernel: pci 0000:00:03.0: reg 0x10: [mem 0x11280000-0x11280fff]
Apr 30 12:58:32.881555 kernel: pci 0000:00:04.0: [1b36:0002] type 00 class 0x070002
Apr 30 12:58:32.881626 kernel: pci 0000:00:04.0: reg 0x10: [io 0x0000-0x0007]
Apr 30 12:58:32.881704 kernel: pci 0000:01:00.0: [1af4:1041] type 00 class 0x020000
Apr 30 12:58:32.881771 kernel: pci 0000:01:00.0: reg 0x14: [mem 0x11000000-0x11000fff]
Apr 30 12:58:32.881844 kernel: pci 0000:01:00.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Apr 30 12:58:32.881938 kernel: pci 0000:01:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref]
Apr 30 12:58:32.882029 kernel: pci 0000:02:00.0: [1b36:000d] type 00 class 0x0c0330
Apr 30 12:58:32.882103 kernel: pci 0000:02:00.0: reg 0x10: [mem 0x10e00000-0x10e03fff 64bit]
Apr 30 12:58:32.882181 kernel: pci 0000:03:00.0: [1af4:1043] type 00 class 0x078000
Apr 30 12:58:32.882248 kernel: pci 0000:03:00.0: reg 0x14: [mem 0x10c00000-0x10c00fff]
Apr 30 12:58:32.882314 kernel: pci 0000:03:00.0: reg 0x20: [mem 0x8000100000-0x8000103fff 64bit pref]
Apr 30 12:58:32.882393 kernel: pci 0000:04:00.0: [1af4:1045] type 00 class 0x00ff00
Apr 30 12:58:32.882462 kernel: pci 0000:04:00.0: reg 0x20: [mem 0x8000200000-0x8000203fff 64bit pref]
Apr 30 12:58:32.882546 kernel: pci 0000:05:00.0: [1af4:1044] type 00 class 0x00ff00
Apr 30 12:58:32.882614 kernel: pci 0000:05:00.0: reg 0x14: [mem 0x10800000-0x10800fff]
Apr 30 12:58:32.882680 kernel: pci 0000:05:00.0: reg 0x20: [mem 0x8000300000-0x8000303fff 64bit pref]
Apr 30 12:58:32.882753 kernel: pci 0000:06:00.0: [1af4:1048] type 00 class 0x010000
Apr 30 12:58:32.882837 kernel: pci 0000:06:00.0: reg 0x14: [mem 0x10600000-0x10600fff]
Apr 30 12:58:32.882946 kernel: pci 0000:06:00.0: reg 0x20: [mem 0x8000400000-0x8000403fff 64bit pref]
Apr 30 12:58:32.884221 kernel: pci 0000:07:00.0: [1af4:1041] type 00 class 0x020000
Apr 30 12:58:32.884328 kernel: pci 0000:07:00.0: reg 0x14: [mem 0x10400000-0x10400fff]
Apr 30 12:58:32.884395 kernel: pci 0000:07:00.0: reg 0x20: [mem 0x8000500000-0x8000503fff 64bit pref]
Apr 30 12:58:32.884459 kernel: pci 0000:07:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref]
Apr 30 12:58:32.884534 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000
Apr 30 12:58:32.884598 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 01] add_size 100000 add_align 100000
Apr 30 12:58:32.884663 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff] to [bus 01] add_size 100000 add_align 100000
Apr 30 12:58:32.884730 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000
Apr 30 12:58:32.884793 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000
Apr 30 12:58:32.884856 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x001fffff] to [bus 02] add_size 100000 add_align 100000
Apr 30 12:58:32.884970 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000
Apr 30 12:58:32.886122 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 03] add_size 100000 add_align 100000
Apr 30 12:58:32.886208 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff] to [bus 03] add_size 100000 add_align 100000
Apr 30 12:58:32.886281 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000
Apr 30 12:58:32.886346 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 04] add_size 100000 add_align 100000
Apr 30 12:58:32.886411 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000
Apr 30 12:58:32.886482 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000
Apr 30 12:58:32.886557 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 05] add_size 100000 add_align 100000
Apr 30 12:58:32.886622 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x001fffff] to [bus 05] add_size 100000 add_align 100000
Apr 30 12:58:32.886696 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000
Apr 30 12:58:32.886760 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 06] add_size 100000 add_align 100000
Apr 30 12:58:32.886823 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff] to [bus 06] add_size 100000 add_align 100000
Apr 30 12:58:32.886934 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
Apr 30 12:58:32.887009 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 07] add_size 100000 add_align 100000
Apr 30 12:58:32.887099 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff] to [bus 07] add_size 100000 add_align 100000
Apr 30 12:58:32.887167 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
Apr 30 12:58:32.887237 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 08] add_size 200000 add_align 100000
Apr 30 12:58:32.887300 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff] to [bus 08] add_size 200000 add_align 100000
Apr 30 12:58:32.887368 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
Apr 30 12:58:32.887432 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 09] add_size 200000 add_align 100000
Apr 30 12:58:32.887496 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 09] add_size 200000 add_align 100000
Apr 30 12:58:32.887561 kernel: pci 0000:00:02.0: BAR 14: assigned [mem 0x10000000-0x101fffff]
Apr 30 12:58:32.887625 kernel: pci 0000:00:02.0: BAR 15: assigned [mem 0x8000000000-0x80001fffff 64bit pref]
Apr 30 12:58:32.887692 kernel: pci 0000:00:02.1: BAR 14: assigned [mem 0x10200000-0x103fffff]
Apr 30 12:58:32.887759 kernel: pci 0000:00:02.1: BAR 15: assigned [mem 0x8000200000-0x80003fffff 64bit pref]
Apr 30 12:58:32.887824 kernel: pci 0000:00:02.2: BAR 14: assigned [mem 0x10400000-0x105fffff]
Apr 30 12:58:32.887899 kernel: pci 0000:00:02.2: BAR 15: assigned [mem 0x8000400000-0x80005fffff 64bit pref]
Apr 30 12:58:32.887969 kernel: pci 0000:00:02.3: BAR 14: assigned [mem 0x10600000-0x107fffff]
Apr 30 12:58:32.889296 kernel: pci 0000:00:02.3: BAR 15: assigned [mem 0x8000600000-0x80007fffff 64bit pref]
Apr 30 12:58:32.889401 kernel: pci 0000:00:02.4: BAR 14: assigned [mem 0x10800000-0x109fffff]
Apr 30 12:58:32.889479 kernel: pci 0000:00:02.4: BAR 15: assigned [mem 0x8000800000-0x80009fffff 64bit pref]
Apr 30 12:58:32.889551 kernel: pci 0000:00:02.5: BAR 14: assigned [mem 0x10a00000-0x10bfffff]
Apr 30 12:58:32.889615 kernel: pci 0000:00:02.5: BAR 15: assigned [mem 0x8000a00000-0x8000bfffff 64bit pref]
Apr 30 12:58:32.889682 kernel: pci 0000:00:02.6: BAR 14: assigned [mem 0x10c00000-0x10dfffff]
Apr 30 12:58:32.889746 kernel: pci 0000:00:02.6: BAR 15: assigned [mem 0x8000c00000-0x8000dfffff 64bit pref]
Apr 30 12:58:32.889812 kernel: pci 0000:00:02.7: BAR 14: assigned [mem 0x10e00000-0x10ffffff]
Apr 30 12:58:32.889906 kernel: pci 0000:00:02.7: BAR 15: assigned [mem 0x8000e00000-0x8000ffffff 64bit pref]
Apr 30 12:58:32.889997 kernel: pci 0000:00:03.0: BAR 14: assigned [mem 0x11000000-0x111fffff]
Apr 30 12:58:32.891220 kernel: pci 0000:00:03.0: BAR 15: assigned [mem 0x8001000000-0x80011fffff 64bit pref]
Apr 30 12:58:32.891305 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8001200000-0x8001203fff 64bit pref]
Apr 30 12:58:32.891369 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x11200000-0x11200fff]
Apr 30 12:58:32.891437 kernel: pci 0000:00:02.0: BAR 0: assigned [mem 0x11201000-0x11201fff]
Apr 30 12:58:32.891500 kernel: pci 0000:00:02.0: BAR 13: assigned [io 0x1000-0x1fff]
Apr 30 12:58:32.891566 kernel: pci 0000:00:02.1: BAR 0: assigned [mem 0x11202000-0x11202fff]
Apr 30 12:58:32.891630 kernel: pci 0000:00:02.1: BAR 13: assigned [io 0x2000-0x2fff]
Apr 30 12:58:32.891701 kernel: pci 0000:00:02.2: BAR 0: assigned [mem 0x11203000-0x11203fff]
Apr 30 12:58:32.891763 kernel: pci 0000:00:02.2: BAR 13: assigned [io 0x3000-0x3fff]
Apr 30 12:58:32.891828 kernel: pci 0000:00:02.3: BAR 0: assigned [mem 0x11204000-0x11204fff]
Apr 30 12:58:32.891933 kernel: pci 0000:00:02.3: BAR 13: assigned [io 0x4000-0x4fff]
Apr 30 12:58:32.892011 kernel: pci 0000:00:02.4: BAR 0: assigned [mem 0x11205000-0x11205fff]
Apr 30 12:58:32.893218 kernel: pci 0000:00:02.4: BAR 13: assigned [io 0x5000-0x5fff]
Apr 30 12:58:32.893311 kernel: pci 0000:00:02.5: BAR 0: assigned [mem 0x11206000-0x11206fff]
Apr 30 12:58:32.893382 kernel: pci 0000:00:02.5: BAR 13: assigned [io 0x6000-0x6fff]
Apr 30 12:58:32.893455 kernel: pci 0000:00:02.6: BAR 0: assigned [mem 0x11207000-0x11207fff]
Apr 30 12:58:32.893520 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x7000-0x7fff]
Apr 30 12:58:32.893586 kernel: pci 0000:00:02.7: BAR 0: assigned [mem 0x11208000-0x11208fff]
Apr 30 12:58:32.893650 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x8000-0x8fff]
Apr 30 12:58:32.893716 kernel: pci 0000:00:03.0: BAR 0: assigned [mem 0x11209000-0x11209fff]
Apr 30 12:58:32.893779 kernel: pci 0000:00:03.0: BAR 13: assigned [io 0x9000-0x9fff]
Apr 30 12:58:32.893852 kernel: pci 0000:00:04.0: BAR 0: assigned [io 0xa000-0xa007]
Apr 30 12:58:32.893947 kernel: pci 0000:01:00.0: BAR 6: assigned [mem 0x10000000-0x1007ffff pref]
Apr 30 12:58:32.894055 kernel: pci 0000:01:00.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Apr 30 12:58:32.894128 kernel: pci 0000:01:00.0: BAR 1: assigned [mem 0x10080000-0x10080fff]
Apr 30 12:58:32.894195 kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
Apr 30 12:58:32.894260 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff]
Apr 30 12:58:32.894324 kernel: pci 0000:00:02.0: bridge window [mem 0x10000000-0x101fffff]
Apr 30 12:58:32.894387 kernel: pci 0000:00:02.0: bridge window [mem 0x8000000000-0x80001fffff 64bit pref]
Apr 30 12:58:32.894469 kernel: pci 0000:02:00.0: BAR 0: assigned [mem 0x10200000-0x10203fff 64bit]
Apr 30 12:58:32.894542 kernel: pci 0000:00:02.1: PCI bridge to [bus 02]
Apr 30 12:58:32.894607 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff]
Apr 30 12:58:32.894671 kernel: pci 0000:00:02.1: bridge window [mem 0x10200000-0x103fffff]
Apr 30 12:58:32.894733 kernel: pci 0000:00:02.1: bridge window [mem 0x8000200000-0x80003fffff 64bit pref]
Apr 30 12:58:32.894807 kernel: pci 0000:03:00.0: BAR 4: assigned [mem 0x8000400000-0x8000403fff 64bit pref]
Apr 30 12:58:32.894876 kernel: pci 0000:03:00.0: BAR 1: assigned [mem 0x10400000-0x10400fff]
Apr 30 12:58:32.894990 kernel: pci 0000:00:02.2: PCI bridge to [bus 03]
Apr 30 12:58:32.896471 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff]
Apr 30 12:58:32.896557 kernel: pci 0000:00:02.2: bridge window [mem 0x10400000-0x105fffff]
Apr 30 12:58:32.896622 kernel: pci 0000:00:02.2: bridge window [mem 0x8000400000-0x80005fffff 64bit pref]
Apr 30 12:58:32.896695 kernel: pci 0000:04:00.0: BAR 4: assigned [mem 0x8000600000-0x8000603fff 64bit pref]
Apr 30 12:58:32.896766 kernel: pci 0000:00:02.3: PCI bridge to [bus 04]
Apr 30 12:58:32.896831 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff]
Apr 30 12:58:32.896950 kernel: pci 0000:00:02.3: bridge window [mem 0x10600000-0x107fffff]
Apr 30 12:58:32.897116 kernel: pci 0000:00:02.3: bridge window [mem 0x8000600000-0x80007fffff 64bit pref]
Apr 30 12:58:32.897201 kernel: pci 0000:05:00.0: BAR 4: assigned [mem 0x8000800000-0x8000803fff 64bit pref]
Apr 30 12:58:32.897267 kernel: pci 0000:05:00.0: BAR 1: assigned [mem 0x10800000-0x10800fff]
Apr 30 12:58:32.897334 kernel: pci 0000:00:02.4: PCI bridge to [bus 05]
Apr 30 12:58:32.897396 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff]
Apr 30 12:58:32.897458 kernel: pci 0000:00:02.4: bridge window [mem 0x10800000-0x109fffff]
Apr 30 12:58:32.897520 kernel: pci 0000:00:02.4: bridge window [mem 0x8000800000-0x80009fffff 64bit pref]
Apr 30 12:58:32.897597 kernel: pci 0000:06:00.0: BAR 4: assigned [mem 0x8000a00000-0x8000a03fff 64bit pref]
Apr 30 12:58:32.897665 kernel: pci 0000:06:00.0: BAR 1: assigned [mem 0x10a00000-0x10a00fff]
Apr 30 12:58:32.897730 kernel: pci 0000:00:02.5: PCI bridge to [bus 06]
Apr 30 12:58:32.897793 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff]
Apr 30 12:58:32.897855 kernel: pci 0000:00:02.5: bridge window [mem 0x10a00000-0x10bfffff]
Apr 30 12:58:32.897931 kernel: pci 0000:00:02.5: bridge window [mem 0x8000a00000-0x8000bfffff 64bit pref]
Apr 30 12:58:32.898004 kernel: pci 0000:07:00.0: BAR 6: assigned [mem 0x10c00000-0x10c7ffff pref]
Apr 30 12:58:32.898082 kernel: pci 0000:07:00.0: BAR 4: assigned [mem 0x8000c00000-0x8000c03fff 64bit pref]
Apr 30 12:58:32.898154 kernel: pci 0000:07:00.0: BAR 1: assigned [mem 0x10c80000-0x10c80fff]
Apr 30 12:58:32.898219 kernel: pci 0000:00:02.6: PCI bridge to [bus 07]
Apr 30 12:58:32.898283 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff]
Apr 30 12:58:32.898345 kernel: pci 0000:00:02.6: bridge window [mem 0x10c00000-0x10dfffff]
Apr 30 12:58:32.898408 kernel: pci 0000:00:02.6: bridge window [mem 0x8000c00000-0x8000dfffff 64bit pref]
Apr 30 12:58:32.898473 kernel: pci 0000:00:02.7: PCI bridge to [bus 08]
Apr 30 12:58:32.898535 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff]
Apr 30 12:58:32.898602 kernel: pci 0000:00:02.7: bridge window [mem 0x10e00000-0x10ffffff]
Apr 30 12:58:32.898665 kernel: pci 0000:00:02.7: bridge window [mem 0x8000e00000-0x8000ffffff 64bit pref]
Apr 30 12:58:32.898730 kernel: pci 0000:00:03.0: PCI bridge to [bus 09]
Apr 30 12:58:32.898794 kernel: pci 0000:00:03.0: bridge window [io 0x9000-0x9fff]
Apr 30 12:58:32.898857 kernel: pci 0000:00:03.0: bridge window [mem 0x11000000-0x111fffff]
Apr 30 12:58:32.898935 kernel: pci 0000:00:03.0: bridge window [mem 0x8001000000-0x80011fffff 64bit pref]
Apr 30 12:58:32.899004 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Apr 30 12:58:32.899130 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Apr 30 12:58:32.899193 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Apr 30 12:58:32.899262 kernel: pci_bus 0000:01: resource 0 [io 0x1000-0x1fff]
Apr 30 12:58:32.899322 kernel: pci_bus 0000:01: resource 1 [mem 0x10000000-0x101fffff]
Apr 30 12:58:32.899382 kernel: pci_bus 0000:01: resource 2 [mem 0x8000000000-0x80001fffff 64bit pref]
Apr 30 12:58:32.899448 kernel: pci_bus 0000:02: resource 0 [io 0x2000-0x2fff]
Apr 30 12:58:32.899506 kernel: pci_bus 0000:02: resource 1 [mem 0x10200000-0x103fffff]
Apr 30 12:58:32.899568 kernel: pci_bus 0000:02: resource 2 [mem 0x8000200000-0x80003fffff 64bit pref]
Apr 30 12:58:32.899646 kernel: pci_bus 0000:03: resource 0 [io 0x3000-0x3fff]
Apr 30 12:58:32.899711 kernel: pci_bus 0000:03: resource 1 [mem 0x10400000-0x105fffff]
Apr 30 12:58:32.899770 kernel: pci_bus 0000:03: resource 2 [mem 0x8000400000-0x80005fffff 64bit pref]
Apr 30 12:58:32.901139 kernel: pci_bus 0000:04: resource 0 [io 0x4000-0x4fff]
Apr 30 12:58:32.901236 kernel: pci_bus 0000:04: resource 1 [mem 0x10600000-0x107fffff]
Apr 30 12:58:32.901298 kernel: pci_bus 0000:04: resource 2 [mem 0x8000600000-0x80007fffff 64bit pref]
Apr 30 12:58:32.901375 kernel: pci_bus 0000:05: resource 0 [io 0x5000-0x5fff]
Apr 30 12:58:32.901435 kernel: pci_bus 0000:05: resource 1 [mem 0x10800000-0x109fffff]
Apr 30 12:58:32.901495 kernel: pci_bus 0000:05: resource 2 [mem 0x8000800000-0x80009fffff 64bit pref]
Apr 30 12:58:32.901568 kernel: pci_bus 0000:06: resource 0 [io 0x6000-0x6fff]
Apr 30 12:58:32.901631 kernel: pci_bus 0000:06: resource 1 [mem 0x10a00000-0x10bfffff]
Apr 30 12:58:32.901690 kernel: pci_bus 0000:06: resource 2 [mem 0x8000a00000-0x8000bfffff 64bit pref]
Apr 30 12:58:32.901755 kernel: pci_bus 0000:07: resource 0 [io 0x7000-0x7fff]
Apr 30 12:58:32.901813 kernel: pci_bus 0000:07: resource 1 [mem 0x10c00000-0x10dfffff]
Apr 30 12:58:32.901871 kernel: pci_bus 0000:07: resource 2 [mem 0x8000c00000-0x8000dfffff 64bit pref]
Apr 30 12:58:32.901970 kernel: pci_bus 0000:08: resource 0 [io 0x8000-0x8fff]
Apr 30 12:58:32.902048 kernel: pci_bus 0000:08: resource 1 [mem 0x10e00000-0x10ffffff]
Apr 30 12:58:32.902114 kernel: pci_bus 0000:08: resource 2 [mem 0x8000e00000-0x8000ffffff 64bit pref]
Apr 30 12:58:32.902180 kernel: pci_bus 0000:09: resource 0 [io 0x9000-0x9fff]
Apr 30 12:58:32.902240 kernel: pci_bus 0000:09: resource 1 [mem 0x11000000-0x111fffff]
Apr 30 12:58:32.902298 kernel: pci_bus 0000:09: resource 2 [mem 0x8001000000-0x80011fffff 64bit pref]
Apr 30 12:58:32.902307 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Apr 30 12:58:32.902315 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Apr 30 12:58:32.902323 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Apr 30 12:58:32.902331 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Apr 30 12:58:32.902341 kernel: iommu: Default domain type: Translated
Apr 30 12:58:32.902349 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Apr 30 12:58:32.902356 kernel: efivars: Registered efivars operations
Apr 30 12:58:32.902364 kernel: vgaarb: loaded
Apr 30 12:58:32.902373 kernel: clocksource: Switched to clocksource arch_sys_counter
Apr 30 12:58:32.902380 kernel: VFS: Disk quotas dquot_6.6.0
Apr 30 12:58:32.902388 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Apr 30 12:58:32.902396 kernel: pnp: PnP ACPI init
Apr 30 12:58:32.902467 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Apr 30 12:58:32.902480 kernel: pnp: PnP ACPI: found 1 devices
Apr 30 12:58:32.902488 kernel: NET: Registered PF_INET protocol family
Apr 30 12:58:32.902496 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Apr 30 12:58:32.902504 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Apr 30 12:58:32.902511 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Apr 30 12:58:32.902519 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Apr 30 12:58:32.902527 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Apr 30 12:58:32.902534 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Apr 30 12:58:32.902543 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Apr 30 12:58:32.902551 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Apr 30 12:58:32.902558 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Apr 30 12:58:32.902632 kernel: pci 0000:02:00.0: enabling device (0000 -> 0002)
Apr 30 12:58:32.902643 kernel: PCI: CLS 0 bytes, default 64
Apr 30 12:58:32.902651 kernel: kvm [1]: HYP mode not available
Apr 30 12:58:32.902658 kernel: Initialise system trusted keyrings
Apr 30 12:58:32.902666 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Apr 30 12:58:32.902673 kernel: Key type asymmetric registered
Apr 30 12:58:32.902682 kernel: Asymmetric key parser 'x509' registered
Apr 30 12:58:32.902690 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Apr 30 12:58:32.902698 kernel: io scheduler mq-deadline registered
Apr 30 12:58:32.902705 kernel: io scheduler kyber registered
Apr 30 12:58:32.902713 kernel: io scheduler bfq registered
Apr 30 12:58:32.902721 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37
Apr 30 12:58:32.902789 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 50
Apr 30 12:58:32.902855 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 50
Apr 30 12:58:32.902964 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Apr 30 12:58:32.904118 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 51
Apr 30 12:58:32.904207 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 51
Apr 30 12:58:32.904274 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis-
LLActRep+ Apr 30 12:58:32.904342 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 52 Apr 30 12:58:32.904408 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 52 Apr 30 12:58:32.904479 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Apr 30 12:58:32.904547 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 53 Apr 30 12:58:32.904612 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 53 Apr 30 12:58:32.904676 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Apr 30 12:58:32.904741 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 54 Apr 30 12:58:32.904805 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 54 Apr 30 12:58:32.904872 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Apr 30 12:58:32.904995 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 55 Apr 30 12:58:32.905094 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 55 Apr 30 12:58:32.905161 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Apr 30 12:58:32.905230 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 56 Apr 30 12:58:32.905295 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 56 Apr 30 12:58:32.905367 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Apr 30 12:58:32.905436 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 57 Apr 30 12:58:32.905501 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 57 Apr 30 12:58:32.905565 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Apr 30 
12:58:32.905575 kernel: ACPI: \_SB_.PCI0.GSI3: Enabled at IRQ 38 Apr 30 12:58:32.905640 kernel: pcieport 0000:00:03.0: PME: Signaling with IRQ 58 Apr 30 12:58:32.905707 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 58 Apr 30 12:58:32.905770 kernel: pcieport 0000:00:03.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Apr 30 12:58:32.905780 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Apr 30 12:58:32.905788 kernel: ACPI: button: Power Button [PWRB] Apr 30 12:58:32.905796 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Apr 30 12:58:32.905865 kernel: virtio-pci 0000:04:00.0: enabling device (0000 -> 0002) Apr 30 12:58:32.905952 kernel: virtio-pci 0000:07:00.0: enabling device (0000 -> 0002) Apr 30 12:58:32.905964 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Apr 30 12:58:32.905975 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35 Apr 30 12:58:32.907250 kernel: serial 0000:00:04.0: enabling device (0000 -> 0001) Apr 30 12:58:32.907273 kernel: 0000:00:04.0: ttyS0 at I/O 0xa000 (irq = 45, base_baud = 115200) is a 16550A Apr 30 12:58:32.907281 kernel: thunder_xcv, ver 1.0 Apr 30 12:58:32.907289 kernel: thunder_bgx, ver 1.0 Apr 30 12:58:32.907296 kernel: nicpf, ver 1.0 Apr 30 12:58:32.907304 kernel: nicvf, ver 1.0 Apr 30 12:58:32.907388 kernel: rtc-efi rtc-efi.0: registered as rtc0 Apr 30 12:58:32.907449 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-04-30T12:58:32 UTC (1746017912) Apr 30 12:58:32.907466 kernel: hid: raw HID events driver (C) Jiri Kosina Apr 30 12:58:32.907473 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available Apr 30 12:58:32.907481 kernel: watchdog: Delayed init of the lockup detector failed: -19 Apr 30 12:58:32.907489 kernel: watchdog: Hard watchdog permanently disabled Apr 30 12:58:32.907496 kernel: NET: Registered PF_INET6 protocol family Apr 30 12:58:32.907504 kernel: Segment 
Routing with IPv6 Apr 30 12:58:32.907511 kernel: In-situ OAM (IOAM) with IPv6 Apr 30 12:58:32.907519 kernel: NET: Registered PF_PACKET protocol family Apr 30 12:58:32.907528 kernel: Key type dns_resolver registered Apr 30 12:58:32.907536 kernel: registered taskstats version 1 Apr 30 12:58:32.907543 kernel: Loading compiled-in X.509 certificates Apr 30 12:58:32.907551 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.88-flatcar: 4e3d8be893bce81adbd52ab54fa98214a1a14a2e' Apr 30 12:58:32.907558 kernel: Key type .fscrypt registered Apr 30 12:58:32.907565 kernel: Key type fscrypt-provisioning registered Apr 30 12:58:32.907573 kernel: ima: No TPM chip found, activating TPM-bypass! Apr 30 12:58:32.907581 kernel: ima: Allocated hash algorithm: sha1 Apr 30 12:58:32.907588 kernel: ima: No architecture policies found Apr 30 12:58:32.907597 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Apr 30 12:58:32.907605 kernel: clk: Disabling unused clocks Apr 30 12:58:32.907613 kernel: Freeing unused kernel memory: 38336K Apr 30 12:58:32.907620 kernel: Run /init as init process Apr 30 12:58:32.907627 kernel: with arguments: Apr 30 12:58:32.907635 kernel: /init Apr 30 12:58:32.907642 kernel: with environment: Apr 30 12:58:32.907649 kernel: HOME=/ Apr 30 12:58:32.907656 kernel: TERM=linux Apr 30 12:58:32.907666 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Apr 30 12:58:32.907674 systemd[1]: Successfully made /usr/ read-only. Apr 30 12:58:32.907685 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Apr 30 12:58:32.907694 systemd[1]: Detected virtualization kvm. Apr 30 12:58:32.907701 systemd[1]: Detected architecture arm64. 
Apr 30 12:58:32.907709 systemd[1]: Running in initrd.
Apr 30 12:58:32.907717 systemd[1]: No hostname configured, using default hostname.
Apr 30 12:58:32.907727 systemd[1]: Hostname set to .
Apr 30 12:58:32.907734 systemd[1]: Initializing machine ID from VM UUID.
Apr 30 12:58:32.907743 systemd[1]: Queued start job for default target initrd.target.
Apr 30 12:58:32.907752 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 30 12:58:32.907761 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 30 12:58:32.907770 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Apr 30 12:58:32.907778 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 30 12:58:32.907786 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Apr 30 12:58:32.907796 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Apr 30 12:58:32.907805 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Apr 30 12:58:32.907813 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Apr 30 12:58:32.907821 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 30 12:58:32.907829 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Apr 30 12:58:32.907837 systemd[1]: Reached target paths.target - Path Units.
Apr 30 12:58:32.907845 systemd[1]: Reached target slices.target - Slice Units.
Apr 30 12:58:32.907855 systemd[1]: Reached target swap.target - Swaps.
Apr 30 12:58:32.907863 systemd[1]: Reached target timers.target - Timer Units.
Apr 30 12:58:32.907871 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Apr 30 12:58:32.907916 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 30 12:58:32.907926 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Apr 30 12:58:32.907934 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Apr 30 12:58:32.907942 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 30 12:58:32.907950 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 30 12:58:32.907959 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 30 12:58:32.907970 systemd[1]: Reached target sockets.target - Socket Units.
Apr 30 12:58:32.907978 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Apr 30 12:58:32.907986 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 30 12:58:32.907994 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Apr 30 12:58:32.908002 systemd[1]: Starting systemd-fsck-usr.service...
Apr 30 12:58:32.908010 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 30 12:58:32.908030 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 30 12:58:32.908039 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 30 12:58:32.908049 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Apr 30 12:58:32.908086 systemd-journald[238]: Collecting audit messages is disabled.
Apr 30 12:58:32.908106 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 30 12:58:32.908117 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Apr 30 12:58:32.908125 systemd[1]: Finished systemd-fsck-usr.service.
Apr 30 12:58:32.908134 kernel: Bridge firewalling registered
Apr 30 12:58:32.908142 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Apr 30 12:58:32.908150 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Apr 30 12:58:32.908159 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 30 12:58:32.908169 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 30 12:58:32.908178 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 30 12:58:32.908186 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 30 12:58:32.908195 systemd-journald[238]: Journal started
Apr 30 12:58:32.908215 systemd-journald[238]: Runtime Journal (/run/log/journal/6e551243bb8b47cda5d1ab957ce63a41) is 8M, max 76.6M, 68.6M free.
Apr 30 12:58:32.862083 systemd-modules-load[239]: Inserted module 'overlay'
Apr 30 12:58:32.881861 systemd-modules-load[239]: Inserted module 'br_netfilter'
Apr 30 12:58:32.917053 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Apr 30 12:58:32.919042 systemd[1]: Started systemd-journald.service - Journal Service.
Apr 30 12:58:32.929256 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Apr 30 12:58:32.930314 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 30 12:58:32.934742 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 30 12:58:32.935745 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 30 12:58:32.943509 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Apr 30 12:58:32.944409 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 30 12:58:32.948214 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Apr 30 12:58:32.963029 dracut-cmdline[273]: dracut-dracut-053
Apr 30 12:58:32.963653 dracut-cmdline[273]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=984055eb0c340c9cf0fb51b368030ed72e75b7f2e065edc13766888ef0b42074
Apr 30 12:58:32.987848 systemd-resolved[276]: Positive Trust Anchors:
Apr 30 12:58:32.987865 systemd-resolved[276]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Apr 30 12:58:32.987917 systemd-resolved[276]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Apr 30 12:58:32.994273 systemd-resolved[276]: Defaulting to hostname 'linux'.
Apr 30 12:58:32.996150 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Apr 30 12:58:32.997385 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Apr 30 12:58:33.059081 kernel: SCSI subsystem initialized
Apr 30 12:58:33.062039 kernel: Loading iSCSI transport class v2.0-870.
Apr 30 12:58:33.070057 kernel: iscsi: registered transport (tcp)
Apr 30 12:58:33.084040 kernel: iscsi: registered transport (qla4xxx)
Apr 30 12:58:33.084101 kernel: QLogic iSCSI HBA Driver
Apr 30 12:58:33.128845 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Apr 30 12:58:33.134344 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Apr 30 12:58:33.155352 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Apr 30 12:58:33.155443 kernel: device-mapper: uevent: version 1.0.3
Apr 30 12:58:33.155472 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Apr 30 12:58:33.208104 kernel: raid6: neonx8 gen() 15518 MB/s
Apr 30 12:58:33.225092 kernel: raid6: neonx4 gen() 15537 MB/s
Apr 30 12:58:33.242078 kernel: raid6: neonx2 gen() 13110 MB/s
Apr 30 12:58:33.259257 kernel: raid6: neonx1 gen() 10457 MB/s
Apr 30 12:58:33.276067 kernel: raid6: int64x8 gen() 6714 MB/s
Apr 30 12:58:33.293078 kernel: raid6: int64x4 gen() 7205 MB/s
Apr 30 12:58:33.310078 kernel: raid6: int64x2 gen() 6001 MB/s
Apr 30 12:58:33.327089 kernel: raid6: int64x1 gen() 4974 MB/s
Apr 30 12:58:33.327161 kernel: raid6: using algorithm neonx4 gen() 15537 MB/s
Apr 30 12:58:33.344086 kernel: raid6: .... xor() 12215 MB/s, rmw enabled
Apr 30 12:58:33.344169 kernel: raid6: using neon recovery algorithm
Apr 30 12:58:33.349163 kernel: xor: measuring software checksum speed
Apr 30 12:58:33.349237 kernel: 8regs : 21658 MB/sec
Apr 30 12:58:33.349269 kernel: 32regs : 21710 MB/sec
Apr 30 12:58:33.349289 kernel: arm64_neon : 27832 MB/sec
Apr 30 12:58:33.350058 kernel: xor: using function: arm64_neon (27832 MB/sec)
Apr 30 12:58:33.400083 kernel: Btrfs loaded, zoned=no, fsverity=no
Apr 30 12:58:33.413094 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Apr 30 12:58:33.420395 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 30 12:58:33.452077 systemd-udevd[458]: Using default interface naming scheme 'v255'.
Apr 30 12:58:33.456045 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 30 12:58:33.465288 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Apr 30 12:58:33.482584 dracut-pre-trigger[467]: rd.md=0: removing MD RAID activation
Apr 30 12:58:33.521777 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 30 12:58:33.527220 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Apr 30 12:58:33.576389 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 30 12:58:33.588332 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Apr 30 12:58:33.606043 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Apr 30 12:58:33.609582 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 30 12:58:33.611581 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 30 12:58:33.613573 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Apr 30 12:58:33.620989 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Apr 30 12:58:33.649918 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Apr 30 12:58:33.671152 kernel: scsi host0: Virtio SCSI HBA
Apr 30 12:58:33.671924 kernel: scsi 0:0:0:0: CD-ROM QEMU QEMU CD-ROM 2.5+ PQ: 0 ANSI: 5
Apr 30 12:58:33.672616 kernel: scsi 0:0:0:1: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5
Apr 30 12:58:33.722349 kernel: sr 0:0:0:0: Power-on or device reset occurred
Apr 30 12:58:33.726236 kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 16x/50x cd/rw xa/form2 cdda tray
Apr 30 12:58:33.726384 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Apr 30 12:58:33.726394 kernel: sr 0:0:0:0: Attached scsi CD-ROM sr0
Apr 30 12:58:33.732943 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Apr 30 12:58:33.734076 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 30 12:58:33.739089 kernel: ACPI: bus type USB registered
Apr 30 12:58:33.739113 kernel: usbcore: registered new interface driver usbfs
Apr 30 12:58:33.739123 kernel: usbcore: registered new interface driver hub
Apr 30 12:58:33.739132 kernel: usbcore: registered new device driver usb
Apr 30 12:58:33.736065 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 30 12:58:33.740448 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 30 12:58:33.740629 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 30 12:58:33.742780 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Apr 30 12:58:33.751303 kernel: sd 0:0:0:1: Power-on or device reset occurred
Apr 30 12:58:33.765030 kernel: sd 0:0:0:1: [sda] 80003072 512-byte logical blocks: (41.0 GB/38.1 GiB)
Apr 30 12:58:33.765203 kernel: sd 0:0:0:1: [sda] Write Protect is off
Apr 30 12:58:33.765290 kernel: sd 0:0:0:1: [sda] Mode Sense: 63 00 00 08
Apr 30 12:58:33.765370 kernel: sd 0:0:0:1: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Apr 30 12:58:33.765454 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Apr 30 12:58:33.765476 kernel: GPT:17805311 != 80003071
Apr 30 12:58:33.765487 kernel: GPT:Alternate GPT header not at the end of the disk.
Apr 30 12:58:33.765496 kernel: GPT:17805311 != 80003071
Apr 30 12:58:33.765505 kernel: GPT: Use GNU Parted to correct GPT errors.
Apr 30 12:58:33.765516 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Apr 30 12:58:33.765526 kernel: sd 0:0:0:1: [sda] Attached SCSI disk
Apr 30 12:58:33.751765 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 30 12:58:33.775392 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller
Apr 30 12:58:33.792157 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 1
Apr 30 12:58:33.792280 kernel: xhci_hcd 0000:02:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010
Apr 30 12:58:33.792382 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller
Apr 30 12:58:33.792862 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 2
Apr 30 12:58:33.793317 kernel: xhci_hcd 0000:02:00.0: Host supports USB 3.0 SuperSpeed
Apr 30 12:58:33.793460 kernel: hub 1-0:1.0: USB hub found
Apr 30 12:58:33.793564 kernel: hub 1-0:1.0: 4 ports detected
Apr 30 12:58:33.793651 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM.
Apr 30 12:58:33.793743 kernel: hub 2-0:1.0: USB hub found
Apr 30 12:58:33.793829 kernel: hub 2-0:1.0: 4 ports detected
Apr 30 12:58:33.782310 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 30 12:58:33.791535 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 30 12:58:33.827136 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by (udev-worker) (521)
Apr 30 12:58:33.836606 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM.
Apr 30 12:58:33.838727 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 30 12:58:33.851063 kernel: BTRFS: device fsid 8f86a166-b3d6-49f7-a49d-597eaeb9f5e5 devid 1 transid 37 /dev/sda3 scanned by (udev-worker) (515)
Apr 30 12:58:33.861006 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
Apr 30 12:58:33.877201 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT.
Apr 30 12:58:33.890547 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A.
Apr 30 12:58:33.891262 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A.
Apr 30 12:58:33.900269 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Apr 30 12:58:33.908646 disk-uuid[575]: Primary Header is updated.
Apr 30 12:58:33.908646 disk-uuid[575]: Secondary Entries is updated.
Apr 30 12:58:33.908646 disk-uuid[575]: Secondary Header is updated.
Apr 30 12:58:33.917045 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Apr 30 12:58:34.027225 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd
Apr 30 12:58:34.269157 kernel: usb 1-2: new high-speed USB device number 3 using xhci_hcd
Apr 30 12:58:34.406271 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input1
Apr 30 12:58:34.406341 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:02:00.0-1/input0
Apr 30 12:58:34.409048 kernel: input: QEMU QEMU USB Keyboard as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-2/1-2:1.0/0003:0627:0001.0002/input/input2
Apr 30 12:58:34.462323 kernel: hid-generic 0003:0627:0001.0002: input,hidraw1: USB HID v1.11 Keyboard [QEMU QEMU USB Keyboard] on usb-0000:02:00.0-2/input0
Apr 30 12:58:34.463246 kernel: usbcore: registered new interface driver usbhid
Apr 30 12:58:34.463287 kernel: usbhid: USB HID core driver
Apr 30 12:58:34.931041 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Apr 30 12:58:34.931099 disk-uuid[576]: The operation has completed successfully.
Apr 30 12:58:35.011865 systemd[1]: disk-uuid.service: Deactivated successfully.
Apr 30 12:58:35.011975 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Apr 30 12:58:35.038261 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Apr 30 12:58:35.044945 sh[592]: Success
Apr 30 12:58:35.058078 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Apr 30 12:58:35.111494 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Apr 30 12:58:35.125248 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Apr 30 12:58:35.128264 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Apr 30 12:58:35.150453 kernel: BTRFS info (device dm-0): first mount of filesystem 8f86a166-b3d6-49f7-a49d-597eaeb9f5e5
Apr 30 12:58:35.150527 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Apr 30 12:58:35.150551 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Apr 30 12:58:35.151951 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Apr 30 12:58:35.151997 kernel: BTRFS info (device dm-0): using free space tree
Apr 30 12:58:35.158070 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Apr 30 12:58:35.160630 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Apr 30 12:58:35.162989 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Apr 30 12:58:35.168261 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Apr 30 12:58:35.172298 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Apr 30 12:58:35.190265 kernel: BTRFS info (device sda6): first mount of filesystem 8d8cccbd-965f-4336-afa9-06a510e76633
Apr 30 12:58:35.190326 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Apr 30 12:58:35.190337 kernel: BTRFS info (device sda6): using free space tree
Apr 30 12:58:35.194043 kernel: BTRFS info (device sda6): enabling ssd optimizations
Apr 30 12:58:35.194105 kernel: BTRFS info (device sda6): auto enabling async discard
Apr 30 12:58:35.199064 kernel: BTRFS info (device sda6): last unmount of filesystem 8d8cccbd-965f-4336-afa9-06a510e76633
Apr 30 12:58:35.201575 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Apr 30 12:58:35.207270 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Apr 30 12:58:35.294058 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Apr 30 12:58:35.304223 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Apr 30 12:58:35.310524 ignition[668]: Ignition 2.20.0
Apr 30 12:58:35.311205 ignition[668]: Stage: fetch-offline
Apr 30 12:58:35.311630 ignition[668]: no configs at "/usr/lib/ignition/base.d"
Apr 30 12:58:35.311641 ignition[668]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Apr 30 12:58:35.311817 ignition[668]: parsed url from cmdline: ""
Apr 30 12:58:35.311820 ignition[668]: no config URL provided
Apr 30 12:58:35.311826 ignition[668]: reading system config file "/usr/lib/ignition/user.ign"
Apr 30 12:58:35.311834 ignition[668]: no config at "/usr/lib/ignition/user.ign"
Apr 30 12:58:35.311840 ignition[668]: failed to fetch config: resource requires networking
Apr 30 12:58:35.315258 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Apr 30 12:58:35.313532 ignition[668]: Ignition finished successfully
Apr 30 12:58:35.329794 systemd-networkd[775]: lo: Link UP
Apr 30 12:58:35.329803 systemd-networkd[775]: lo: Gained carrier
Apr 30 12:58:35.331661 systemd-networkd[775]: Enumeration completed
Apr 30 12:58:35.332010 systemd[1]: Started systemd-networkd.service - Network Configuration.
Apr 30 12:58:35.332635 systemd-networkd[775]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 30 12:58:35.332639 systemd-networkd[775]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 30 12:58:35.333279 systemd-networkd[775]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 30 12:58:35.333282 systemd-networkd[775]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 30 12:58:35.333810 systemd-networkd[775]: eth0: Link UP
Apr 30 12:58:35.333813 systemd-networkd[775]: eth0: Gained carrier
Apr 30 12:58:35.333821 systemd-networkd[775]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 30 12:58:35.336360 systemd[1]: Reached target network.target - Network.
Apr 30 12:58:35.341597 systemd-networkd[775]: eth1: Link UP
Apr 30 12:58:35.341601 systemd-networkd[775]: eth1: Gained carrier
Apr 30 12:58:35.341609 systemd-networkd[775]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 30 12:58:35.352350 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Apr 30 12:58:35.366296 ignition[779]: Ignition 2.20.0
Apr 30 12:58:35.366307 ignition[779]: Stage: fetch
Apr 30 12:58:35.366508 ignition[779]: no configs at "/usr/lib/ignition/base.d"
Apr 30 12:58:35.366518 ignition[779]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Apr 30 12:58:35.366605 ignition[779]: parsed url from cmdline: ""
Apr 30 12:58:35.366608 ignition[779]: no config URL provided
Apr 30 12:58:35.366613 ignition[779]: reading system config file "/usr/lib/ignition/user.ign"
Apr 30 12:58:35.366619 ignition[779]: no config at "/usr/lib/ignition/user.ign"
Apr 30 12:58:35.366703 ignition[779]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #1
Apr 30 12:58:35.367587 ignition[779]: GET error: Get "http://169.254.169.254/hetzner/v1/userdata": dial tcp 169.254.169.254:80: connect: network is unreachable
Apr 30 12:58:35.372342 systemd-networkd[775]: eth1: DHCPv4 address 10.0.0.3/32, gateway 10.0.0.1 acquired from 10.0.0.1
Apr 30 12:58:35.422263 systemd-networkd[775]: eth0: DHCPv4 address 91.99.82.124/32, gateway 172.31.1.1 acquired from 172.31.1.1
Apr 30 12:58:35.568603 ignition[779]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #2
Apr 30 12:58:35.576794 ignition[779]: GET result: OK
Apr 30 12:58:35.576979 ignition[779]: parsing config with SHA512: 4ce5c6340430eb7f7a12c7ed699ab10b70a832f645d643a97bf26e89699c8434c7dfa08253957059085c392ee04c853634df9ea9dbdf73dd1644b6dcf54708be
Apr 30 12:58:35.583715 unknown[779]: fetched base config from "system"
Apr 30 12:58:35.583725 unknown[779]: fetched base config from "system"
Apr 30 12:58:35.584231 ignition[779]: fetch: fetch complete
Apr 30 12:58:35.583730 unknown[779]: fetched user config from "hetzner"
Apr 30 12:58:35.584236 ignition[779]: fetch: fetch passed
Apr 30 12:58:35.586519 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Apr 30 12:58:35.584293 ignition[779]: Ignition finished successfully
Apr 30 12:58:35.594957 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Apr 30 12:58:35.610208 ignition[786]: Ignition 2.20.0
Apr 30 12:58:35.610220 ignition[786]: Stage: kargs
Apr 30 12:58:35.610418 ignition[786]: no configs at "/usr/lib/ignition/base.d"
Apr 30 12:58:35.610427 ignition[786]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Apr 30 12:58:35.615027 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Apr 30 12:58:35.611420 ignition[786]: kargs: kargs passed
Apr 30 12:58:35.611473 ignition[786]: Ignition finished successfully
Apr 30 12:58:35.622508 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Apr 30 12:58:35.634943 ignition[792]: Ignition 2.20.0
Apr 30 12:58:35.634960 ignition[792]: Stage: disks
Apr 30 12:58:35.635234 ignition[792]: no configs at "/usr/lib/ignition/base.d"
Apr 30 12:58:35.635244 ignition[792]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Apr 30 12:58:35.637110 ignition[792]: disks: disks passed
Apr 30 12:58:35.638289 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Apr 30 12:58:35.637187 ignition[792]: Ignition finished successfully
Apr 30 12:58:35.639809 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Apr 30 12:58:35.641502 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Apr 30 12:58:35.643370 systemd[1]: Reached target local-fs.target - Local File Systems.
Apr 30 12:58:35.644312 systemd[1]: Reached target sysinit.target - System Initialization.
Apr 30 12:58:35.645575 systemd[1]: Reached target basic.target - Basic System.
Apr 30 12:58:35.652269 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Apr 30 12:58:35.668427 systemd-fsck[800]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks
Apr 30 12:58:35.672514 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Apr 30 12:58:35.679169 systemd[1]: Mounting sysroot.mount - /sysroot...
Apr 30 12:58:35.735055 kernel: EXT4-fs (sda9): mounted filesystem 597557b0-8ae6-4a5a-8e98-f3f884fcfe65 r/w with ordered data mode. Quota mode: none.
Apr 30 12:58:35.736680 systemd[1]: Mounted sysroot.mount - /sysroot.
Apr 30 12:58:35.737280 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Apr 30 12:58:35.748285 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Apr 30 12:58:35.753348 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Apr 30 12:58:35.757437 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Apr 30 12:58:35.760379 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Apr 30 12:58:35.761684 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 30 12:58:35.768213 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (808)
Apr 30 12:58:35.772040 kernel: BTRFS info (device sda6): first mount of filesystem 8d8cccbd-965f-4336-afa9-06a510e76633
Apr 30 12:58:35.772104 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Apr 30 12:58:35.772402 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Apr 30 12:58:35.775360 kernel: BTRFS info (device sda6): using free space tree
Apr 30 12:58:35.781334 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Apr 30 12:58:35.786717 kernel: BTRFS info (device sda6): enabling ssd optimizations
Apr 30 12:58:35.786774 kernel: BTRFS info (device sda6): auto enabling async discard
Apr 30 12:58:35.791544 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Apr 30 12:58:35.834661 coreos-metadata[810]: Apr 30 12:58:35.834 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/hostname: Attempt #1
Apr 30 12:58:35.836718 coreos-metadata[810]: Apr 30 12:58:35.836 INFO Fetch successful
Apr 30 12:58:35.837773 coreos-metadata[810]: Apr 30 12:58:35.837 INFO wrote hostname ci-4230-1-1-f-bd31e1b44e to /sysroot/etc/hostname
Apr 30 12:58:35.840276 initrd-setup-root[835]: cut: /sysroot/etc/passwd: No such file or directory
Apr 30 12:58:35.841969 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Apr 30 12:58:35.846170 initrd-setup-root[843]: cut: /sysroot/etc/group: No such file or directory
Apr 30 12:58:35.851087 initrd-setup-root[850]: cut: /sysroot/etc/shadow: No such file or directory
Apr 30 12:58:35.855665 initrd-setup-root[857]: cut: /sysroot/etc/gshadow: No such file or directory
Apr 30 12:58:35.955042 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Apr 30 12:58:35.960170 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Apr 30 12:58:35.964070 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Apr 30 12:58:35.972057 kernel: BTRFS info (device sda6): last unmount of filesystem 8d8cccbd-965f-4336-afa9-06a510e76633
Apr 30 12:58:35.998317 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Apr 30 12:58:35.998891 ignition[925]: INFO : Ignition 2.20.0
Apr 30 12:58:35.998891 ignition[925]: INFO : Stage: mount
Apr 30 12:58:35.998891 ignition[925]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 30 12:58:35.998891 ignition[925]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Apr 30 12:58:36.001842 ignition[925]: INFO : mount: mount passed
Apr 30 12:58:36.001842 ignition[925]: INFO : Ignition finished successfully
Apr 30 12:58:36.002852 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Apr 30 12:58:36.011188 systemd[1]: Starting ignition-files.service - Ignition (files)...
Apr 30 12:58:36.151968 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Apr 30 12:58:36.160247 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Apr 30 12:58:36.171066 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sda6 scanned by mount (937)
Apr 30 12:58:36.173111 kernel: BTRFS info (device sda6): first mount of filesystem 8d8cccbd-965f-4336-afa9-06a510e76633
Apr 30 12:58:36.173166 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Apr 30 12:58:36.173189 kernel: BTRFS info (device sda6): using free space tree
Apr 30 12:58:36.176051 kernel: BTRFS info (device sda6): enabling ssd optimizations
Apr 30 12:58:36.176127 kernel: BTRFS info (device sda6): auto enabling async discard
Apr 30 12:58:36.179283 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Apr 30 12:58:36.208946 ignition[954]: INFO : Ignition 2.20.0
Apr 30 12:58:36.208946 ignition[954]: INFO : Stage: files
Apr 30 12:58:36.210174 ignition[954]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 30 12:58:36.210174 ignition[954]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Apr 30 12:58:36.212106 ignition[954]: DEBUG : files: compiled without relabeling support, skipping
Apr 30 12:58:36.212106 ignition[954]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Apr 30 12:58:36.212106 ignition[954]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Apr 30 12:58:36.215623 ignition[954]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Apr 30 12:58:36.216596 ignition[954]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Apr 30 12:58:36.216596 ignition[954]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Apr 30 12:58:36.216195 unknown[954]: wrote ssh authorized keys file for user: core
Apr 30 12:58:36.219769 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Apr 30 12:58:36.219769 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Apr 30 12:58:36.389350 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Apr 30 12:58:36.440428 systemd-networkd[775]: eth1: Gained IPv6LL
Apr 30 12:58:36.638538 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Apr 30 12:58:36.639802 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Apr 30 12:58:36.639802 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1
Apr 30 12:58:37.209325 systemd-networkd[775]: eth0: Gained IPv6LL
Apr 30 12:58:37.232424 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Apr 30 12:58:37.309343 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Apr 30 12:58:37.310615 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Apr 30 12:58:37.310615 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Apr 30 12:58:37.310615 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Apr 30 12:58:37.310615 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Apr 30 12:58:37.310615 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 30 12:58:37.310615 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 30 12:58:37.310615 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 30 12:58:37.310615 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 30 12:58:37.310615 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Apr 30 12:58:37.310615 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Apr 30 12:58:37.310615 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Apr 30 12:58:37.310615 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Apr 30 12:58:37.310615 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Apr 30 12:58:37.310615 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-arm64.raw: attempt #1
Apr 30 12:58:37.826626 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Apr 30 12:58:38.077992 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Apr 30 12:58:38.077992 ignition[954]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Apr 30 12:58:38.080685 ignition[954]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 30 12:58:38.080685 ignition[954]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 30 12:58:38.080685 ignition[954]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Apr 30 12:58:38.080685 ignition[954]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
Apr 30 12:58:38.080685 ignition[954]: INFO : files: op(e): op(f): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Apr 30 12:58:38.080685 ignition[954]: INFO : files: op(e): op(f): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Apr 30 12:58:38.080685 ignition[954]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
Apr 30 12:58:38.080685 ignition[954]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service"
Apr 30 12:58:38.080685 ignition[954]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service"
Apr 30 12:58:38.080685 ignition[954]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json"
Apr 30 12:58:38.080685 ignition[954]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json"
Apr 30 12:58:38.080685 ignition[954]: INFO : files: files passed
Apr 30 12:58:38.080685 ignition[954]: INFO : Ignition finished successfully
Apr 30 12:58:38.082092 systemd[1]: Finished ignition-files.service - Ignition (files).
Apr 30 12:58:38.090217 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Apr 30 12:58:38.094563 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Apr 30 12:58:38.099718 systemd[1]: ignition-quench.service: Deactivated successfully.
Apr 30 12:58:38.100429 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Apr 30 12:58:38.109725 initrd-setup-root-after-ignition[982]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 30 12:58:38.109725 initrd-setup-root-after-ignition[982]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Apr 30 12:58:38.112539 initrd-setup-root-after-ignition[986]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 30 12:58:38.114966 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 30 12:58:38.117575 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Apr 30 12:58:38.127344 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Apr 30 12:58:38.157555 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Apr 30 12:58:38.158652 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Apr 30 12:58:38.160786 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Apr 30 12:58:38.161801 systemd[1]: Reached target initrd.target - Initrd Default Target.
Apr 30 12:58:38.164503 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Apr 30 12:58:38.170383 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Apr 30 12:58:38.189942 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 30 12:58:38.196301 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Apr 30 12:58:38.208074 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Apr 30 12:58:38.209796 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 30 12:58:38.211740 systemd[1]: Stopped target timers.target - Timer Units.
Apr 30 12:58:38.212778 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Apr 30 12:58:38.213075 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 30 12:58:38.214713 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Apr 30 12:58:38.216064 systemd[1]: Stopped target basic.target - Basic System.
Apr 30 12:58:38.217060 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Apr 30 12:58:38.218193 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 30 12:58:38.219335 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Apr 30 12:58:38.220477 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Apr 30 12:58:38.221581 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 30 12:58:38.222788 systemd[1]: Stopped target sysinit.target - System Initialization.
Apr 30 12:58:38.224031 systemd[1]: Stopped target local-fs.target - Local File Systems.
Apr 30 12:58:38.225072 systemd[1]: Stopped target swap.target - Swaps.
Apr 30 12:58:38.226053 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Apr 30 12:58:38.226182 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Apr 30 12:58:38.227495 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Apr 30 12:58:38.228219 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 30 12:58:38.229319 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Apr 30 12:58:38.229393 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 30 12:58:38.230649 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Apr 30 12:58:38.230775 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Apr 30 12:58:38.232406 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Apr 30 12:58:38.232538 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 30 12:58:38.233909 systemd[1]: ignition-files.service: Deactivated successfully.
Apr 30 12:58:38.234012 systemd[1]: Stopped ignition-files.service - Ignition (files).
Apr 30 12:58:38.234962 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Apr 30 12:58:38.235088 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Apr 30 12:58:38.244763 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Apr 30 12:58:38.251311 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Apr 30 12:58:38.252392 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Apr 30 12:58:38.253268 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 30 12:58:38.255504 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Apr 30 12:58:38.255616 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 30 12:58:38.259924 ignition[1006]: INFO : Ignition 2.20.0
Apr 30 12:58:38.259924 ignition[1006]: INFO : Stage: umount
Apr 30 12:58:38.259924 ignition[1006]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 30 12:58:38.259924 ignition[1006]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Apr 30 12:58:38.262712 ignition[1006]: INFO : umount: umount passed
Apr 30 12:58:38.262712 ignition[1006]: INFO : Ignition finished successfully
Apr 30 12:58:38.269212 systemd[1]: ignition-mount.service: Deactivated successfully.
Apr 30 12:58:38.271058 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Apr 30 12:58:38.277962 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Apr 30 12:58:38.278317 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Apr 30 12:58:38.285580 systemd[1]: ignition-disks.service: Deactivated successfully.
Apr 30 12:58:38.285691 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Apr 30 12:58:38.288038 systemd[1]: ignition-kargs.service: Deactivated successfully.
Apr 30 12:58:38.288136 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Apr 30 12:58:38.289440 systemd[1]: ignition-fetch.service: Deactivated successfully.
Apr 30 12:58:38.289484 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Apr 30 12:58:38.290467 systemd[1]: Stopped target network.target - Network.
Apr 30 12:58:38.291462 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Apr 30 12:58:38.291519 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Apr 30 12:58:38.293693 systemd[1]: Stopped target paths.target - Path Units.
Apr 30 12:58:38.294600 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Apr 30 12:58:38.299171 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 30 12:58:38.299887 systemd[1]: Stopped target slices.target - Slice Units.
Apr 30 12:58:38.302510 systemd[1]: Stopped target sockets.target - Socket Units.
Apr 30 12:58:38.304172 systemd[1]: iscsid.socket: Deactivated successfully.
Apr 30 12:58:38.304227 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Apr 30 12:58:38.305751 systemd[1]: iscsiuio.socket: Deactivated successfully.
Apr 30 12:58:38.305795 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 30 12:58:38.306958 systemd[1]: ignition-setup.service: Deactivated successfully.
Apr 30 12:58:38.307026 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Apr 30 12:58:38.308642 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Apr 30 12:58:38.308696 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Apr 30 12:58:38.310741 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Apr 30 12:58:38.312696 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Apr 30 12:58:38.314976 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Apr 30 12:58:38.315574 systemd[1]: sysroot-boot.service: Deactivated successfully.
Apr 30 12:58:38.315670 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Apr 30 12:58:38.317067 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Apr 30 12:58:38.317169 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Apr 30 12:58:38.321260 systemd[1]: systemd-resolved.service: Deactivated successfully.
Apr 30 12:58:38.321390 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Apr 30 12:58:38.325344 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Apr 30 12:58:38.325575 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Apr 30 12:58:38.325613 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 30 12:58:38.327897 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Apr 30 12:58:38.328930 systemd[1]: systemd-networkd.service: Deactivated successfully.
Apr 30 12:58:38.329326 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Apr 30 12:58:38.332001 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Apr 30 12:58:38.332236 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Apr 30 12:58:38.332265 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Apr 30 12:58:38.345728 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Apr 30 12:58:38.346999 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Apr 30 12:58:38.347135 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Apr 30 12:58:38.348287 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Apr 30 12:58:38.348344 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Apr 30 12:58:38.351116 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Apr 30 12:58:38.351186 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Apr 30 12:58:38.351794 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 30 12:58:38.354684 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Apr 30 12:58:38.368248 systemd[1]: network-cleanup.service: Deactivated successfully.
Apr 30 12:58:38.369299 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Apr 30 12:58:38.373431 systemd[1]: systemd-udevd.service: Deactivated successfully.
Apr 30 12:58:38.373614 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 30 12:58:38.376581 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Apr 30 12:58:38.376668 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Apr 30 12:58:38.378370 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Apr 30 12:58:38.378415 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 30 12:58:38.379799 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Apr 30 12:58:38.379878 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Apr 30 12:58:38.381472 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Apr 30 12:58:38.381520 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Apr 30 12:58:38.382943 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Apr 30 12:58:38.382997 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 30 12:58:38.390238 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Apr 30 12:58:38.390832 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Apr 30 12:58:38.390893 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 30 12:58:38.393446 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Apr 30 12:58:38.393497 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 30 12:58:38.395652 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Apr 30 12:58:38.395704 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 30 12:58:38.396641 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 30 12:58:38.396690 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 30 12:58:38.399941 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Apr 30 12:58:38.400050 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Apr 30 12:58:38.403318 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Apr 30 12:58:38.410290 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Apr 30 12:58:38.419924 systemd[1]: Switching root.
Apr 30 12:58:38.446400 systemd-journald[238]: Journal stopped
Apr 30 12:58:39.457983 systemd-journald[238]: Received SIGTERM from PID 1 (systemd).
Apr 30 12:58:39.466451 kernel: SELinux: policy capability network_peer_controls=1
Apr 30 12:58:39.466475 kernel: SELinux: policy capability open_perms=1
Apr 30 12:58:39.466485 kernel: SELinux: policy capability extended_socket_class=1
Apr 30 12:58:39.466498 kernel: SELinux: policy capability always_check_network=0
Apr 30 12:58:39.466516 kernel: SELinux: policy capability cgroup_seclabel=1
Apr 30 12:58:39.466526 kernel: SELinux: policy capability nnp_nosuid_transition=1
Apr 30 12:58:39.466535 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Apr 30 12:58:39.466544 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Apr 30 12:58:39.466553 kernel: audit: type=1403 audit(1746017918.600:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Apr 30 12:58:39.466565 systemd[1]: Successfully loaded SELinux policy in 34.305ms.
Apr 30 12:58:39.466588 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 11.309ms.
Apr 30 12:58:39.466599 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Apr 30 12:58:39.466610 systemd[1]: Detected virtualization kvm.
Apr 30 12:58:39.466621 systemd[1]: Detected architecture arm64.
Apr 30 12:58:39.466631 systemd[1]: Detected first boot.
Apr 30 12:58:39.466642 systemd[1]: Hostname set to .
Apr 30 12:58:39.466652 systemd[1]: Initializing machine ID from VM UUID.
Apr 30 12:58:39.466662 zram_generator::config[1052]: No configuration found.
Apr 30 12:58:39.466672 kernel: NET: Registered PF_VSOCK protocol family
Apr 30 12:58:39.466682 systemd[1]: Populated /etc with preset unit settings.
Apr 30 12:58:39.466692 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Apr 30 12:58:39.466705 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Apr 30 12:58:39.466716 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Apr 30 12:58:39.466725 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Apr 30 12:58:39.466737 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Apr 30 12:58:39.466747 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Apr 30 12:58:39.466757 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Apr 30 12:58:39.466766 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Apr 30 12:58:39.466781 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Apr 30 12:58:39.466811 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Apr 30 12:58:39.466824 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Apr 30 12:58:39.466834 systemd[1]: Created slice user.slice - User and Session Slice.
Apr 30 12:58:39.466844 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 30 12:58:39.466860 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 30 12:58:39.466870 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Apr 30 12:58:39.466880 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Apr 30 12:58:39.466890 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Apr 30 12:58:39.466900 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 30 12:58:39.466913 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Apr 30 12:58:39.466923 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 30 12:58:39.466933 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Apr 30 12:58:39.466944 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Apr 30 12:58:39.466954 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Apr 30 12:58:39.466964 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Apr 30 12:58:39.466978 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 30 12:58:39.466991 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Apr 30 12:58:39.467003 systemd[1]: Reached target slices.target - Slice Units.
Apr 30 12:58:39.467041 systemd[1]: Reached target swap.target - Swaps.
Apr 30 12:58:39.467054 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Apr 30 12:58:39.467064 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Apr 30 12:58:39.467074 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Apr 30 12:58:39.467084 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 30 12:58:39.467101 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 30 12:58:39.467111 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 30 12:58:39.467122 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Apr 30 12:58:39.467132 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Apr 30 12:58:39.467142 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Apr 30 12:58:39.467152 systemd[1]: Mounting media.mount - External Media Directory...
Apr 30 12:58:39.467162 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Apr 30 12:58:39.467173 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Apr 30 12:58:39.467183 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Apr 30 12:58:39.467194 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Apr 30 12:58:39.467205 systemd[1]: Reached target machines.target - Containers.
Apr 30 12:58:39.467215 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Apr 30 12:58:39.467225 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 30 12:58:39.467235 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 30 12:58:39.467245 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Apr 30 12:58:39.467257 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 30 12:58:39.467267 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Apr 30 12:58:39.467277 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 30 12:58:39.467287 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Apr 30 12:58:39.467297 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 30 12:58:39.467308 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Apr 30 12:58:39.467319 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Apr 30 12:58:39.467329 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Apr 30 12:58:39.467341 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Apr 30 12:58:39.467353 systemd[1]: Stopped systemd-fsck-usr.service.
Apr 30 12:58:39.467364 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Apr 30 12:58:39.467374 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 30 12:58:39.467384 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 30 12:58:39.467396 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Apr 30 12:58:39.467407 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Apr 30 12:58:39.467417 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Apr 30 12:58:39.467427 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Apr 30 12:58:39.467437 systemd[1]: verity-setup.service: Deactivated successfully.
Apr 30 12:58:39.467447 systemd[1]: Stopped verity-setup.service.
Apr 30 12:58:39.467457 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Apr 30 12:58:39.467467 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Apr 30 12:58:39.467477 systemd[1]: Mounted media.mount - External Media Directory.
Apr 30 12:58:39.467489 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Apr 30 12:58:39.467499 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Apr 30 12:58:39.467510 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Apr 30 12:58:39.467520 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 30 12:58:39.467531 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Apr 30 12:58:39.467541 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Apr 30 12:58:39.467553 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 30 12:58:39.467563 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 30 12:58:39.467572 kernel: ACPI: bus type drm_connector registered
Apr 30 12:58:39.467582 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 30 12:58:39.467592 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 30 12:58:39.467602 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Apr 30 12:58:39.467612 kernel: loop: module loaded
Apr 30 12:58:39.467621 kernel: fuse: init (API version 7.39)
Apr 30 12:58:39.467632 systemd[1]: modprobe@drm.service: Deactivated successfully.
Apr 30 12:58:39.467643 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Apr 30 12:58:39.467653 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Apr 30 12:58:39.467666 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Apr 30 12:58:39.467675 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 30 12:58:39.467686 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 30 12:58:39.467696 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Apr 30 12:58:39.467707 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Apr 30 12:58:39.467718 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Apr 30 12:58:39.467729 systemd[1]: Reached target network-pre.target - Preparation for Network.
Apr 30 12:58:39.467740 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Apr 30 12:58:39.467817 systemd-journald[1120]: Collecting audit messages is disabled.
Apr 30 12:58:39.467847 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Apr 30 12:58:39.467859 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Apr 30 12:58:39.467872 systemd[1]: Reached target local-fs.target - Local File Systems.
Apr 30 12:58:39.467882 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Apr 30 12:58:39.467896 systemd-journald[1120]: Journal started
Apr 30 12:58:39.467918 systemd-journald[1120]: Runtime Journal (/run/log/journal/6e551243bb8b47cda5d1ab957ce63a41) is 8M, max 76.6M, 68.6M free.
Apr 30 12:58:39.164870 systemd[1]: Queued start job for default target multi-user.target.
Apr 30 12:58:39.177285 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Apr 30 12:58:39.178147 systemd[1]: systemd-journald.service: Deactivated successfully.
Apr 30 12:58:39.479605 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Apr 30 12:58:39.486182 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Apr 30 12:58:39.486248 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 30 12:58:39.500298 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Apr 30 12:58:39.500389 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 30 12:58:39.510092 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Apr 30 12:58:39.514058 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Apr 30 12:58:39.516399 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 30 12:58:39.529749 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Apr 30 12:58:39.536297 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Apr 30 12:58:39.542132 systemd[1]: Started systemd-journald.service - Journal Service.
Apr 30 12:58:39.542236 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Apr 30 12:58:39.543231 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Apr 30 12:58:39.544311 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Apr 30 12:58:39.545720 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Apr 30 12:58:39.546909 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Apr 30 12:58:39.551063 kernel: loop0: detected capacity change from 0 to 123192
Apr 30 12:58:39.556288 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 30 12:58:39.587553 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Apr 30 12:58:39.593137 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Apr 30 12:58:39.598238 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Apr 30 12:58:39.600970 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Apr 30 12:58:39.606226 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Apr 30 12:58:39.609556 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 30 12:58:39.621411 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Apr 30 12:58:39.631685 systemd-journald[1120]: Time spent on flushing to /var/log/journal/6e551243bb8b47cda5d1ab957ce63a41 is 38.326ms for 1151 entries.
Apr 30 12:58:39.631685 systemd-journald[1120]: System Journal (/var/log/journal/6e551243bb8b47cda5d1ab957ce63a41) is 8M, max 584.8M, 576.8M free.
Apr 30 12:58:39.678953 kernel: loop1: detected capacity change from 0 to 194096
Apr 30 12:58:39.678980 systemd-journald[1120]: Received client request to flush runtime journal.
Apr 30 12:58:39.632176 systemd-tmpfiles[1154]: ACLs are not supported, ignoring.
Apr 30 12:58:39.632186 systemd-tmpfiles[1154]: ACLs are not supported, ignoring.
Apr 30 12:58:39.641651 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 30 12:58:39.652209 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Apr 30 12:58:39.663044 udevadm[1185]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Apr 30 12:58:39.683063 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Apr 30 12:58:39.686159 kernel: loop2: detected capacity change from 0 to 8
Apr 30 12:58:39.704607 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Apr 30 12:58:39.711929 kernel: loop3: detected capacity change from 0 to 113512
Apr 30 12:58:39.714482 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Apr 30 12:58:39.755740 systemd-tmpfiles[1196]: ACLs are not supported, ignoring.
Apr 30 12:58:39.755761 systemd-tmpfiles[1196]: ACLs are not supported, ignoring.
Apr 30 12:58:39.762056 kernel: loop4: detected capacity change from 0 to 123192
Apr 30 12:58:39.765654 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 30 12:58:39.780081 kernel: loop5: detected capacity change from 0 to 194096
Apr 30 12:58:39.809577 kernel: loop6: detected capacity change from 0 to 8
Apr 30 12:58:39.813072 kernel: loop7: detected capacity change from 0 to 113512
Apr 30 12:58:39.832754 (sd-merge)[1200]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-hetzner'.
Apr 30 12:58:39.834553 (sd-merge)[1200]: Merged extensions into '/usr'.
Apr 30 12:58:39.842987 systemd[1]: Reload requested from client PID 1153 ('systemd-sysext') (unit systemd-sysext.service)...
Apr 30 12:58:39.843477 systemd[1]: Reloading...
Apr 30 12:58:39.954139 zram_generator::config[1229]: No configuration found.
Apr 30 12:58:40.010073 ldconfig[1149]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Apr 30 12:58:40.099429 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 30 12:58:40.160850 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Apr 30 12:58:40.161374 systemd[1]: Reloading finished in 317 ms.
Apr 30 12:58:40.189440 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Apr 30 12:58:40.192103 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Apr 30 12:58:40.216249 systemd[1]: Starting ensure-sysext.service...
Apr 30 12:58:40.223146 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Apr 30 12:58:40.240193 systemd[1]: Reload requested from client PID 1266 ('systemctl') (unit ensure-sysext.service)...
Apr 30 12:58:40.240214 systemd[1]: Reloading...
Apr 30 12:58:40.245847 systemd-tmpfiles[1267]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Apr 30 12:58:40.246166 systemd-tmpfiles[1267]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Apr 30 12:58:40.247534 systemd-tmpfiles[1267]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Apr 30 12:58:40.248172 systemd-tmpfiles[1267]: ACLs are not supported, ignoring.
Apr 30 12:58:40.248455 systemd-tmpfiles[1267]: ACLs are not supported, ignoring.
Apr 30 12:58:40.252815 systemd-tmpfiles[1267]: Detected autofs mount point /boot during canonicalization of boot.
Apr 30 12:58:40.252925 systemd-tmpfiles[1267]: Skipping /boot
Apr 30 12:58:40.264244 systemd-tmpfiles[1267]: Detected autofs mount point /boot during canonicalization of boot.
Apr 30 12:58:40.264382 systemd-tmpfiles[1267]: Skipping /boot
Apr 30 12:58:40.321614 zram_generator::config[1296]: No configuration found.
Apr 30 12:58:40.441344 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 30 12:58:40.502762 systemd[1]: Reloading finished in 262 ms.
Apr 30 12:58:40.514034 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Apr 30 12:58:40.523952 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 30 12:58:40.537566 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Apr 30 12:58:40.542320 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Apr 30 12:58:40.547266 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Apr 30 12:58:40.553510 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Apr 30 12:58:40.557820 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 30 12:58:40.570386 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Apr 30 12:58:40.575630 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 30 12:58:40.577972 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 30 12:58:40.582083 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 30 12:58:40.587432 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 30 12:58:40.588511 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 30 12:58:40.588636 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Apr 30 12:58:40.601383 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Apr 30 12:58:40.608906 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Apr 30 12:58:40.620624 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 30 12:58:40.621557 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 30 12:58:40.621642 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Apr 30 12:58:40.630333 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Apr 30 12:58:40.632915 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Apr 30 12:58:40.634760 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 30 12:58:40.634953 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 30 12:58:40.641808 systemd-udevd[1343]: Using default interface naming scheme 'v255'.
Apr 30 12:58:40.656965 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 30 12:58:40.658100 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 30 12:58:40.660698 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 30 12:58:40.661456 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 30 12:58:40.665995 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 30 12:58:40.673176 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 30 12:58:40.675961 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Apr 30 12:58:40.677057 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 30 12:58:40.677198 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Apr 30 12:58:40.677308 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 30 12:58:40.680684 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Apr 30 12:58:40.687244 systemd[1]: Finished ensure-sysext.service.
Apr 30 12:58:40.698243 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Apr 30 12:58:40.699234 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 30 12:58:40.699421 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 30 12:58:40.700610 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Apr 30 12:58:40.705664 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 30 12:58:40.717335 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Apr 30 12:58:40.723203 augenrules[1381]: No rules
Apr 30 12:58:40.724416 systemd[1]: audit-rules.service: Deactivated successfully.
Apr 30 12:58:40.724825 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Apr 30 12:58:40.729342 systemd[1]: modprobe@drm.service: Deactivated successfully.
Apr 30 12:58:40.729553 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Apr 30 12:58:40.740195 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Apr 30 12:58:40.741891 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Apr 30 12:58:40.749104 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Apr 30 12:58:40.877144 systemd-networkd[1378]: lo: Link UP
Apr 30 12:58:40.877154 systemd-networkd[1378]: lo: Gained carrier
Apr 30 12:58:40.880439 systemd-resolved[1342]: Positive Trust Anchors:
Apr 30 12:58:40.880473 systemd-resolved[1342]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Apr 30 12:58:40.880506 systemd-resolved[1342]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Apr 30 12:58:40.880876 systemd-networkd[1378]: Enumeration completed
Apr 30 12:58:40.881001 systemd[1]: Started systemd-networkd.service - Network Configuration.
Apr 30 12:58:40.889253 systemd-resolved[1342]: Using system hostname 'ci-4230-1-1-f-bd31e1b44e'.
Apr 30 12:58:40.898302 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Apr 30 12:58:40.904257 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Apr 30 12:58:40.907313 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Apr 30 12:58:40.908076 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Apr 30 12:58:40.908932 systemd[1]: Reached target network.target - Network.
Apr 30 12:58:40.909462 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Apr 30 12:58:40.911103 systemd[1]: Reached target time-set.target - System Time Set.
Apr 30 12:58:40.930675 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped.
Apr 30 12:58:40.934454 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Apr 30 12:58:40.972048 kernel: mousedev: PS/2 mouse device common for all mice
Apr 30 12:58:40.987692 systemd-networkd[1378]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 30 12:58:40.987834 systemd-networkd[1378]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 30 12:58:40.988611 systemd-networkd[1378]: eth1: Link UP
Apr 30 12:58:40.988711 systemd-networkd[1378]: eth1: Gained carrier
Apr 30 12:58:40.988795 systemd-networkd[1378]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 30 12:58:40.996598 systemd-networkd[1378]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 30 12:58:40.996706 systemd-networkd[1378]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 30 12:58:40.998167 systemd-networkd[1378]: eth0: Link UP
Apr 30 12:58:40.998367 systemd-networkd[1378]: eth0: Gained carrier
Apr 30 12:58:40.998480 systemd-networkd[1378]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 30 12:58:41.013046 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (1397)
Apr 30 12:58:41.017170 systemd-networkd[1378]: eth1: DHCPv4 address 10.0.0.3/32, gateway 10.0.0.1 acquired from 10.0.0.1
Apr 30 12:58:41.018345 systemd-timesyncd[1374]: Network configuration changed, trying to establish connection.
Apr 30 12:58:41.056193 systemd-networkd[1378]: eth0: DHCPv4 address 91.99.82.124/32, gateway 172.31.1.1 acquired from 172.31.1.1
Apr 30 12:58:41.057186 systemd-timesyncd[1374]: Network configuration changed, trying to establish connection.
Apr 30 12:58:41.057834 systemd-timesyncd[1374]: Network configuration changed, trying to establish connection.
Apr 30 12:58:41.097867 systemd[1]: Condition check resulted in dev-virtio\x2dports-org.qemu.guest_agent.0.device - /dev/virtio-ports/org.qemu.guest_agent.0 being skipped.
Apr 30 12:58:41.102121 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
Apr 30 12:58:41.103533 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 30 12:58:41.112330 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 30 12:58:41.117459 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 30 12:58:41.124552 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 30 12:58:41.126193 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 30 12:58:41.131298 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Apr 30 12:58:41.131955 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Apr 30 12:58:41.131990 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Apr 30 12:58:41.132693 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 30 12:58:41.135092 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 30 12:58:41.136091 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 30 12:58:41.136250 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 30 12:58:41.138197 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 30 12:58:41.161660 kernel: [drm] pci: virtio-gpu-pci detected at 0000:00:01.0
Apr 30 12:58:41.161717 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Apr 30 12:58:41.161754 kernel: [drm] features: -context_init
Apr 30 12:58:41.161789 kernel: [drm] number of scanouts: 1
Apr 30 12:58:41.157176 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 30 12:58:41.160225 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 30 12:58:41.161612 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Apr 30 12:58:41.163049 kernel: [drm] number of cap sets: 0
Apr 30 12:58:41.174653 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:01.0 on minor 0
Apr 30 12:58:41.180623 kernel: Console: switching to colour frame buffer device 160x50
Apr 30 12:58:41.185116 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 30 12:58:41.197050 kernel: virtio-pci 0000:00:01.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Apr 30 12:58:41.197338 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Apr 30 12:58:41.221887 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 30 12:58:41.223167 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 30 12:58:41.227257 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Apr 30 12:58:41.234378 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 30 12:58:41.304570 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 30 12:58:41.360043 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Apr 30 12:58:41.370316 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Apr 30 12:58:41.383032 lvm[1460]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Apr 30 12:58:41.410097 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Apr 30 12:58:41.412479 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Apr 30 12:58:41.413497 systemd[1]: Reached target sysinit.target - System Initialization.
Apr 30 12:58:41.414327 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Apr 30 12:58:41.415051 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Apr 30 12:58:41.415896 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Apr 30 12:58:41.416818 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Apr 30 12:58:41.417580 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Apr 30 12:58:41.418263 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Apr 30 12:58:41.418301 systemd[1]: Reached target paths.target - Path Units.
Apr 30 12:58:41.418779 systemd[1]: Reached target timers.target - Timer Units.
Apr 30 12:58:41.421253 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Apr 30 12:58:41.425621 systemd[1]: Starting docker.socket - Docker Socket for the API...
Apr 30 12:58:41.428984 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Apr 30 12:58:41.429957 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Apr 30 12:58:41.430695 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Apr 30 12:58:41.433705 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Apr 30 12:58:41.434861 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Apr 30 12:58:41.437217 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Apr 30 12:58:41.439805 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Apr 30 12:58:41.440721 systemd[1]: Reached target sockets.target - Socket Units.
Apr 30 12:58:41.441494 systemd[1]: Reached target basic.target - Basic System.
Apr 30 12:58:41.442257 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Apr 30 12:58:41.442292 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Apr 30 12:58:41.452440 systemd[1]: Starting containerd.service - containerd container runtime... Apr 30 12:58:41.458241 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Apr 30 12:58:41.461130 lvm[1464]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Apr 30 12:58:41.467295 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Apr 30 12:58:41.473230 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Apr 30 12:58:41.480193 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Apr 30 12:58:41.480805 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Apr 30 12:58:41.482119 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Apr 30 12:58:41.487564 jq[1468]: false Apr 30 12:58:41.492295 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Apr 30 12:58:41.494203 systemd[1]: Started qemu-guest-agent.service - QEMU Guest Agent. Apr 30 12:58:41.498819 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Apr 30 12:58:41.505185 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Apr 30 12:58:41.511266 systemd[1]: Starting systemd-logind.service - User Login Management... Apr 30 12:58:41.513399 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Apr 30 12:58:41.514744 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. 
See cgroup-compat debug messages for details.
Apr 30 12:58:41.517392 systemd[1]: Starting update-engine.service - Update Engine...
Apr 30 12:58:41.522093 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Apr 30 12:58:41.525340 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Apr 30 12:58:41.527334 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Apr 30 12:58:41.527748 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Apr 30 12:58:41.552412 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Apr 30 12:58:41.552660 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Apr 30 12:58:41.558560 extend-filesystems[1469]: Found loop4
Apr 30 12:58:41.587656 extend-filesystems[1469]: Found loop5
Apr 30 12:58:41.587656 extend-filesystems[1469]: Found loop6
Apr 30 12:58:41.587656 extend-filesystems[1469]: Found loop7
Apr 30 12:58:41.587656 extend-filesystems[1469]: Found sda
Apr 30 12:58:41.587656 extend-filesystems[1469]: Found sda1
Apr 30 12:58:41.587656 extend-filesystems[1469]: Found sda2
Apr 30 12:58:41.587656 extend-filesystems[1469]: Found sda3
Apr 30 12:58:41.587656 extend-filesystems[1469]: Found usr
Apr 30 12:58:41.587656 extend-filesystems[1469]: Found sda4
Apr 30 12:58:41.587656 extend-filesystems[1469]: Found sda6
Apr 30 12:58:41.587656 extend-filesystems[1469]: Found sda7
Apr 30 12:58:41.587656 extend-filesystems[1469]: Found sda9
Apr 30 12:58:41.587656 extend-filesystems[1469]: Checking size of /dev/sda9
Apr 30 12:58:41.634582 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 9393147 blocks
Apr 30 12:58:41.634673 jq[1481]: true
Apr 30 12:58:41.583720 dbus-daemon[1467]: [system] SELinux support is enabled
Apr 30 12:58:41.583949 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Apr 30 12:58:41.638294 extend-filesystems[1469]: Resized partition /dev/sda9
Apr 30 12:58:41.643111 update_engine[1479]: I20250430 12:58:41.606187 1479 main.cc:92] Flatcar Update Engine starting
Apr 30 12:58:41.643111 update_engine[1479]: I20250430 12:58:41.625267 1479 update_check_scheduler.cc:74] Next update check in 5m43s
Apr 30 12:58:41.643793 coreos-metadata[1466]: Apr 30 12:58:41.606 INFO Fetching http://169.254.169.254/hetzner/v1/metadata: Attempt #1
Apr 30 12:58:41.643793 coreos-metadata[1466]: Apr 30 12:58:41.610 INFO Fetch successful
Apr 30 12:58:41.643793 coreos-metadata[1466]: Apr 30 12:58:41.610 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/private-networks: Attempt #1
Apr 30 12:58:41.643793 coreos-metadata[1466]: Apr 30 12:58:41.610 INFO Fetch successful
Apr 30 12:58:41.595387 systemd[1]: motdgen.service: Deactivated successfully.
Apr 30 12:58:41.645714 extend-filesystems[1506]: resize2fs 1.47.1 (20-May-2024)
Apr 30 12:58:41.595618 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Apr 30 12:58:41.657842 tar[1484]: linux-arm64/helm
Apr 30 12:58:41.597009 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Apr 30 12:58:41.658237 jq[1500]: true
Apr 30 12:58:41.598182 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Apr 30 12:58:41.599926 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Apr 30 12:58:41.599947 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Apr 30 12:58:41.625434 systemd[1]: Started update-engine.service - Update Engine.
Apr 30 12:58:41.633178 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Apr 30 12:58:41.645415 (ntainerd)[1503]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Apr 30 12:58:41.733068 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Apr 30 12:58:41.735887 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Apr 30 12:58:41.772395 systemd-logind[1477]: New seat seat0.
Apr 30 12:58:41.777071 bash[1535]: Updated "/home/core/.ssh/authorized_keys"
Apr 30 12:58:41.778057 kernel: EXT4-fs (sda9): resized filesystem to 9393147
Apr 30 12:58:41.783063 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Apr 30 12:58:41.804631 systemd-logind[1477]: Watching system buttons on /dev/input/event0 (Power Button)
Apr 30 12:58:41.804659 systemd-logind[1477]: Watching system buttons on /dev/input/event2 (QEMU QEMU USB Keyboard)
Apr 30 12:58:41.806579 extend-filesystems[1506]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required
Apr 30 12:58:41.806579 extend-filesystems[1506]: old_desc_blocks = 1, new_desc_blocks = 5
Apr 30 12:58:41.806579 extend-filesystems[1506]: The filesystem on /dev/sda9 is now 9393147 (4k) blocks long.
Apr 30 12:58:41.811250 extend-filesystems[1469]: Resized filesystem in /dev/sda9
Apr 30 12:58:41.811250 extend-filesystems[1469]: Found sr0
Apr 30 12:58:41.825075 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (1377)
Apr 30 12:58:41.825479 systemd[1]: Starting sshkeys.service...
Apr 30 12:58:41.826209 systemd[1]: Started systemd-logind.service - User Login Management.
Apr 30 12:58:41.828189 systemd[1]: extend-filesystems.service: Deactivated successfully.
Apr 30 12:58:41.828421 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Apr 30 12:58:41.859355 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Apr 30 12:58:41.871358 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Apr 30 12:58:41.949663 containerd[1503]: time="2025-04-30T12:58:41.949540080Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23
Apr 30 12:58:41.974533 coreos-metadata[1545]: Apr 30 12:58:41.973 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/public-keys: Attempt #1
Apr 30 12:58:41.978740 coreos-metadata[1545]: Apr 30 12:58:41.975 INFO Fetch successful
Apr 30 12:58:41.982876 unknown[1545]: wrote ssh authorized keys file for user: core
Apr 30 12:58:42.013626 update-ssh-keys[1554]: Updated "/home/core/.ssh/authorized_keys"
Apr 30 12:58:42.016608 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Apr 30 12:58:42.023426 systemd[1]: Finished sshkeys.service.
Apr 30 12:58:42.025994 containerd[1503]: time="2025-04-30T12:58:42.025951960Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Apr 30 12:58:42.032023 containerd[1503]: time="2025-04-30T12:58:42.030847440Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.88-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Apr 30 12:58:42.032023 containerd[1503]: time="2025-04-30T12:58:42.030896040Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Apr 30 12:58:42.032023 containerd[1503]: time="2025-04-30T12:58:42.030915760Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..."
type=io.containerd.internal.v1
Apr 30 12:58:42.032023 containerd[1503]: time="2025-04-30T12:58:42.031105640Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Apr 30 12:58:42.032023 containerd[1503]: time="2025-04-30T12:58:42.031125200Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Apr 30 12:58:42.032023 containerd[1503]: time="2025-04-30T12:58:42.031191360Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Apr 30 12:58:42.032023 containerd[1503]: time="2025-04-30T12:58:42.031202800Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Apr 30 12:58:42.032023 containerd[1503]: time="2025-04-30T12:58:42.031411000Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Apr 30 12:58:42.032023 containerd[1503]: time="2025-04-30T12:58:42.031425640Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Apr 30 12:58:42.032023 containerd[1503]: time="2025-04-30T12:58:42.031438800Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Apr 30 12:58:42.032023 containerd[1503]: time="2025-04-30T12:58:42.031447800Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Apr 30 12:58:42.032285 containerd[1503]: time="2025-04-30T12:58:42.031517920Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..."
type=io.containerd.snapshotter.v1
Apr 30 12:58:42.032285 containerd[1503]: time="2025-04-30T12:58:42.031706240Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Apr 30 12:58:42.032285 containerd[1503]: time="2025-04-30T12:58:42.031892720Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Apr 30 12:58:42.032285 containerd[1503]: time="2025-04-30T12:58:42.031912920Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Apr 30 12:58:42.032285 containerd[1503]: time="2025-04-30T12:58:42.032002320Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Apr 30 12:58:42.032285 containerd[1503]: time="2025-04-30T12:58:42.032071000Z" level=info msg="metadata content store policy set" policy=shared
Apr 30 12:58:42.041256 containerd[1503]: time="2025-04-30T12:58:42.041126280Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Apr 30 12:58:42.041256 containerd[1503]: time="2025-04-30T12:58:42.041189200Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Apr 30 12:58:42.041256 containerd[1503]: time="2025-04-30T12:58:42.041206000Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Apr 30 12:58:42.041256 containerd[1503]: time="2025-04-30T12:58:42.041222800Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Apr 30 12:58:42.041256 containerd[1503]: time="2025-04-30T12:58:42.041237640Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..."
type=io.containerd.runtime.v1
Apr 30 12:58:42.041627 containerd[1503]: time="2025-04-30T12:58:42.041419600Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Apr 30 12:58:42.041739 containerd[1503]: time="2025-04-30T12:58:42.041646280Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Apr 30 12:58:42.041895 containerd[1503]: time="2025-04-30T12:58:42.041741600Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Apr 30 12:58:42.041895 containerd[1503]: time="2025-04-30T12:58:42.041796400Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Apr 30 12:58:42.041895 containerd[1503]: time="2025-04-30T12:58:42.041813760Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Apr 30 12:58:42.041895 containerd[1503]: time="2025-04-30T12:58:42.041828480Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Apr 30 12:58:42.041895 containerd[1503]: time="2025-04-30T12:58:42.041840960Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Apr 30 12:58:42.041895 containerd[1503]: time="2025-04-30T12:58:42.041853920Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Apr 30 12:58:42.041895 containerd[1503]: time="2025-04-30T12:58:42.041866760Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Apr 30 12:58:42.041895 containerd[1503]: time="2025-04-30T12:58:42.041881320Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..."
type=io.containerd.service.v1
Apr 30 12:58:42.041895 containerd[1503]: time="2025-04-30T12:58:42.041896360Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Apr 30 12:58:42.042232 containerd[1503]: time="2025-04-30T12:58:42.041910640Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Apr 30 12:58:42.042232 containerd[1503]: time="2025-04-30T12:58:42.041923000Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Apr 30 12:58:42.042232 containerd[1503]: time="2025-04-30T12:58:42.041949320Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Apr 30 12:58:42.042232 containerd[1503]: time="2025-04-30T12:58:42.041962920Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Apr 30 12:58:42.042232 containerd[1503]: time="2025-04-30T12:58:42.041975360Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Apr 30 12:58:42.042232 containerd[1503]: time="2025-04-30T12:58:42.041987960Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Apr 30 12:58:42.042232 containerd[1503]: time="2025-04-30T12:58:42.041999240Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Apr 30 12:58:42.042232 containerd[1503]: time="2025-04-30T12:58:42.042011000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Apr 30 12:58:42.042232 containerd[1503]: time="2025-04-30T12:58:42.042038360Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Apr 30 12:58:42.042232 containerd[1503]: time="2025-04-30T12:58:42.042052280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..."
type=io.containerd.grpc.v1
Apr 30 12:58:42.042232 containerd[1503]: time="2025-04-30T12:58:42.042064560Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Apr 30 12:58:42.042232 containerd[1503]: time="2025-04-30T12:58:42.042078600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Apr 30 12:58:42.042232 containerd[1503]: time="2025-04-30T12:58:42.042089960Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Apr 30 12:58:42.042232 containerd[1503]: time="2025-04-30T12:58:42.042101760Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Apr 30 12:58:42.042828 containerd[1503]: time="2025-04-30T12:58:42.042114520Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Apr 30 12:58:42.042828 containerd[1503]: time="2025-04-30T12:58:42.042127960Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Apr 30 12:58:42.042828 containerd[1503]: time="2025-04-30T12:58:42.042149040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Apr 30 12:58:42.042828 containerd[1503]: time="2025-04-30T12:58:42.042161720Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Apr 30 12:58:42.042828 containerd[1503]: time="2025-04-30T12:58:42.042252320Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Apr 30 12:58:42.042828 containerd[1503]: time="2025-04-30T12:58:42.042431080Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Apr 30 12:58:42.042828 containerd[1503]: time="2025-04-30T12:58:42.042452680Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..."
error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Apr 30 12:58:42.042828 containerd[1503]: time="2025-04-30T12:58:42.042462840Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Apr 30 12:58:42.042828 containerd[1503]: time="2025-04-30T12:58:42.042475040Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Apr 30 12:58:42.042828 containerd[1503]: time="2025-04-30T12:58:42.042483640Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Apr 30 12:58:42.042828 containerd[1503]: time="2025-04-30T12:58:42.042499680Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Apr 30 12:58:42.042828 containerd[1503]: time="2025-04-30T12:58:42.042509880Z" level=info msg="NRI interface is disabled by configuration."
Apr 30 12:58:42.042828 containerd[1503]: time="2025-04-30T12:58:42.042519800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..."
type=io.containerd.grpc.v1
Apr 30 12:58:42.050046 containerd[1503]: time="2025-04-30T12:58:42.042981680Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false
UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Apr 30 12:58:42.050046 containerd[1503]: time="2025-04-30T12:58:42.047132400Z" level=info msg="Connect containerd service"
Apr 30 12:58:42.050046 containerd[1503]: time="2025-04-30T12:58:42.047193560Z" level=info msg="using legacy CRI server"
Apr 30 12:58:42.050046 containerd[1503]: time="2025-04-30T12:58:42.047202120Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Apr 30 12:58:42.050046 containerd[1503]: time="2025-04-30T12:58:42.047523800Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Apr 30 12:58:42.051133 containerd[1503]: time="2025-04-30T12:58:42.051098840Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Apr 30 12:58:42.051811 containerd[1503]: time="2025-04-30T12:58:42.051723680Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Apr 30 12:58:42.055048 containerd[1503]: time="2025-04-30T12:58:42.053957160Z" level=info msg=serving...
address=/run/containerd/containerd.sock
Apr 30 12:58:42.055048 containerd[1503]: time="2025-04-30T12:58:42.054404960Z" level=info msg="Start subscribing containerd event"
Apr 30 12:58:42.055048 containerd[1503]: time="2025-04-30T12:58:42.054470320Z" level=info msg="Start recovering state"
Apr 30 12:58:42.055048 containerd[1503]: time="2025-04-30T12:58:42.054546440Z" level=info msg="Start event monitor"
Apr 30 12:58:42.055048 containerd[1503]: time="2025-04-30T12:58:42.054560200Z" level=info msg="Start snapshots syncer"
Apr 30 12:58:42.055048 containerd[1503]: time="2025-04-30T12:58:42.054569840Z" level=info msg="Start cni network conf syncer for default"
Apr 30 12:58:42.055048 containerd[1503]: time="2025-04-30T12:58:42.054576680Z" level=info msg="Start streaming server"
Apr 30 12:58:42.056032 containerd[1503]: time="2025-04-30T12:58:42.055097640Z" level=info msg="containerd successfully booted in 0.107020s"
Apr 30 12:58:42.055189 systemd[1]: Started containerd.service - containerd container runtime.
Apr 30 12:58:42.110726 locksmithd[1512]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Apr 30 12:58:42.303927 tar[1484]: linux-arm64/LICENSE
Apr 30 12:58:42.304025 tar[1484]: linux-arm64/README.md
Apr 30 12:58:42.316700 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Apr 30 12:58:42.584228 systemd-networkd[1378]: eth0: Gained IPv6LL
Apr 30 12:58:42.585133 systemd-timesyncd[1374]: Network configuration changed, trying to establish connection.
Apr 30 12:58:42.591901 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Apr 30 12:58:42.594339 systemd[1]: Reached target network-online.target - Network is Online.
Apr 30 12:58:42.602124 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 30 12:58:42.604037 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Apr 30 12:58:42.656353 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Apr 30 12:58:43.032425 systemd-networkd[1378]: eth1: Gained IPv6LL
Apr 30 12:58:43.033607 systemd-timesyncd[1374]: Network configuration changed, trying to establish connection.
Apr 30 12:58:43.243493 sshd_keygen[1504]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Apr 30 12:58:43.269236 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Apr 30 12:58:43.277354 systemd[1]: Starting issuegen.service - Generate /run/issue...
Apr 30 12:58:43.290318 systemd[1]: issuegen.service: Deactivated successfully.
Apr 30 12:58:43.293097 systemd[1]: Finished issuegen.service - Generate /run/issue.
Apr 30 12:58:43.301379 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Apr 30 12:58:43.305241 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 30 12:58:43.309603 (kubelet)[1593]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 30 12:58:43.319328 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Apr 30 12:58:43.330469 systemd[1]: Started getty@tty1.service - Getty on tty1.
Apr 30 12:58:43.332913 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0.
Apr 30 12:58:43.334086 systemd[1]: Reached target getty.target - Login Prompts.
Apr 30 12:58:43.335033 systemd[1]: Reached target multi-user.target - Multi-User System.
Apr 30 12:58:43.336920 systemd[1]: Startup finished in 848ms (kernel) + 5.917s (initrd) + 4.771s (userspace) = 11.536s.
Apr 30 12:58:43.884915 kubelet[1593]: E0430 12:58:43.884842 1593 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 30 12:58:43.889440 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 30 12:58:43.889768 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 30 12:58:43.890634 systemd[1]: kubelet.service: Consumed 852ms CPU time, 239.1M memory peak.
Apr 30 12:58:54.140601 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Apr 30 12:58:54.148360 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 30 12:58:54.245927 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 30 12:58:54.250638 (kubelet)[1618]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 30 12:58:54.297506 kubelet[1618]: E0430 12:58:54.297437 1618 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 30 12:58:54.301316 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 30 12:58:54.301512 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 30 12:58:54.302134 systemd[1]: kubelet.service: Consumed 142ms CPU time, 96.7M memory peak.
Apr 30 12:59:04.552632 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Apr 30 12:59:04.559427 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 30 12:59:04.679517 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 30 12:59:04.688624 (kubelet)[1634]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 30 12:59:04.739514 kubelet[1634]: E0430 12:59:04.739407 1634 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 30 12:59:04.742765 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 30 12:59:04.742935 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 30 12:59:04.743615 systemd[1]: kubelet.service: Consumed 153ms CPU time, 94.8M memory peak.
Apr 30 12:59:13.333085 systemd-timesyncd[1374]: Contacted time server 80.153.195.191:123 (2.flatcar.pool.ntp.org).
Apr 30 12:59:13.333161 systemd-timesyncd[1374]: Initial clock synchronization to Wed 2025-04-30 12:59:13.614087 UTC.
Apr 30 12:59:14.911230 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Apr 30 12:59:14.924839 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 30 12:59:15.030516 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 30 12:59:15.035546 (kubelet)[1650]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 30 12:59:15.096706 kubelet[1650]: E0430 12:59:15.096602 1650 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 30 12:59:15.100217 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 30 12:59:15.100354 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 30 12:59:15.100939 systemd[1]: kubelet.service: Consumed 152ms CPU time, 96.8M memory peak.
Apr 30 12:59:25.160132 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
Apr 30 12:59:25.168648 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 30 12:59:25.293306 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 30 12:59:25.294813 (kubelet)[1666]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 30 12:59:25.344619 kubelet[1666]: E0430 12:59:25.344530 1666 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 30 12:59:25.346866 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 30 12:59:25.347055 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 30 12:59:25.347521 systemd[1]: kubelet.service: Consumed 149ms CPU time, 94.3M memory peak.
Apr 30 12:59:26.845975 update_engine[1479]: I20250430 12:59:26.845822 1479 update_attempter.cc:509] Updating boot flags...
Apr 30 12:59:26.901048 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (1683)
Apr 30 12:59:26.968111 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (1685)
Apr 30 12:59:27.037100 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (1685)
Apr 30 12:59:35.410181 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5.
Apr 30 12:59:35.419403 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 30 12:59:35.524244 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 30 12:59:35.529221 (kubelet)[1703]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 30 12:59:35.578964 kubelet[1703]: E0430 12:59:35.578902 1703 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 30 12:59:35.581862 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 30 12:59:35.582049 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 30 12:59:35.582870 systemd[1]: kubelet.service: Consumed 147ms CPU time, 94.1M memory peak.
Apr 30 12:59:45.660081 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6.
Apr 30 12:59:45.667440 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 30 12:59:45.796277 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 30 12:59:45.798488 (kubelet)[1719]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 30 12:59:45.851955 kubelet[1719]: E0430 12:59:45.851885 1719 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 30 12:59:45.855448 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 30 12:59:45.855841 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 30 12:59:45.856483 systemd[1]: kubelet.service: Consumed 151ms CPU time, 94.6M memory peak.
Apr 30 12:59:55.909917 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7.
Apr 30 12:59:55.916415 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 30 12:59:56.029267 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 30 12:59:56.032202 (kubelet)[1736]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 30 12:59:56.083539 kubelet[1736]: E0430 12:59:56.083490 1736 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 30 12:59:56.087250 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 30 12:59:56.087445 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 30 12:59:56.089150 systemd[1]: kubelet.service: Consumed 148ms CPU time, 96.8M memory peak.
Apr 30 13:00:06.159765 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 8.
Apr 30 13:00:06.177387 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 30 13:00:06.310318 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 30 13:00:06.312105 (kubelet)[1752]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 30 13:00:06.362513 kubelet[1752]: E0430 13:00:06.362469 1752 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 30 13:00:06.365721 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 30 13:00:06.365955 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 30 13:00:06.366754 systemd[1]: kubelet.service: Consumed 153ms CPU time, 94.3M memory peak.
Apr 30 13:00:16.410042 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 9.
Apr 30 13:00:16.422396 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 30 13:00:16.548870 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 30 13:00:16.562734 (kubelet)[1768]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 30 13:00:16.617808 kubelet[1768]: E0430 13:00:16.617742 1768 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 30 13:00:16.622363 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 30 13:00:16.623050 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 30 13:00:16.623701 systemd[1]: kubelet.service: Consumed 161ms CPU time, 96M memory peak.
Apr 30 13:00:23.059192 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Apr 30 13:00:23.065469 systemd[1]: Started sshd@0-91.99.82.124:22-139.178.89.65:40718.service - OpenSSH per-connection server daemon (139.178.89.65:40718).
Apr 30 13:00:24.072465 sshd[1777]: Accepted publickey for core from 139.178.89.65 port 40718 ssh2: RSA SHA256:qidWeGQ/AMu2DEHjNgm4r7KCFn+EUn2ITyolPPgrSbA
Apr 30 13:00:24.074684 sshd-session[1777]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 13:00:24.088355 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Apr 30 13:00:24.097392 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Apr 30 13:00:24.108843 systemd-logind[1477]: New session 1 of user core.
Apr 30 13:00:24.117132 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Apr 30 13:00:24.124386 systemd[1]: Starting user@500.service - User Manager for UID 500...
Apr 30 13:00:24.128315 (systemd)[1781]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Apr 30 13:00:24.130924 systemd-logind[1477]: New session c1 of user core.
Apr 30 13:00:24.259617 systemd[1781]: Queued start job for default target default.target.
Apr 30 13:00:24.272267 systemd[1781]: Created slice app.slice - User Application Slice.
Apr 30 13:00:24.272336 systemd[1781]: Reached target paths.target - Paths.
Apr 30 13:00:24.272414 systemd[1781]: Reached target timers.target - Timers.
Apr 30 13:00:24.275221 systemd[1781]: Starting dbus.socket - D-Bus User Message Bus Socket...
Apr 30 13:00:24.289301 systemd[1781]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Apr 30 13:00:24.289649 systemd[1781]: Reached target sockets.target - Sockets.
Apr 30 13:00:24.289739 systemd[1781]: Reached target basic.target - Basic System.
Apr 30 13:00:24.289811 systemd[1781]: Reached target default.target - Main User Target.
Apr 30 13:00:24.289859 systemd[1]: Started user@500.service - User Manager for UID 500.
Apr 30 13:00:24.289863 systemd[1781]: Startup finished in 152ms.
Apr 30 13:00:24.297270 systemd[1]: Started session-1.scope - Session 1 of User core.
Apr 30 13:00:25.005210 systemd[1]: Started sshd@1-91.99.82.124:22-139.178.89.65:40726.service - OpenSSH per-connection server daemon (139.178.89.65:40726).
Apr 30 13:00:25.988256 sshd[1792]: Accepted publickey for core from 139.178.89.65 port 40726 ssh2: RSA SHA256:qidWeGQ/AMu2DEHjNgm4r7KCFn+EUn2ITyolPPgrSbA
Apr 30 13:00:25.990612 sshd-session[1792]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 13:00:25.999535 systemd-logind[1477]: New session 2 of user core.
Apr 30 13:00:26.009396 systemd[1]: Started session-2.scope - Session 2 of User core.
Apr 30 13:00:26.659651 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 10.
Apr 30 13:00:26.674086 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 30 13:00:26.674971 sshd[1794]: Connection closed by 139.178.89.65 port 40726
Apr 30 13:00:26.673446 sshd-session[1792]: pam_unix(sshd:session): session closed for user core
Apr 30 13:00:26.680651 systemd[1]: sshd@1-91.99.82.124:22-139.178.89.65:40726.service: Deactivated successfully.
Apr 30 13:00:26.685747 systemd[1]: session-2.scope: Deactivated successfully.
Apr 30 13:00:26.687153 systemd-logind[1477]: Session 2 logged out. Waiting for processes to exit.
Apr 30 13:00:26.689895 systemd-logind[1477]: Removed session 2.
Apr 30 13:00:26.784182 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 30 13:00:26.795746 (kubelet)[1807]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 30 13:00:26.853958 kubelet[1807]: E0430 13:00:26.853900 1807 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 30 13:00:26.858321 systemd[1]: Started sshd@2-91.99.82.124:22-139.178.89.65:49012.service - OpenSSH per-connection server daemon (139.178.89.65:49012).
Apr 30 13:00:26.859850 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 30 13:00:26.860008 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 30 13:00:26.860514 systemd[1]: kubelet.service: Consumed 157ms CPU time, 94.1M memory peak.
Apr 30 13:00:27.851065 sshd[1815]: Accepted publickey for core from 139.178.89.65 port 49012 ssh2: RSA SHA256:qidWeGQ/AMu2DEHjNgm4r7KCFn+EUn2ITyolPPgrSbA
Apr 30 13:00:27.854430 sshd-session[1815]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 13:00:27.859995 systemd-logind[1477]: New session 3 of user core.
Apr 30 13:00:27.867393 systemd[1]: Started session-3.scope - Session 3 of User core.
Apr 30 13:00:28.535723 sshd[1818]: Connection closed by 139.178.89.65 port 49012
Apr 30 13:00:28.536720 sshd-session[1815]: pam_unix(sshd:session): session closed for user core
Apr 30 13:00:28.542626 systemd[1]: sshd@2-91.99.82.124:22-139.178.89.65:49012.service: Deactivated successfully.
Apr 30 13:00:28.546080 systemd[1]: session-3.scope: Deactivated successfully.
Apr 30 13:00:28.546939 systemd-logind[1477]: Session 3 logged out. Waiting for processes to exit.
Apr 30 13:00:28.549813 systemd-logind[1477]: Removed session 3.
Apr 30 13:00:28.713344 systemd[1]: Started sshd@3-91.99.82.124:22-139.178.89.65:49016.service - OpenSSH per-connection server daemon (139.178.89.65:49016).
Apr 30 13:00:29.690295 sshd[1824]: Accepted publickey for core from 139.178.89.65 port 49016 ssh2: RSA SHA256:qidWeGQ/AMu2DEHjNgm4r7KCFn+EUn2ITyolPPgrSbA
Apr 30 13:00:29.692414 sshd-session[1824]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 13:00:29.697696 systemd-logind[1477]: New session 4 of user core.
Apr 30 13:00:29.711361 systemd[1]: Started session-4.scope - Session 4 of User core.
Apr 30 13:00:30.365801 sshd[1826]: Connection closed by 139.178.89.65 port 49016
Apr 30 13:00:30.366831 sshd-session[1824]: pam_unix(sshd:session): session closed for user core
Apr 30 13:00:30.372699 systemd-logind[1477]: Session 4 logged out. Waiting for processes to exit.
Apr 30 13:00:30.373931 systemd[1]: sshd@3-91.99.82.124:22-139.178.89.65:49016.service: Deactivated successfully.
Apr 30 13:00:30.375773 systemd[1]: session-4.scope: Deactivated successfully.
Apr 30 13:00:30.377090 systemd-logind[1477]: Removed session 4.
Apr 30 13:00:30.543585 systemd[1]: Started sshd@4-91.99.82.124:22-139.178.89.65:49030.service - OpenSSH per-connection server daemon (139.178.89.65:49030).
Apr 30 13:00:31.522553 sshd[1832]: Accepted publickey for core from 139.178.89.65 port 49030 ssh2: RSA SHA256:qidWeGQ/AMu2DEHjNgm4r7KCFn+EUn2ITyolPPgrSbA
Apr 30 13:00:31.524778 sshd-session[1832]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 13:00:31.531726 systemd-logind[1477]: New session 5 of user core.
Apr 30 13:00:31.538227 systemd[1]: Started session-5.scope - Session 5 of User core.
Apr 30 13:00:32.050758 sudo[1835]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Apr 30 13:00:32.051514 sudo[1835]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 30 13:00:32.069964 sudo[1835]: pam_unix(sudo:session): session closed for user root
Apr 30 13:00:32.228648 sshd[1834]: Connection closed by 139.178.89.65 port 49030
Apr 30 13:00:32.230109 sshd-session[1832]: pam_unix(sshd:session): session closed for user core
Apr 30 13:00:32.236206 systemd[1]: sshd@4-91.99.82.124:22-139.178.89.65:49030.service: Deactivated successfully.
Apr 30 13:00:32.239005 systemd[1]: session-5.scope: Deactivated successfully.
Apr 30 13:00:32.239999 systemd-logind[1477]: Session 5 logged out. Waiting for processes to exit.
Apr 30 13:00:32.241708 systemd-logind[1477]: Removed session 5.
Apr 30 13:00:32.405499 systemd[1]: Started sshd@5-91.99.82.124:22-139.178.89.65:49032.service - OpenSSH per-connection server daemon (139.178.89.65:49032).
Apr 30 13:00:33.393448 sshd[1841]: Accepted publickey for core from 139.178.89.65 port 49032 ssh2: RSA SHA256:qidWeGQ/AMu2DEHjNgm4r7KCFn+EUn2ITyolPPgrSbA
Apr 30 13:00:33.396004 sshd-session[1841]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 13:00:33.401994 systemd-logind[1477]: New session 6 of user core.
Apr 30 13:00:33.410349 systemd[1]: Started session-6.scope - Session 6 of User core.
Apr 30 13:00:33.922001 sudo[1845]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Apr 30 13:00:33.922392 sudo[1845]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 30 13:00:33.927187 sudo[1845]: pam_unix(sudo:session): session closed for user root
Apr 30 13:00:33.933596 sudo[1844]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Apr 30 13:00:33.933967 sudo[1844]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 30 13:00:33.950295 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Apr 30 13:00:33.989507 augenrules[1867]: No rules
Apr 30 13:00:33.991382 systemd[1]: audit-rules.service: Deactivated successfully.
Apr 30 13:00:33.991618 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Apr 30 13:00:33.992690 sudo[1844]: pam_unix(sudo:session): session closed for user root
Apr 30 13:00:34.152933 sshd[1843]: Connection closed by 139.178.89.65 port 49032
Apr 30 13:00:34.153912 sshd-session[1841]: pam_unix(sshd:session): session closed for user core
Apr 30 13:00:34.160845 systemd[1]: sshd@5-91.99.82.124:22-139.178.89.65:49032.service: Deactivated successfully.
Apr 30 13:00:34.164360 systemd[1]: session-6.scope: Deactivated successfully.
Apr 30 13:00:34.166007 systemd-logind[1477]: Session 6 logged out. Waiting for processes to exit.
Apr 30 13:00:34.167401 systemd-logind[1477]: Removed session 6.
Apr 30 13:00:34.333506 systemd[1]: Started sshd@6-91.99.82.124:22-139.178.89.65:49048.service - OpenSSH per-connection server daemon (139.178.89.65:49048).
Apr 30 13:00:35.330889 sshd[1876]: Accepted publickey for core from 139.178.89.65 port 49048 ssh2: RSA SHA256:qidWeGQ/AMu2DEHjNgm4r7KCFn+EUn2ITyolPPgrSbA
Apr 30 13:00:35.333089 sshd-session[1876]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 13:00:35.339345 systemd-logind[1477]: New session 7 of user core.
Apr 30 13:00:35.347369 systemd[1]: Started session-7.scope - Session 7 of User core.
Apr 30 13:00:35.861255 sudo[1879]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Apr 30 13:00:35.861564 sudo[1879]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 30 13:00:36.199504 systemd[1]: Starting docker.service - Docker Application Container Engine...
Apr 30 13:00:36.199536 (dockerd)[1896]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Apr 30 13:00:36.440224 dockerd[1896]: time="2025-04-30T13:00:36.440151910Z" level=info msg="Starting up"
Apr 30 13:00:36.543367 dockerd[1896]: time="2025-04-30T13:00:36.542936181Z" level=info msg="Loading containers: start."
Apr 30 13:00:36.716072 kernel: Initializing XFRM netlink socket
Apr 30 13:00:36.808268 systemd-networkd[1378]: docker0: Link UP
Apr 30 13:00:36.844691 dockerd[1896]: time="2025-04-30T13:00:36.844593830Z" level=info msg="Loading containers: done."
Apr 30 13:00:36.862097 dockerd[1896]: time="2025-04-30T13:00:36.861452234Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Apr 30 13:00:36.862097 dockerd[1896]: time="2025-04-30T13:00:36.861578967Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1
Apr 30 13:00:36.862097 dockerd[1896]: time="2025-04-30T13:00:36.861771667Z" level=info msg="Daemon has completed initialization"
Apr 30 13:00:36.898560 dockerd[1896]: time="2025-04-30T13:00:36.898502383Z" level=info msg="API listen on /run/docker.sock"
Apr 30 13:00:36.898769 systemd[1]: Started docker.service - Docker Application Container Engine.
Apr 30 13:00:36.900628 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 11.
Apr 30 13:00:36.910600 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 30 13:00:37.015247 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 30 13:00:37.020253 (kubelet)[2092]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 30 13:00:37.103053 kubelet[2092]: E0430 13:00:37.102775 2092 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 30 13:00:37.108464 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 30 13:00:37.108618 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 30 13:00:37.108990 systemd[1]: kubelet.service: Consumed 146ms CPU time, 93.9M memory peak.
Apr 30 13:00:38.075990 containerd[1503]: time="2025-04-30T13:00:38.075835698Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.12\""
Apr 30 13:00:38.720839 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1986043300.mount: Deactivated successfully.
Apr 30 13:00:39.650576 containerd[1503]: time="2025-04-30T13:00:39.650523875Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 13:00:39.651851 containerd[1503]: time="2025-04-30T13:00:39.651795519Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.12: active requests=0, bytes read=29794242"
Apr 30 13:00:39.652717 containerd[1503]: time="2025-04-30T13:00:39.652624680Z" level=info msg="ImageCreate event name:\"sha256:afbe230ec4abc2c9e87f7fbe7814bde21dbe30f03252c8861c4ca9510cb43ec6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 13:00:39.661042 containerd[1503]: time="2025-04-30T13:00:39.660312913Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:4878682f7a044274d42399a6316ef452c5411aafd4ad99cc57de7235ca490e4e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 13:00:39.662034 containerd[1503]: time="2025-04-30T13:00:39.661966595Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.12\" with image id \"sha256:afbe230ec4abc2c9e87f7fbe7814bde21dbe30f03252c8861c4ca9510cb43ec6\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.12\", repo digest \"registry.k8s.io/kube-apiserver@sha256:4878682f7a044274d42399a6316ef452c5411aafd4ad99cc57de7235ca490e4e\", size \"29790950\" in 1.585970801s"
Apr 30 13:00:39.662197 containerd[1503]: time="2025-04-30T13:00:39.662177776Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.12\" returns image reference \"sha256:afbe230ec4abc2c9e87f7fbe7814bde21dbe30f03252c8861c4ca9510cb43ec6\""
Apr 30 13:00:39.683342 containerd[1503]: time="2025-04-30T13:00:39.683301284Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.12\""
Apr 30 13:00:41.037456 containerd[1503]: time="2025-04-30T13:00:41.037392488Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 13:00:41.039068 containerd[1503]: time="2025-04-30T13:00:41.038726175Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.12: active requests=0, bytes read=26855570"
Apr 30 13:00:41.040136 containerd[1503]: time="2025-04-30T13:00:41.040077944Z" level=info msg="ImageCreate event name:\"sha256:3df23260c56ff58d759f8a841c67846184e97ce81a269549ca8d14b36da14c14\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 13:00:41.044007 containerd[1503]: time="2025-04-30T13:00:41.043954994Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:3a36711d0409d565b370a18d0c19339e93d4f1b1f2b3fd382eb31c714c463b74\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 13:00:41.045843 containerd[1503]: time="2025-04-30T13:00:41.045655197Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.12\" with image id \"sha256:3df23260c56ff58d759f8a841c67846184e97ce81a269549ca8d14b36da14c14\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.12\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:3a36711d0409d565b370a18d0c19339e93d4f1b1f2b3fd382eb31c714c463b74\", size \"28297111\" in 1.362155213s"
Apr 30 13:00:41.045843 containerd[1503]: time="2025-04-30T13:00:41.045713602Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.12\" returns image reference \"sha256:3df23260c56ff58d759f8a841c67846184e97ce81a269549ca8d14b36da14c14\""
Apr 30 13:00:41.070983 containerd[1503]: time="2025-04-30T13:00:41.070890646Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.12\""
Apr 30 13:00:42.028920 containerd[1503]: time="2025-04-30T13:00:42.028836309Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 13:00:42.031695 containerd[1503]: time="2025-04-30T13:00:42.031049037Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.12: active requests=0, bytes read=16263965"
Apr 30 13:00:42.033032 containerd[1503]: time="2025-04-30T13:00:42.032954257Z" level=info msg="ImageCreate event name:\"sha256:fb0f5dac5fa74463b801d11598454c00462609b582d17052195012e5f682c2ba\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 13:00:42.038953 containerd[1503]: time="2025-04-30T13:00:42.038906419Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:521c843d01025be7d4e246ddee8cde74556eb9813c606d6db9f0f03236f6d029\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 13:00:42.040675 containerd[1503]: time="2025-04-30T13:00:42.040630541Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.12\" with image id \"sha256:fb0f5dac5fa74463b801d11598454c00462609b582d17052195012e5f682c2ba\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.12\", repo digest \"registry.k8s.io/kube-scheduler@sha256:521c843d01025be7d4e246ddee8cde74556eb9813c606d6db9f0f03236f6d029\", size \"17705524\" in 969.680809ms"
Apr 30 13:00:42.040827 containerd[1503]: time="2025-04-30T13:00:42.040811078Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.12\" returns image reference \"sha256:fb0f5dac5fa74463b801d11598454c00462609b582d17052195012e5f682c2ba\""
Apr 30 13:00:42.067146 containerd[1503]: time="2025-04-30T13:00:42.067109840Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.12\""
Apr 30 13:00:43.112602 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount866310516.mount: Deactivated successfully.
Apr 30 13:00:43.442433 containerd[1503]: time="2025-04-30T13:00:43.442309933Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 13:00:43.443459 containerd[1503]: time="2025-04-30T13:00:43.443166373Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.12: active requests=0, bytes read=25775731"
Apr 30 13:00:43.460774 containerd[1503]: time="2025-04-30T13:00:43.460704889Z" level=info msg="ImageCreate event name:\"sha256:b4250a9efcae16f8d20358e204a159844e2b7e854edad08aee8791774acbdaed\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 13:00:43.466393 containerd[1503]: time="2025-04-30T13:00:43.466251207Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ea8c7d5392acf6b0c11ebba78301e1a6c2dc6abcd7544102ed578e49d1c82f15\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 13:00:43.467477 containerd[1503]: time="2025-04-30T13:00:43.466947112Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.12\" with image id \"sha256:b4250a9efcae16f8d20358e204a159844e2b7e854edad08aee8791774acbdaed\", repo tag \"registry.k8s.io/kube-proxy:v1.30.12\", repo digest \"registry.k8s.io/kube-proxy@sha256:ea8c7d5392acf6b0c11ebba78301e1a6c2dc6abcd7544102ed578e49d1c82f15\", size \"25774724\" in 1.399645334s"
Apr 30 13:00:43.467477 containerd[1503]: time="2025-04-30T13:00:43.466991236Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.12\" returns image reference \"sha256:b4250a9efcae16f8d20358e204a159844e2b7e854edad08aee8791774acbdaed\""
Apr 30 13:00:43.492527 containerd[1503]: time="2025-04-30T13:00:43.492373964Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\""
Apr 30 13:00:44.090196 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2134938358.mount: Deactivated successfully.
Apr 30 13:00:44.722194 containerd[1503]: time="2025-04-30T13:00:44.722098601Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 13:00:44.724054 containerd[1503]: time="2025-04-30T13:00:44.723538654Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485461"
Apr 30 13:00:44.725366 containerd[1503]: time="2025-04-30T13:00:44.725296016Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 13:00:44.729679 containerd[1503]: time="2025-04-30T13:00:44.729619615Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 13:00:44.731354 containerd[1503]: time="2025-04-30T13:00:44.731206321Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.238785512s"
Apr 30 13:00:44.731354 containerd[1503]: time="2025-04-30T13:00:44.731248405Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\""
Apr 30 13:00:44.754508 containerd[1503]: time="2025-04-30T13:00:44.754105476Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\""
Apr 30 13:00:45.304634 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount749062879.mount: Deactivated successfully.
Apr 30 13:00:45.313831 containerd[1503]: time="2025-04-30T13:00:45.313760701Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 13:00:45.315350 containerd[1503]: time="2025-04-30T13:00:45.315185751Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=268841"
Apr 30 13:00:45.316484 containerd[1503]: time="2025-04-30T13:00:45.316427944Z" level=info msg="ImageCreate event name:\"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 13:00:45.321554 containerd[1503]: time="2025-04-30T13:00:45.321148336Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 13:00:45.323191 containerd[1503]: time="2025-04-30T13:00:45.323043069Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"268051\" in 568.872707ms"
Apr 30 13:00:45.323191 containerd[1503]: time="2025-04-30T13:00:45.323090753Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\""
Apr 30 13:00:45.347047 containerd[1503]: time="2025-04-30T13:00:45.346813482Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\""
Apr 30 13:00:45.956245 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1118395534.mount: Deactivated successfully.
Apr 30 13:00:47.160968 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 12.
Apr 30 13:00:47.167253 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 30 13:00:47.300134 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 30 13:00:47.305445 (kubelet)[2303]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 30 13:00:47.366035 kubelet[2303]: E0430 13:00:47.365063 2303 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 30 13:00:47.368605 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 30 13:00:47.368922 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 30 13:00:47.369490 systemd[1]: kubelet.service: Consumed 153ms CPU time, 94.5M memory peak.
Apr 30 13:00:47.657658 containerd[1503]: time="2025-04-30T13:00:47.657492381Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 13:00:47.659313 containerd[1503]: time="2025-04-30T13:00:47.659043720Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=66191552"
Apr 30 13:00:47.660335 containerd[1503]: time="2025-04-30T13:00:47.660265110Z" level=info msg="ImageCreate event name:\"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 13:00:47.664517 containerd[1503]: time="2025-04-30T13:00:47.664456686Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 13:00:47.666721 containerd[1503]: time="2025-04-30T13:00:47.666504550Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"66189079\" in 2.319651425s"
Apr 30 13:00:47.666721 containerd[1503]: time="2025-04-30T13:00:47.666562595Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\""
Apr 30 13:00:52.845470 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 30 13:00:52.846362 systemd[1]: kubelet.service: Consumed 153ms CPU time, 94.5M memory peak.
Apr 30 13:00:52.860580 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 30 13:00:52.898815 systemd[1]: Reload requested from client PID 2374 ('systemctl') (unit session-7.scope)...
Apr 30 13:00:52.898971 systemd[1]: Reloading...
Apr 30 13:00:53.059056 zram_generator::config[2437]: No configuration found.
Apr 30 13:00:53.138130 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 30 13:00:53.233214 systemd[1]: Reloading finished in 333 ms.
Apr 30 13:00:53.286102 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 30 13:00:53.294418 (kubelet)[2458]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Apr 30 13:00:53.295477 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 30 13:00:53.296863 systemd[1]: kubelet.service: Deactivated successfully.
Apr 30 13:00:53.297206 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 30 13:00:53.297264 systemd[1]: kubelet.service: Consumed 90ms CPU time, 82.3M memory peak.
Apr 30 13:00:53.304417 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 30 13:00:53.409709 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 30 13:00:53.423994 (kubelet)[2470]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Apr 30 13:00:53.475965 kubelet[2470]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Apr 30 13:00:53.475965 kubelet[2470]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Apr 30 13:00:53.475965 kubelet[2470]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Apr 30 13:00:53.476427 kubelet[2470]: I0430 13:00:53.476080 2470 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Apr 30 13:00:54.100676 kubelet[2470]: I0430 13:00:54.100588 2470 server.go:484] "Kubelet version" kubeletVersion="v1.30.1"
Apr 30 13:00:54.100676 kubelet[2470]: I0430 13:00:54.100646 2470 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Apr 30 13:00:54.101098 kubelet[2470]: I0430 13:00:54.101068 2470 server.go:927] "Client rotation is on, will bootstrap in background"
Apr 30 13:00:54.118232 kubelet[2470]: E0430 13:00:54.118200 2470 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://91.99.82.124:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 91.99.82.124:6443: connect: connection refused
Apr 30 13:00:54.118741 kubelet[2470]: I0430 13:00:54.118589 2470 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Apr 30 13:00:54.128631 kubelet[2470]: I0430 13:00:54.128589 2470 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Apr 30 13:00:54.130265 kubelet[2470]: I0430 13:00:54.130205 2470 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Apr 30 13:00:54.130727 kubelet[2470]: I0430 13:00:54.130270 2470 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4230-1-1-f-bd31e1b44e","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Apr 30 13:00:54.130727 kubelet[2470]: I0430 13:00:54.130704 2470 topology_manager.go:138] "Creating topology manager with none policy"
Apr 30 13:00:54.130727 kubelet[2470]: I0430 13:00:54.130716 2470 container_manager_linux.go:301] "Creating device plugin manager"
Apr 30 13:00:54.131466 kubelet[2470]: I0430 13:00:54.131065 2470 state_mem.go:36] "Initialized new in-memory state store"
Apr 30 13:00:54.132815 kubelet[2470]: I0430 13:00:54.132780 2470 kubelet.go:400] "Attempting to sync node with API server"
Apr 30 13:00:54.132815 kubelet[2470]: I0430 13:00:54.132818 2470 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Apr 30 13:00:54.133066 kubelet[2470]: I0430 13:00:54.133050 2470 kubelet.go:312] "Adding apiserver pod source"
Apr 30 13:00:54.133248 kubelet[2470]: I0430 13:00:54.133236 2470 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Apr 30 13:00:54.135641 kubelet[2470]: W0430 13:00:54.135581 2470 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://91.99.82.124:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230-1-1-f-bd31e1b44e&limit=500&resourceVersion=0": dial tcp 91.99.82.124:6443: connect: connection refused
Apr 30 13:00:54.137050 kubelet[2470]: E0430 13:00:54.136392 2470 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://91.99.82.124:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230-1-1-f-bd31e1b44e&limit=500&resourceVersion=0": dial tcp 91.99.82.124:6443: connect: connection refused
Apr 30 13:00:54.137050 kubelet[2470]: I0430 13:00:54.136539 2470 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1"
Apr 30 13:00:54.137050 kubelet[2470]: I0430 13:00:54.136898 2470 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Apr 30 13:00:54.137050 kubelet[2470]: W0430 13:00:54.137002 2470 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Apr 30 13:00:54.139048 kubelet[2470]: I0430 13:00:54.138083 2470 server.go:1264] "Started kubelet"
Apr 30 13:00:54.141628 kubelet[2470]: W0430 13:00:54.141569 2470 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://91.99.82.124:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 91.99.82.124:6443: connect: connection refused
Apr 30 13:00:54.141764 kubelet[2470]: E0430 13:00:54.141752 2470 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://91.99.82.124:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 91.99.82.124:6443: connect: connection refused
Apr 30 13:00:54.142185 kubelet[2470]: E0430 13:00:54.141939 2470 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://91.99.82.124:6443/api/v1/namespaces/default/events\": dial tcp 91.99.82.124:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4230-1-1-f-bd31e1b44e.183b1a2b60515339 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4230-1-1-f-bd31e1b44e,UID:ci-4230-1-1-f-bd31e1b44e,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4230-1-1-f-bd31e1b44e,},FirstTimestamp:2025-04-30 13:00:54.138057529 +0000 UTC m=+0.708099941,LastTimestamp:2025-04-30 13:00:54.138057529 +0000 UTC m=+0.708099941,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4230-1-1-f-bd31e1b44e,}"
Apr 30 13:00:54.142460 kubelet[2470]: I0430 13:00:54.142432 2470 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Apr 30 13:00:54.142668 kubelet[2470]: I0430 13:00:54.142584 2470 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Apr 30 13:00:54.142975 kubelet[2470]: I0430 13:00:54.142943 2470 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Apr 30 13:00:54.143905 kubelet[2470]: I0430 13:00:54.143886 2470 server.go:455] "Adding debug handlers to kubelet server"
Apr 30 13:00:54.147356 kubelet[2470]: E0430 13:00:54.147328 2470 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Apr 30 13:00:54.147610 kubelet[2470]: I0430 13:00:54.147592 2470 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Apr 30 13:00:54.147815 kubelet[2470]: I0430 13:00:54.147792 2470 volume_manager.go:291] "Starting Kubelet Volume Manager"
Apr 30 13:00:54.150380 kubelet[2470]: I0430 13:00:54.150350 2470 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
Apr 30 13:00:54.150471 kubelet[2470]: I0430 13:00:54.150430 2470 reconciler.go:26] "Reconciler: start to sync state"
Apr 30 13:00:54.151104 kubelet[2470]: W0430 13:00:54.151054 2470 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://91.99.82.124:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 91.99.82.124:6443: connect: connection refused
Apr 30 13:00:54.151187 kubelet[2470]: E0430 13:00:54.151119 2470 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://91.99.82.124:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 91.99.82.124:6443: connect: connection refused
Apr 30 13:00:54.151187 kubelet[2470]: E0430 13:00:54.151150 2470 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4230-1-1-f-bd31e1b44e\" not found"
Apr 30 13:00:54.152039 kubelet[2470]: E0430 13:00:54.151643 2470 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://91.99.82.124:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230-1-1-f-bd31e1b44e?timeout=10s\": dial tcp 91.99.82.124:6443: connect: connection refused" interval="200ms"
Apr 30 13:00:54.152964 kubelet[2470]: I0430 13:00:54.152940 2470 factory.go:221] Registration of the systemd container factory successfully
Apr 30 13:00:54.153406 kubelet[2470]: I0430 13:00:54.153250 2470 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Apr 30 13:00:54.155048 kubelet[2470]: I0430 13:00:54.154865 2470 factory.go:221] Registration of the containerd container factory successfully
Apr 30 13:00:54.165108 kubelet[2470]: I0430 13:00:54.165053 2470 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Apr 30 13:00:54.166205 kubelet[2470]: I0430 13:00:54.166171 2470 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Apr 30 13:00:54.166205 kubelet[2470]: I0430 13:00:54.166211 2470 status_manager.go:217] "Starting to sync pod status with apiserver"
Apr 30 13:00:54.166323 kubelet[2470]: I0430 13:00:54.166237 2470 kubelet.go:2337] "Starting kubelet main sync loop"
Apr 30 13:00:54.166323 kubelet[2470]: E0430 13:00:54.166281 2470 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Apr 30 13:00:54.175733 kubelet[2470]: W0430 13:00:54.175420 2470 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://91.99.82.124:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 91.99.82.124:6443: connect: connection refused
Apr 30 13:00:54.175733 kubelet[2470]: E0430 13:00:54.175498 2470 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://91.99.82.124:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 91.99.82.124:6443: connect: connection refused
Apr 30 13:00:54.183321 kubelet[2470]: I0430 13:00:54.183209 2470 cpu_manager.go:214] "Starting CPU manager" policy="none"
Apr 30 13:00:54.183321 kubelet[2470]: I0430 13:00:54.183227 2470 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Apr 30 13:00:54.183321 kubelet[2470]: I0430 13:00:54.183248 2470 state_mem.go:36] "Initialized new in-memory state store"
Apr 30 13:00:54.185214 kubelet[2470]: I0430 13:00:54.185171 2470 policy_none.go:49] "None policy: Start"
Apr 30 13:00:54.186052 kubelet[2470]: I0430 13:00:54.185992 2470 memory_manager.go:170] "Starting memorymanager" policy="None"
Apr 30 13:00:54.186240 kubelet[2470]: I0430 13:00:54.186225 2470 state_mem.go:35] "Initializing new in-memory state store"
Apr 30 13:00:54.193459 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Apr 30 13:00:54.211273 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Apr 30 13:00:54.215432 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Apr 30 13:00:54.223367 kubelet[2470]: I0430 13:00:54.223163 2470 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Apr 30 13:00:54.223527 kubelet[2470]: I0430 13:00:54.223382 2470 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Apr 30 13:00:54.223527 kubelet[2470]: I0430 13:00:54.223481 2470 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Apr 30 13:00:54.226423 kubelet[2470]: E0430 13:00:54.226307 2470 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4230-1-1-f-bd31e1b44e\" not found"
Apr 30 13:00:54.254195 kubelet[2470]: I0430 13:00:54.254152 2470 kubelet_node_status.go:73] "Attempting to register node" node="ci-4230-1-1-f-bd31e1b44e"
Apr 30 13:00:54.254662 kubelet[2470]: E0430 13:00:54.254571 2470 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://91.99.82.124:6443/api/v1/nodes\": dial tcp 91.99.82.124:6443: connect: connection refused" node="ci-4230-1-1-f-bd31e1b44e"
Apr 30 13:00:54.266999 kubelet[2470]: I0430 13:00:54.266919 2470 topology_manager.go:215] "Topology Admit Handler" podUID="2a1cc69b1fc92b14b8afab2c358433fd" podNamespace="kube-system" podName="kube-scheduler-ci-4230-1-1-f-bd31e1b44e"
Apr 30 13:00:54.270391 kubelet[2470]: I0430 13:00:54.270316 2470 topology_manager.go:215] "Topology Admit Handler" podUID="f6c9a4de2476976a7786c461fc3d1c1f" podNamespace="kube-system" podName="kube-apiserver-ci-4230-1-1-f-bd31e1b44e"
Apr 30 13:00:54.272659 kubelet[2470]: I0430 13:00:54.272235 2470 topology_manager.go:215] "Topology Admit Handler" podUID="f35cdd210154447b596bbc2b3bc86295" podNamespace="kube-system" podName="kube-controller-manager-ci-4230-1-1-f-bd31e1b44e"
Apr 30 13:00:54.281562 systemd[1]: Created slice kubepods-burstable-pod2a1cc69b1fc92b14b8afab2c358433fd.slice - libcontainer container kubepods-burstable-pod2a1cc69b1fc92b14b8afab2c358433fd.slice.
Apr 30 13:00:54.313231 systemd[1]: Created slice kubepods-burstable-podf6c9a4de2476976a7786c461fc3d1c1f.slice - libcontainer container kubepods-burstable-podf6c9a4de2476976a7786c461fc3d1c1f.slice.
Apr 30 13:00:54.319209 systemd[1]: Created slice kubepods-burstable-podf35cdd210154447b596bbc2b3bc86295.slice - libcontainer container kubepods-burstable-podf35cdd210154447b596bbc2b3bc86295.slice.
Apr 30 13:00:54.352475 kubelet[2470]: I0430 13:00:54.351922 2470 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/f35cdd210154447b596bbc2b3bc86295-flexvolume-dir\") pod \"kube-controller-manager-ci-4230-1-1-f-bd31e1b44e\" (UID: \"f35cdd210154447b596bbc2b3bc86295\") " pod="kube-system/kube-controller-manager-ci-4230-1-1-f-bd31e1b44e"
Apr 30 13:00:54.352475 kubelet[2470]: I0430 13:00:54.351989 2470 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f35cdd210154447b596bbc2b3bc86295-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4230-1-1-f-bd31e1b44e\" (UID: \"f35cdd210154447b596bbc2b3bc86295\") " pod="kube-system/kube-controller-manager-ci-4230-1-1-f-bd31e1b44e"
Apr 30 13:00:54.352475 kubelet[2470]: I0430 13:00:54.352059 2470 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f6c9a4de2476976a7786c461fc3d1c1f-ca-certs\") pod \"kube-apiserver-ci-4230-1-1-f-bd31e1b44e\" (UID: \"f6c9a4de2476976a7786c461fc3d1c1f\") " pod="kube-system/kube-apiserver-ci-4230-1-1-f-bd31e1b44e"
Apr 30 13:00:54.352475 kubelet[2470]: I0430 13:00:54.352093 2470 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f6c9a4de2476976a7786c461fc3d1c1f-k8s-certs\") pod \"kube-apiserver-ci-4230-1-1-f-bd31e1b44e\" (UID: \"f6c9a4de2476976a7786c461fc3d1c1f\") " pod="kube-system/kube-apiserver-ci-4230-1-1-f-bd31e1b44e"
Apr 30 13:00:54.352475 kubelet[2470]: I0430 13:00:54.352125 2470 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f6c9a4de2476976a7786c461fc3d1c1f-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4230-1-1-f-bd31e1b44e\" (UID: \"f6c9a4de2476976a7786c461fc3d1c1f\") " pod="kube-system/kube-apiserver-ci-4230-1-1-f-bd31e1b44e"
Apr 30 13:00:54.352827 kubelet[2470]: I0430 13:00:54.352160 2470 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f35cdd210154447b596bbc2b3bc86295-ca-certs\") pod \"kube-controller-manager-ci-4230-1-1-f-bd31e1b44e\" (UID: \"f35cdd210154447b596bbc2b3bc86295\") " pod="kube-system/kube-controller-manager-ci-4230-1-1-f-bd31e1b44e"
Apr 30 13:00:54.352827 kubelet[2470]: I0430 13:00:54.352195 2470 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f35cdd210154447b596bbc2b3bc86295-k8s-certs\") pod \"kube-controller-manager-ci-4230-1-1-f-bd31e1b44e\" (UID: \"f35cdd210154447b596bbc2b3bc86295\") " pod="kube-system/kube-controller-manager-ci-4230-1-1-f-bd31e1b44e"
Apr 30 13:00:54.352827 kubelet[2470]: E0430 13:00:54.352206 2470 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://91.99.82.124:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230-1-1-f-bd31e1b44e?timeout=10s\": dial tcp 91.99.82.124:6443: connect: connection refused" interval="400ms"
Apr 30 13:00:54.352827 kubelet[2470]: I0430 13:00:54.352226 2470 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f35cdd210154447b596bbc2b3bc86295-kubeconfig\") pod \"kube-controller-manager-ci-4230-1-1-f-bd31e1b44e\" (UID: \"f35cdd210154447b596bbc2b3bc86295\") " pod="kube-system/kube-controller-manager-ci-4230-1-1-f-bd31e1b44e"
Apr 30 13:00:54.352827 kubelet[2470]: I0430 13:00:54.352304 2470 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2a1cc69b1fc92b14b8afab2c358433fd-kubeconfig\") pod \"kube-scheduler-ci-4230-1-1-f-bd31e1b44e\" (UID: \"2a1cc69b1fc92b14b8afab2c358433fd\") " pod="kube-system/kube-scheduler-ci-4230-1-1-f-bd31e1b44e"
Apr 30 13:00:54.457714 kubelet[2470]: I0430 13:00:54.457203 2470 kubelet_node_status.go:73] "Attempting to register node" node="ci-4230-1-1-f-bd31e1b44e"
Apr 30 13:00:54.457714 kubelet[2470]: E0430 13:00:54.457662 2470 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://91.99.82.124:6443/api/v1/nodes\": dial tcp 91.99.82.124:6443: connect: connection refused" node="ci-4230-1-1-f-bd31e1b44e"
Apr 30 13:00:54.610373 containerd[1503]: time="2025-04-30T13:00:54.610108564Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4230-1-1-f-bd31e1b44e,Uid:2a1cc69b1fc92b14b8afab2c358433fd,Namespace:kube-system,Attempt:0,}"
Apr 30 13:00:54.618798 containerd[1503]: time="2025-04-30T13:00:54.618316077Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4230-1-1-f-bd31e1b44e,Uid:f6c9a4de2476976a7786c461fc3d1c1f,Namespace:kube-system,Attempt:0,}"
Apr 30 13:00:54.623097 containerd[1503]: time="2025-04-30T13:00:54.623041156Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4230-1-1-f-bd31e1b44e,Uid:f35cdd210154447b596bbc2b3bc86295,Namespace:kube-system,Attempt:0,}"
Apr 30 13:00:54.753762 kubelet[2470]: E0430 13:00:54.753663 2470 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://91.99.82.124:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230-1-1-f-bd31e1b44e?timeout=10s\": dial tcp 91.99.82.124:6443: connect: connection refused" interval="800ms"
Apr 30 13:00:54.861296 kubelet[2470]: I0430 13:00:54.861154 2470 kubelet_node_status.go:73] "Attempting to register node" node="ci-4230-1-1-f-bd31e1b44e"
Apr 30 13:00:54.861745 kubelet[2470]: E0430 13:00:54.861694 2470 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://91.99.82.124:6443/api/v1/nodes\": dial tcp 91.99.82.124:6443: connect: connection refused" node="ci-4230-1-1-f-bd31e1b44e"
Apr 30 13:00:55.086652 kubelet[2470]: W0430 13:00:55.086377 2470 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://91.99.82.124:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 91.99.82.124:6443: connect: connection refused
Apr 30 13:00:55.086652 kubelet[2470]: E0430 13:00:55.086501 2470 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://91.99.82.124:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 91.99.82.124:6443: connect: connection refused
Apr 30 13:00:55.200245 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1708419472.mount: Deactivated successfully.
Apr 30 13:00:55.213303 containerd[1503]: time="2025-04-30T13:00:55.213235177Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Apr 30 13:00:55.215299 containerd[1503]: time="2025-04-30T13:00:55.215157750Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269193"
Apr 30 13:00:55.217184 containerd[1503]: time="2025-04-30T13:00:55.217131200Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Apr 30 13:00:55.218675 containerd[1503]: time="2025-04-30T13:00:55.218627277Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Apr 30 13:00:55.219718 containerd[1503]: time="2025-04-30T13:00:55.219560705Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Apr 30 13:00:55.220985 containerd[1503]: time="2025-04-30T13:00:55.220901071Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Apr 30 13:00:55.221174 containerd[1503]: time="2025-04-30T13:00:55.221129458Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Apr 30 13:00:55.224857 containerd[1503]: time="2025-04-30T13:00:55.224758696Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 614.53618ms"
Apr 30 13:00:55.225633 containerd[1503]: time="2025-04-30T13:00:55.225504055Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Apr 30 13:00:55.227259 containerd[1503]: time="2025-04-30T13:00:55.227124885Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 608.685415ms"
Apr 30 13:00:55.227844 containerd[1503]: time="2025-04-30T13:00:55.227785128Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 604.629778ms"
Apr 30 13:00:55.299625 kubelet[2470]: W0430 13:00:55.299542 2470 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://91.99.82.124:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230-1-1-f-bd31e1b44e&limit=500&resourceVersion=0": dial tcp 91.99.82.124:6443: connect: connection refused
Apr 30 13:00:55.299737 kubelet[2470]: E0430 13:00:55.299638 2470 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://91.99.82.124:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230-1-1-f-bd31e1b44e&limit=500&resourceVersion=0": dial tcp 91.99.82.124:6443: connect: connection refused
Apr 30 13:00:55.359945 containerd[1503]: time="2025-04-30T13:00:55.359596082Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 30 13:00:55.359945 containerd[1503]: time="2025-04-30T13:00:55.359684877Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 30 13:00:55.359945 containerd[1503]: time="2025-04-30T13:00:55.359701276Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 30 13:00:55.359945 containerd[1503]: time="2025-04-30T13:00:55.359780192Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 30 13:00:55.360521 containerd[1503]: time="2025-04-30T13:00:55.359602442Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 30 13:00:55.360521 containerd[1503]: time="2025-04-30T13:00:55.359678717Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 30 13:00:55.360521 containerd[1503]: time="2025-04-30T13:00:55.359695796Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 30 13:00:55.360521 containerd[1503]: time="2025-04-30T13:00:55.359779872Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 30 13:00:55.369577 containerd[1503]: time="2025-04-30T13:00:55.369470373Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 30 13:00:55.370320 containerd[1503]: time="2025-04-30T13:00:55.370238130Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 30 13:00:55.370446 containerd[1503]: time="2025-04-30T13:00:55.370333765Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 30 13:00:55.371022 containerd[1503]: time="2025-04-30T13:00:55.370542554Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 30 13:00:55.388723 systemd[1]: Started cri-containerd-0bfa2d6d244f8a2311c4359eb4059c891e3b3e7317338436fcaa113ac7ef2527.scope - libcontainer container 0bfa2d6d244f8a2311c4359eb4059c891e3b3e7317338436fcaa113ac7ef2527.
Apr 30 13:00:55.394344 systemd[1]: Started cri-containerd-94bcb31e007b692b71bfa090b9b59f660575d22548351351abc66d94eac2d8fa.scope - libcontainer container 94bcb31e007b692b71bfa090b9b59f660575d22548351351abc66d94eac2d8fa.
Apr 30 13:00:55.402231 kubelet[2470]: W0430 13:00:55.402129 2470 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://91.99.82.124:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 91.99.82.124:6443: connect: connection refused
Apr 30 13:00:55.402231 kubelet[2470]: E0430 13:00:55.402204 2470 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://91.99.82.124:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 91.99.82.124:6443: connect: connection refused
Apr 30 13:00:55.405805 systemd[1]: Started cri-containerd-837049e489cbb14f4b295971d84aaa174ac1321e8965571f08bce08ed9c9d90c.scope - libcontainer container 837049e489cbb14f4b295971d84aaa174ac1321e8965571f08bce08ed9c9d90c.
Apr 30 13:00:55.449340 containerd[1503]: time="2025-04-30T13:00:55.449240380Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4230-1-1-f-bd31e1b44e,Uid:2a1cc69b1fc92b14b8afab2c358433fd,Namespace:kube-system,Attempt:0,} returns sandbox id \"0bfa2d6d244f8a2311c4359eb4059c891e3b3e7317338436fcaa113ac7ef2527\""
Apr 30 13:00:55.455340 kubelet[2470]: W0430 13:00:55.455145 2470 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://91.99.82.124:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 91.99.82.124:6443: connect: connection refused
Apr 30 13:00:55.455340 kubelet[2470]: E0430 13:00:55.455213 2470 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://91.99.82.124:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 91.99.82.124:6443: connect: connection refused
Apr 30 13:00:55.463918 containerd[1503]: time="2025-04-30T13:00:55.463646539Z" level=info msg="CreateContainer within sandbox \"0bfa2d6d244f8a2311c4359eb4059c891e3b3e7317338436fcaa113ac7ef2527\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Apr 30 13:00:55.481036 containerd[1503]: time="2025-04-30T13:00:55.480901780Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4230-1-1-f-bd31e1b44e,Uid:f6c9a4de2476976a7786c461fc3d1c1f,Namespace:kube-system,Attempt:0,} returns sandbox id \"94bcb31e007b692b71bfa090b9b59f660575d22548351351abc66d94eac2d8fa\""
Apr 30 13:00:55.488821 containerd[1503]: time="2025-04-30T13:00:55.488760663Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4230-1-1-f-bd31e1b44e,Uid:f35cdd210154447b596bbc2b3bc86295,Namespace:kube-system,Attempt:0,} returns sandbox id \"837049e489cbb14f4b295971d84aaa174ac1321e8965571f08bce08ed9c9d90c\""
Apr 30 13:00:55.489454 containerd[1503]: time="2025-04-30T13:00:55.489214998Z" level=info msg="CreateContainer within sandbox \"94bcb31e007b692b71bfa090b9b59f660575d22548351351abc66d94eac2d8fa\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Apr 30 13:00:55.493902 containerd[1503]: time="2025-04-30T13:00:55.493862259Z" level=info msg="CreateContainer within sandbox \"837049e489cbb14f4b295971d84aaa174ac1321e8965571f08bce08ed9c9d90c\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Apr 30 13:00:55.497466 containerd[1503]: time="2025-04-30T13:00:55.497401263Z" level=info msg="CreateContainer within sandbox \"0bfa2d6d244f8a2311c4359eb4059c891e3b3e7317338436fcaa113ac7ef2527\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"580fd33ff302b667169f5110a30d2aba80c05f39e962fd0985531aa39e9779ba\""
Apr 30 13:00:55.498268 containerd[1503]: time="2025-04-30T13:00:55.498164580Z" level=info msg="StartContainer for \"580fd33ff302b667169f5110a30d2aba80c05f39e962fd0985531aa39e9779ba\""
Apr 30 13:00:55.520734 containerd[1503]: time="2025-04-30T13:00:55.520680849Z" level=info msg="CreateContainer within sandbox \"837049e489cbb14f4b295971d84aaa174ac1321e8965571f08bce08ed9c9d90c\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"eedfaee4904236dec565017e4159889a03e3c7ce41d377d6a21982adc9e769f9\""
Apr 30 13:00:55.522847 containerd[1503]: time="2025-04-30T13:00:55.521478125Z" level=info msg="StartContainer for \"eedfaee4904236dec565017e4159889a03e3c7ce41d377d6a21982adc9e769f9\""
Apr 30 13:00:55.530211 containerd[1503]: time="2025-04-30T13:00:55.530166282Z" level=info msg="CreateContainer within sandbox \"94bcb31e007b692b71bfa090b9b59f660575d22548351351abc66d94eac2d8fa\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"1cc78abd33da064e166cce705fb1bffb9531b5de33ab7f257f1203331189748b\""
Apr 30 13:00:55.531086 containerd[1503]: time="2025-04-30T13:00:55.531054672Z" level=info msg="StartContainer
for \"1cc78abd33da064e166cce705fb1bffb9531b5de33ab7f257f1203331189748b\"" Apr 30 13:00:55.532215 systemd[1]: Started cri-containerd-580fd33ff302b667169f5110a30d2aba80c05f39e962fd0985531aa39e9779ba.scope - libcontainer container 580fd33ff302b667169f5110a30d2aba80c05f39e962fd0985531aa39e9779ba. Apr 30 13:00:55.554930 kubelet[2470]: E0430 13:00:55.554877 2470 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://91.99.82.124:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230-1-1-f-bd31e1b44e?timeout=10s\": dial tcp 91.99.82.124:6443: connect: connection refused" interval="1.6s" Apr 30 13:00:55.575316 systemd[1]: Started cri-containerd-1cc78abd33da064e166cce705fb1bffb9531b5de33ab7f257f1203331189748b.scope - libcontainer container 1cc78abd33da064e166cce705fb1bffb9531b5de33ab7f257f1203331189748b. Apr 30 13:00:55.578437 systemd[1]: Started cri-containerd-eedfaee4904236dec565017e4159889a03e3c7ce41d377d6a21982adc9e769f9.scope - libcontainer container eedfaee4904236dec565017e4159889a03e3c7ce41d377d6a21982adc9e769f9. 
Apr 30 13:00:55.597371 containerd[1503]: time="2025-04-30T13:00:55.597317389Z" level=info msg="StartContainer for \"580fd33ff302b667169f5110a30d2aba80c05f39e962fd0985531aa39e9779ba\" returns successfully" Apr 30 13:00:55.637934 containerd[1503]: time="2025-04-30T13:00:55.637874815Z" level=info msg="StartContainer for \"1cc78abd33da064e166cce705fb1bffb9531b5de33ab7f257f1203331189748b\" returns successfully" Apr 30 13:00:55.666615 kubelet[2470]: I0430 13:00:55.665792 2470 kubelet_node_status.go:73] "Attempting to register node" node="ci-4230-1-1-f-bd31e1b44e" Apr 30 13:00:55.667429 kubelet[2470]: E0430 13:00:55.667215 2470 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://91.99.82.124:6443/api/v1/nodes\": dial tcp 91.99.82.124:6443: connect: connection refused" node="ci-4230-1-1-f-bd31e1b44e" Apr 30 13:00:55.670900 containerd[1503]: time="2025-04-30T13:00:55.670841903Z" level=info msg="StartContainer for \"eedfaee4904236dec565017e4159889a03e3c7ce41d377d6a21982adc9e769f9\" returns successfully" Apr 30 13:00:57.270762 kubelet[2470]: I0430 13:00:57.270722 2470 kubelet_node_status.go:73] "Attempting to register node" node="ci-4230-1-1-f-bd31e1b44e" Apr 30 13:00:58.205520 kubelet[2470]: I0430 13:00:58.205474 2470 kubelet_node_status.go:76] "Successfully registered node" node="ci-4230-1-1-f-bd31e1b44e" Apr 30 13:00:58.235086 kubelet[2470]: E0430 13:00:58.235043 2470 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4230-1-1-f-bd31e1b44e\" not found" Apr 30 13:00:58.335632 kubelet[2470]: E0430 13:00:58.335585 2470 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4230-1-1-f-bd31e1b44e\" not found" Apr 30 13:00:58.436185 kubelet[2470]: E0430 13:00:58.436139 2470 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4230-1-1-f-bd31e1b44e\" not found" Apr 30 13:00:59.145105 kubelet[2470]: I0430 13:00:59.145062 2470 
apiserver.go:52] "Watching apiserver" Apr 30 13:00:59.153886 kubelet[2470]: I0430 13:00:59.151621 2470 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Apr 30 13:01:00.362708 systemd[1]: Reload requested from client PID 2749 ('systemctl') (unit session-7.scope)... Apr 30 13:01:00.362732 systemd[1]: Reloading... Apr 30 13:01:00.494167 zram_generator::config[2800]: No configuration found. Apr 30 13:01:00.598005 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 30 13:01:00.705730 systemd[1]: Reloading finished in 342 ms. Apr 30 13:01:00.741637 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 13:01:00.746887 systemd[1]: kubelet.service: Deactivated successfully. Apr 30 13:01:00.748042 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 13:01:00.748375 systemd[1]: kubelet.service: Consumed 1.125s CPU time, 111.2M memory peak. Apr 30 13:01:00.756675 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 13:01:00.891877 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 13:01:00.909139 (kubelet)[2839]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 30 13:01:00.988183 kubelet[2839]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 30 13:01:00.988686 kubelet[2839]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
Apr 30 13:01:00.988736 kubelet[2839]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 30 13:01:00.988880 kubelet[2839]: I0430 13:01:00.988839 2839 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 30 13:01:00.997442 kubelet[2839]: I0430 13:01:00.997387 2839 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Apr 30 13:01:00.997442 kubelet[2839]: I0430 13:01:00.997420 2839 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 30 13:01:00.997763 kubelet[2839]: I0430 13:01:00.997610 2839 server.go:927] "Client rotation is on, will bootstrap in background" Apr 30 13:01:01.002376 kubelet[2839]: I0430 13:01:01.002350 2839 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Apr 30 13:01:01.004359 kubelet[2839]: I0430 13:01:01.004132 2839 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 30 13:01:01.013313 kubelet[2839]: I0430 13:01:01.013228 2839 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Apr 30 13:01:01.014073 kubelet[2839]: I0430 13:01:01.013636 2839 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 30 13:01:01.014073 kubelet[2839]: I0430 13:01:01.013670 2839 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4230-1-1-f-bd31e1b44e","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Apr 30 13:01:01.014073 kubelet[2839]: I0430 13:01:01.013846 2839 topology_manager.go:138] "Creating topology manager with none policy" Apr 
30 13:01:01.014073 kubelet[2839]: I0430 13:01:01.013856 2839 container_manager_linux.go:301] "Creating device plugin manager" Apr 30 13:01:01.015402 kubelet[2839]: I0430 13:01:01.013891 2839 state_mem.go:36] "Initialized new in-memory state store" Apr 30 13:01:01.015402 kubelet[2839]: I0430 13:01:01.014308 2839 kubelet.go:400] "Attempting to sync node with API server" Apr 30 13:01:01.015402 kubelet[2839]: I0430 13:01:01.014325 2839 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 30 13:01:01.015402 kubelet[2839]: I0430 13:01:01.014355 2839 kubelet.go:312] "Adding apiserver pod source" Apr 30 13:01:01.015402 kubelet[2839]: I0430 13:01:01.014368 2839 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 30 13:01:01.018069 kubelet[2839]: I0430 13:01:01.016875 2839 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Apr 30 13:01:01.018069 kubelet[2839]: I0430 13:01:01.017263 2839 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Apr 30 13:01:01.018069 kubelet[2839]: I0430 13:01:01.017966 2839 server.go:1264] "Started kubelet" Apr 30 13:01:01.023074 kubelet[2839]: I0430 13:01:01.021994 2839 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 30 13:01:01.025020 kubelet[2839]: I0430 13:01:01.023416 2839 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 30 13:01:01.025020 kubelet[2839]: I0430 13:01:01.023626 2839 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Apr 30 13:01:01.025404 kubelet[2839]: I0430 13:01:01.025368 2839 server.go:455] "Adding debug handlers to kubelet server" Apr 30 13:01:01.025849 kubelet[2839]: I0430 13:01:01.025834 2839 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 30 13:01:01.037362 kubelet[2839]: I0430 13:01:01.037331 2839 
volume_manager.go:291] "Starting Kubelet Volume Manager" Apr 30 13:01:01.041515 kubelet[2839]: I0430 13:01:01.041486 2839 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Apr 30 13:01:01.041780 kubelet[2839]: I0430 13:01:01.041765 2839 reconciler.go:26] "Reconciler: start to sync state" Apr 30 13:01:01.051642 kubelet[2839]: I0430 13:01:01.051568 2839 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Apr 30 13:01:01.064035 kubelet[2839]: I0430 13:01:01.062585 2839 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Apr 30 13:01:01.064614 kubelet[2839]: I0430 13:01:01.064595 2839 status_manager.go:217] "Starting to sync pod status with apiserver" Apr 30 13:01:01.064724 kubelet[2839]: I0430 13:01:01.064715 2839 kubelet.go:2337] "Starting kubelet main sync loop" Apr 30 13:01:01.064843 kubelet[2839]: E0430 13:01:01.064819 2839 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 30 13:01:01.065072 kubelet[2839]: I0430 13:01:01.063729 2839 factory.go:221] Registration of the systemd container factory successfully Apr 30 13:01:01.065293 kubelet[2839]: I0430 13:01:01.065271 2839 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 30 13:01:01.085172 kubelet[2839]: E0430 13:01:01.085142 2839 kubelet.go:1467] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 30 13:01:01.087578 kubelet[2839]: I0430 13:01:01.087542 2839 factory.go:221] Registration of the containerd container factory successfully Apr 30 13:01:01.145181 kubelet[2839]: I0430 13:01:01.145130 2839 kubelet_node_status.go:73] "Attempting to register node" node="ci-4230-1-1-f-bd31e1b44e" Apr 30 13:01:01.157429 kubelet[2839]: I0430 13:01:01.157398 2839 cpu_manager.go:214] "Starting CPU manager" policy="none" Apr 30 13:01:01.157429 kubelet[2839]: I0430 13:01:01.157421 2839 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Apr 30 13:01:01.157580 kubelet[2839]: I0430 13:01:01.157447 2839 state_mem.go:36] "Initialized new in-memory state store" Apr 30 13:01:01.157629 kubelet[2839]: I0430 13:01:01.157612 2839 state_mem.go:88] "Updated default CPUSet" cpuSet="" Apr 30 13:01:01.157673 kubelet[2839]: I0430 13:01:01.157628 2839 state_mem.go:96] "Updated CPUSet assignments" assignments={} Apr 30 13:01:01.157673 kubelet[2839]: I0430 13:01:01.157648 2839 policy_none.go:49] "None policy: Start" Apr 30 13:01:01.159489 kubelet[2839]: I0430 13:01:01.158572 2839 memory_manager.go:170] "Starting memorymanager" policy="None" Apr 30 13:01:01.159489 kubelet[2839]: I0430 13:01:01.158605 2839 state_mem.go:35] "Initializing new in-memory state store" Apr 30 13:01:01.159489 kubelet[2839]: I0430 13:01:01.158796 2839 state_mem.go:75] "Updated machine memory state" Apr 30 13:01:01.163407 kubelet[2839]: I0430 13:01:01.162929 2839 kubelet_node_status.go:112] "Node was previously registered" node="ci-4230-1-1-f-bd31e1b44e" Apr 30 13:01:01.165001 kubelet[2839]: I0430 13:01:01.164976 2839 kubelet_node_status.go:76] "Successfully registered node" node="ci-4230-1-1-f-bd31e1b44e" Apr 30 13:01:01.165269 kubelet[2839]: E0430 13:01:01.165248 2839 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Apr 30 13:01:01.165359 kubelet[2839]: 
I0430 13:01:01.165163 2839 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Apr 30 13:01:01.165609 kubelet[2839]: I0430 13:01:01.165573 2839 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 30 13:01:01.165771 kubelet[2839]: I0430 13:01:01.165756 2839 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 30 13:01:01.360300 sudo[2869]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Apr 30 13:01:01.361127 sudo[2869]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Apr 30 13:01:01.365575 kubelet[2839]: I0430 13:01:01.365533 2839 topology_manager.go:215] "Topology Admit Handler" podUID="f6c9a4de2476976a7786c461fc3d1c1f" podNamespace="kube-system" podName="kube-apiserver-ci-4230-1-1-f-bd31e1b44e" Apr 30 13:01:01.367293 kubelet[2839]: I0430 13:01:01.365863 2839 topology_manager.go:215] "Topology Admit Handler" podUID="f35cdd210154447b596bbc2b3bc86295" podNamespace="kube-system" podName="kube-controller-manager-ci-4230-1-1-f-bd31e1b44e" Apr 30 13:01:01.367293 kubelet[2839]: I0430 13:01:01.365916 2839 topology_manager.go:215] "Topology Admit Handler" podUID="2a1cc69b1fc92b14b8afab2c358433fd" podNamespace="kube-system" podName="kube-scheduler-ci-4230-1-1-f-bd31e1b44e" Apr 30 13:01:01.383533 kubelet[2839]: E0430 13:01:01.383487 2839 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-4230-1-1-f-bd31e1b44e\" already exists" pod="kube-system/kube-controller-manager-ci-4230-1-1-f-bd31e1b44e" Apr 30 13:01:01.444175 kubelet[2839]: I0430 13:01:01.444134 2839 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f6c9a4de2476976a7786c461fc3d1c1f-ca-certs\") pod \"kube-apiserver-ci-4230-1-1-f-bd31e1b44e\" (UID: 
\"f6c9a4de2476976a7786c461fc3d1c1f\") " pod="kube-system/kube-apiserver-ci-4230-1-1-f-bd31e1b44e" Apr 30 13:01:01.444175 kubelet[2839]: I0430 13:01:01.444176 2839 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f6c9a4de2476976a7786c461fc3d1c1f-k8s-certs\") pod \"kube-apiserver-ci-4230-1-1-f-bd31e1b44e\" (UID: \"f6c9a4de2476976a7786c461fc3d1c1f\") " pod="kube-system/kube-apiserver-ci-4230-1-1-f-bd31e1b44e" Apr 30 13:01:01.444175 kubelet[2839]: I0430 13:01:01.444205 2839 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f35cdd210154447b596bbc2b3bc86295-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4230-1-1-f-bd31e1b44e\" (UID: \"f35cdd210154447b596bbc2b3bc86295\") " pod="kube-system/kube-controller-manager-ci-4230-1-1-f-bd31e1b44e" Apr 30 13:01:01.444175 kubelet[2839]: I0430 13:01:01.444226 2839 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f35cdd210154447b596bbc2b3bc86295-k8s-certs\") pod \"kube-controller-manager-ci-4230-1-1-f-bd31e1b44e\" (UID: \"f35cdd210154447b596bbc2b3bc86295\") " pod="kube-system/kube-controller-manager-ci-4230-1-1-f-bd31e1b44e" Apr 30 13:01:01.444175 kubelet[2839]: I0430 13:01:01.444248 2839 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f35cdd210154447b596bbc2b3bc86295-kubeconfig\") pod \"kube-controller-manager-ci-4230-1-1-f-bd31e1b44e\" (UID: \"f35cdd210154447b596bbc2b3bc86295\") " pod="kube-system/kube-controller-manager-ci-4230-1-1-f-bd31e1b44e" Apr 30 13:01:01.444549 kubelet[2839]: I0430 13:01:01.444268 2839 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2a1cc69b1fc92b14b8afab2c358433fd-kubeconfig\") pod \"kube-scheduler-ci-4230-1-1-f-bd31e1b44e\" (UID: \"2a1cc69b1fc92b14b8afab2c358433fd\") " pod="kube-system/kube-scheduler-ci-4230-1-1-f-bd31e1b44e" Apr 30 13:01:01.444549 kubelet[2839]: I0430 13:01:01.444287 2839 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f6c9a4de2476976a7786c461fc3d1c1f-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4230-1-1-f-bd31e1b44e\" (UID: \"f6c9a4de2476976a7786c461fc3d1c1f\") " pod="kube-system/kube-apiserver-ci-4230-1-1-f-bd31e1b44e" Apr 30 13:01:01.444549 kubelet[2839]: I0430 13:01:01.444304 2839 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f35cdd210154447b596bbc2b3bc86295-ca-certs\") pod \"kube-controller-manager-ci-4230-1-1-f-bd31e1b44e\" (UID: \"f35cdd210154447b596bbc2b3bc86295\") " pod="kube-system/kube-controller-manager-ci-4230-1-1-f-bd31e1b44e" Apr 30 13:01:01.444549 kubelet[2839]: I0430 13:01:01.444321 2839 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/f35cdd210154447b596bbc2b3bc86295-flexvolume-dir\") pod \"kube-controller-manager-ci-4230-1-1-f-bd31e1b44e\" (UID: \"f35cdd210154447b596bbc2b3bc86295\") " pod="kube-system/kube-controller-manager-ci-4230-1-1-f-bd31e1b44e" Apr 30 13:01:01.851086 sudo[2869]: pam_unix(sudo:session): session closed for user root Apr 30 13:01:02.015605 kubelet[2839]: I0430 13:01:02.015537 2839 apiserver.go:52] "Watching apiserver" Apr 30 13:01:02.042687 kubelet[2839]: I0430 13:01:02.042651 2839 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Apr 30 13:01:02.128512 kubelet[2839]: E0430 13:01:02.127889 2839 kubelet.go:1928] 
"Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4230-1-1-f-bd31e1b44e\" already exists" pod="kube-system/kube-apiserver-ci-4230-1-1-f-bd31e1b44e" Apr 30 13:01:02.162998 kubelet[2839]: I0430 13:01:02.162925 2839 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4230-1-1-f-bd31e1b44e" podStartSLOduration=1.162880511 podStartE2EDuration="1.162880511s" podCreationTimestamp="2025-04-30 13:01:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 13:01:02.161674191 +0000 UTC m=+1.246127714" watchObservedRunningTime="2025-04-30 13:01:02.162880511 +0000 UTC m=+1.247334034" Apr 30 13:01:02.163416 kubelet[2839]: I0430 13:01:02.163143 2839 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4230-1-1-f-bd31e1b44e" podStartSLOduration=1.163135903 podStartE2EDuration="1.163135903s" podCreationTimestamp="2025-04-30 13:01:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 13:01:02.147361175 +0000 UTC m=+1.231814698" watchObservedRunningTime="2025-04-30 13:01:02.163135903 +0000 UTC m=+1.247589426" Apr 30 13:01:02.195835 kubelet[2839]: I0430 13:01:02.195717 2839 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4230-1-1-f-bd31e1b44e" podStartSLOduration=3.195699047 podStartE2EDuration="3.195699047s" podCreationTimestamp="2025-04-30 13:00:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 13:01:02.180674055 +0000 UTC m=+1.265127578" watchObservedRunningTime="2025-04-30 13:01:02.195699047 +0000 UTC m=+1.280152570" Apr 30 13:01:03.924738 sudo[1879]: pam_unix(sudo:session): session closed for user root Apr 30 
13:01:04.086512 sshd[1878]: Connection closed by 139.178.89.65 port 49048 Apr 30 13:01:04.087413 sshd-session[1876]: pam_unix(sshd:session): session closed for user core Apr 30 13:01:04.091888 systemd-logind[1477]: Session 7 logged out. Waiting for processes to exit. Apr 30 13:01:04.092801 systemd[1]: sshd@6-91.99.82.124:22-139.178.89.65:49048.service: Deactivated successfully. Apr 30 13:01:04.096283 systemd[1]: session-7.scope: Deactivated successfully. Apr 30 13:01:04.097265 systemd[1]: session-7.scope: Consumed 7.650s CPU time, 294.5M memory peak. Apr 30 13:01:04.100193 systemd-logind[1477]: Removed session 7. Apr 30 13:01:14.256086 kubelet[2839]: I0430 13:01:14.255759 2839 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Apr 30 13:01:14.256690 containerd[1503]: time="2025-04-30T13:01:14.256522600Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Apr 30 13:01:14.257383 kubelet[2839]: I0430 13:01:14.257108 2839 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Apr 30 13:01:15.049293 kubelet[2839]: I0430 13:01:15.049233 2839 topology_manager.go:215] "Topology Admit Handler" podUID="c5fa97cc-4a23-4df8-807f-fcb456311862" podNamespace="kube-system" podName="kube-proxy-ktkz6" Apr 30 13:01:15.062680 systemd[1]: Created slice kubepods-besteffort-podc5fa97cc_4a23_4df8_807f_fcb456311862.slice - libcontainer container kubepods-besteffort-podc5fa97cc_4a23_4df8_807f_fcb456311862.slice. 
Apr 30 13:01:15.079960 kubelet[2839]: I0430 13:01:15.079867 2839 topology_manager.go:215] "Topology Admit Handler" podUID="b30ccadd-3850-43d9-83ca-8074998b853e" podNamespace="kube-system" podName="cilium-x6bx5" Apr 30 13:01:15.088154 kubelet[2839]: W0430 13:01:15.086103 2839 reflector.go:547] object-"kube-system"/"cilium-clustermesh": failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:ci-4230-1-1-f-bd31e1b44e" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4230-1-1-f-bd31e1b44e' and this object Apr 30 13:01:15.088154 kubelet[2839]: E0430 13:01:15.086141 2839 reflector.go:150] object-"kube-system"/"cilium-clustermesh": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:ci-4230-1-1-f-bd31e1b44e" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4230-1-1-f-bd31e1b44e' and this object Apr 30 13:01:15.088154 kubelet[2839]: W0430 13:01:15.086183 2839 reflector.go:547] object-"kube-system"/"hubble-server-certs": failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:ci-4230-1-1-f-bd31e1b44e" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4230-1-1-f-bd31e1b44e' and this object Apr 30 13:01:15.088154 kubelet[2839]: E0430 13:01:15.086193 2839 reflector.go:150] object-"kube-system"/"hubble-server-certs": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:ci-4230-1-1-f-bd31e1b44e" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4230-1-1-f-bd31e1b44e' and this object Apr 30 13:01:15.088154 kubelet[2839]: W0430 13:01:15.086224 2839 reflector.go:547] object-"kube-system"/"cilium-config": failed to list 
*v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:ci-4230-1-1-f-bd31e1b44e" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4230-1-1-f-bd31e1b44e' and this object Apr 30 13:01:15.088365 kubelet[2839]: E0430 13:01:15.086232 2839 reflector.go:150] object-"kube-system"/"cilium-config": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:ci-4230-1-1-f-bd31e1b44e" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4230-1-1-f-bd31e1b44e' and this object Apr 30 13:01:15.091413 systemd[1]: Created slice kubepods-burstable-podb30ccadd_3850_43d9_83ca_8074998b853e.slice - libcontainer container kubepods-burstable-podb30ccadd_3850_43d9_83ca_8074998b853e.slice. Apr 30 13:01:15.134432 kubelet[2839]: I0430 13:01:15.134377 2839 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b30ccadd-3850-43d9-83ca-8074998b853e-cilium-run\") pod \"cilium-x6bx5\" (UID: \"b30ccadd-3850-43d9-83ca-8074998b853e\") " pod="kube-system/cilium-x6bx5" Apr 30 13:01:15.134432 kubelet[2839]: I0430 13:01:15.134428 2839 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-svf74\" (UniqueName: \"kubernetes.io/projected/b30ccadd-3850-43d9-83ca-8074998b853e-kube-api-access-svf74\") pod \"cilium-x6bx5\" (UID: \"b30ccadd-3850-43d9-83ca-8074998b853e\") " pod="kube-system/cilium-x6bx5" Apr 30 13:01:15.134660 kubelet[2839]: I0430 13:01:15.134452 2839 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z6wrv\" (UniqueName: \"kubernetes.io/projected/c5fa97cc-4a23-4df8-807f-fcb456311862-kube-api-access-z6wrv\") pod \"kube-proxy-ktkz6\" (UID: 
\"c5fa97cc-4a23-4df8-807f-fcb456311862\") " pod="kube-system/kube-proxy-ktkz6" Apr 30 13:01:15.134660 kubelet[2839]: I0430 13:01:15.134469 2839 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b30ccadd-3850-43d9-83ca-8074998b853e-cilium-cgroup\") pod \"cilium-x6bx5\" (UID: \"b30ccadd-3850-43d9-83ca-8074998b853e\") " pod="kube-system/cilium-x6bx5" Apr 30 13:01:15.134660 kubelet[2839]: I0430 13:01:15.134495 2839 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b30ccadd-3850-43d9-83ca-8074998b853e-cni-path\") pod \"cilium-x6bx5\" (UID: \"b30ccadd-3850-43d9-83ca-8074998b853e\") " pod="kube-system/cilium-x6bx5" Apr 30 13:01:15.134660 kubelet[2839]: I0430 13:01:15.134510 2839 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b30ccadd-3850-43d9-83ca-8074998b853e-clustermesh-secrets\") pod \"cilium-x6bx5\" (UID: \"b30ccadd-3850-43d9-83ca-8074998b853e\") " pod="kube-system/cilium-x6bx5" Apr 30 13:01:15.134660 kubelet[2839]: I0430 13:01:15.134524 2839 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b30ccadd-3850-43d9-83ca-8074998b853e-bpf-maps\") pod \"cilium-x6bx5\" (UID: \"b30ccadd-3850-43d9-83ca-8074998b853e\") " pod="kube-system/cilium-x6bx5" Apr 30 13:01:15.134660 kubelet[2839]: I0430 13:01:15.134539 2839 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/c5fa97cc-4a23-4df8-807f-fcb456311862-kube-proxy\") pod \"kube-proxy-ktkz6\" (UID: \"c5fa97cc-4a23-4df8-807f-fcb456311862\") " pod="kube-system/kube-proxy-ktkz6" Apr 30 13:01:15.134783 kubelet[2839]: I0430 13:01:15.134563 2839 
reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b30ccadd-3850-43d9-83ca-8074998b853e-lib-modules\") pod \"cilium-x6bx5\" (UID: \"b30ccadd-3850-43d9-83ca-8074998b853e\") " pod="kube-system/cilium-x6bx5" Apr 30 13:01:15.134783 kubelet[2839]: I0430 13:01:15.134578 2839 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b30ccadd-3850-43d9-83ca-8074998b853e-hubble-tls\") pod \"cilium-x6bx5\" (UID: \"b30ccadd-3850-43d9-83ca-8074998b853e\") " pod="kube-system/cilium-x6bx5" Apr 30 13:01:15.134783 kubelet[2839]: I0430 13:01:15.134594 2839 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c5fa97cc-4a23-4df8-807f-fcb456311862-lib-modules\") pod \"kube-proxy-ktkz6\" (UID: \"c5fa97cc-4a23-4df8-807f-fcb456311862\") " pod="kube-system/kube-proxy-ktkz6" Apr 30 13:01:15.134783 kubelet[2839]: I0430 13:01:15.134610 2839 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b30ccadd-3850-43d9-83ca-8074998b853e-cilium-config-path\") pod \"cilium-x6bx5\" (UID: \"b30ccadd-3850-43d9-83ca-8074998b853e\") " pod="kube-system/cilium-x6bx5" Apr 30 13:01:15.134783 kubelet[2839]: I0430 13:01:15.134633 2839 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c5fa97cc-4a23-4df8-807f-fcb456311862-xtables-lock\") pod \"kube-proxy-ktkz6\" (UID: \"c5fa97cc-4a23-4df8-807f-fcb456311862\") " pod="kube-system/kube-proxy-ktkz6" Apr 30 13:01:15.134783 kubelet[2839]: I0430 13:01:15.134683 2839 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: 
\"kubernetes.io/host-path/b30ccadd-3850-43d9-83ca-8074998b853e-hostproc\") pod \"cilium-x6bx5\" (UID: \"b30ccadd-3850-43d9-83ca-8074998b853e\") " pod="kube-system/cilium-x6bx5" Apr 30 13:01:15.134959 kubelet[2839]: I0430 13:01:15.134699 2839 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b30ccadd-3850-43d9-83ca-8074998b853e-etc-cni-netd\") pod \"cilium-x6bx5\" (UID: \"b30ccadd-3850-43d9-83ca-8074998b853e\") " pod="kube-system/cilium-x6bx5" Apr 30 13:01:15.134959 kubelet[2839]: I0430 13:01:15.134715 2839 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b30ccadd-3850-43d9-83ca-8074998b853e-host-proc-sys-net\") pod \"cilium-x6bx5\" (UID: \"b30ccadd-3850-43d9-83ca-8074998b853e\") " pod="kube-system/cilium-x6bx5" Apr 30 13:01:15.134959 kubelet[2839]: I0430 13:01:15.134744 2839 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b30ccadd-3850-43d9-83ca-8074998b853e-xtables-lock\") pod \"cilium-x6bx5\" (UID: \"b30ccadd-3850-43d9-83ca-8074998b853e\") " pod="kube-system/cilium-x6bx5" Apr 30 13:01:15.134959 kubelet[2839]: I0430 13:01:15.134764 2839 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b30ccadd-3850-43d9-83ca-8074998b853e-host-proc-sys-kernel\") pod \"cilium-x6bx5\" (UID: \"b30ccadd-3850-43d9-83ca-8074998b853e\") " pod="kube-system/cilium-x6bx5" Apr 30 13:01:15.318006 kubelet[2839]: I0430 13:01:15.316696 2839 topology_manager.go:215] "Topology Admit Handler" podUID="100e574d-198a-4b95-a363-bc8b3a576912" podNamespace="kube-system" podName="cilium-operator-599987898-b57jb" Apr 30 13:01:15.327375 systemd[1]: Created slice 
kubepods-besteffort-pod100e574d_198a_4b95_a363_bc8b3a576912.slice - libcontainer container kubepods-besteffort-pod100e574d_198a_4b95_a363_bc8b3a576912.slice. Apr 30 13:01:15.336866 kubelet[2839]: I0430 13:01:15.336748 2839 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/100e574d-198a-4b95-a363-bc8b3a576912-cilium-config-path\") pod \"cilium-operator-599987898-b57jb\" (UID: \"100e574d-198a-4b95-a363-bc8b3a576912\") " pod="kube-system/cilium-operator-599987898-b57jb" Apr 30 13:01:15.336866 kubelet[2839]: I0430 13:01:15.336789 2839 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g4z4q\" (UniqueName: \"kubernetes.io/projected/100e574d-198a-4b95-a363-bc8b3a576912-kube-api-access-g4z4q\") pod \"cilium-operator-599987898-b57jb\" (UID: \"100e574d-198a-4b95-a363-bc8b3a576912\") " pod="kube-system/cilium-operator-599987898-b57jb" Apr 30 13:01:15.375379 containerd[1503]: time="2025-04-30T13:01:15.375295086Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-ktkz6,Uid:c5fa97cc-4a23-4df8-807f-fcb456311862,Namespace:kube-system,Attempt:0,}" Apr 30 13:01:15.405180 containerd[1503]: time="2025-04-30T13:01:15.404470258Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 13:01:15.405180 containerd[1503]: time="2025-04-30T13:01:15.404547338Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 13:01:15.405180 containerd[1503]: time="2025-04-30T13:01:15.404563498Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 13:01:15.405180 containerd[1503]: time="2025-04-30T13:01:15.404686738Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 13:01:15.438482 systemd[1]: Started cri-containerd-7307b7172c5edb69797190375416e091aeccd92ba585af392ac88c5ac333ba5b.scope - libcontainer container 7307b7172c5edb69797190375416e091aeccd92ba585af392ac88c5ac333ba5b. Apr 30 13:01:15.464689 containerd[1503]: time="2025-04-30T13:01:15.464614560Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-ktkz6,Uid:c5fa97cc-4a23-4df8-807f-fcb456311862,Namespace:kube-system,Attempt:0,} returns sandbox id \"7307b7172c5edb69797190375416e091aeccd92ba585af392ac88c5ac333ba5b\"" Apr 30 13:01:15.470418 containerd[1503]: time="2025-04-30T13:01:15.470283594Z" level=info msg="CreateContainer within sandbox \"7307b7172c5edb69797190375416e091aeccd92ba585af392ac88c5ac333ba5b\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Apr 30 13:01:15.487214 containerd[1503]: time="2025-04-30T13:01:15.487162258Z" level=info msg="CreateContainer within sandbox \"7307b7172c5edb69797190375416e091aeccd92ba585af392ac88c5ac333ba5b\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"d41bbea87f87c427c80525a54dc22256174d8d7c325da02e9b8e7c6685db8f69\"" Apr 30 13:01:15.488202 containerd[1503]: time="2025-04-30T13:01:15.488094737Z" level=info msg="StartContainer for \"d41bbea87f87c427c80525a54dc22256174d8d7c325da02e9b8e7c6685db8f69\"" Apr 30 13:01:15.526248 systemd[1]: Started cri-containerd-d41bbea87f87c427c80525a54dc22256174d8d7c325da02e9b8e7c6685db8f69.scope - libcontainer container d41bbea87f87c427c80525a54dc22256174d8d7c325da02e9b8e7c6685db8f69. 
Apr 30 13:01:15.562785 containerd[1503]: time="2025-04-30T13:01:15.562601506Z" level=info msg="StartContainer for \"d41bbea87f87c427c80525a54dc22256174d8d7c325da02e9b8e7c6685db8f69\" returns successfully" Apr 30 13:01:16.166100 kubelet[2839]: I0430 13:01:16.164712 2839 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-ktkz6" podStartSLOduration=1.164684682 podStartE2EDuration="1.164684682s" podCreationTimestamp="2025-04-30 13:01:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 13:01:16.163760202 +0000 UTC m=+15.248213725" watchObservedRunningTime="2025-04-30 13:01:16.164684682 +0000 UTC m=+15.249138245" Apr 30 13:01:16.237995 kubelet[2839]: E0430 13:01:16.237922 2839 projected.go:269] Couldn't get secret kube-system/hubble-server-certs: failed to sync secret cache: timed out waiting for the condition Apr 30 13:01:16.237995 kubelet[2839]: E0430 13:01:16.237965 2839 configmap.go:199] Couldn't get configMap kube-system/cilium-config: failed to sync configmap cache: timed out waiting for the condition Apr 30 13:01:16.238336 kubelet[2839]: E0430 13:01:16.238163 2839 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b30ccadd-3850-43d9-83ca-8074998b853e-cilium-config-path podName:b30ccadd-3850-43d9-83ca-8074998b853e nodeName:}" failed. No retries permitted until 2025-04-30 13:01:16.738110113 +0000 UTC m=+15.822563676 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cilium-config-path" (UniqueName: "kubernetes.io/configmap/b30ccadd-3850-43d9-83ca-8074998b853e-cilium-config-path") pod "cilium-x6bx5" (UID: "b30ccadd-3850-43d9-83ca-8074998b853e") : failed to sync configmap cache: timed out waiting for the condition Apr 30 13:01:16.238599 kubelet[2839]: E0430 13:01:16.237971 2839 projected.go:200] Error preparing data for projected volume hubble-tls for pod kube-system/cilium-x6bx5: failed to sync secret cache: timed out waiting for the condition Apr 30 13:01:16.238599 kubelet[2839]: E0430 13:01:16.238565 2839 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b30ccadd-3850-43d9-83ca-8074998b853e-hubble-tls podName:b30ccadd-3850-43d9-83ca-8074998b853e nodeName:}" failed. No retries permitted until 2025-04-30 13:01:16.738535793 +0000 UTC m=+15.822989316 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "hubble-tls" (UniqueName: "kubernetes.io/projected/b30ccadd-3850-43d9-83ca-8074998b853e-hubble-tls") pod "cilium-x6bx5" (UID: "b30ccadd-3850-43d9-83ca-8074998b853e") : failed to sync secret cache: timed out waiting for the condition Apr 30 13:01:16.263346 systemd[1]: run-containerd-runc-k8s.io-7307b7172c5edb69797190375416e091aeccd92ba585af392ac88c5ac333ba5b-runc.67D89h.mount: Deactivated successfully. Apr 30 13:01:16.438529 kubelet[2839]: E0430 13:01:16.438086 2839 configmap.go:199] Couldn't get configMap kube-system/cilium-config: failed to sync configmap cache: timed out waiting for the condition Apr 30 13:01:16.438529 kubelet[2839]: E0430 13:01:16.438182 2839 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/100e574d-198a-4b95-a363-bc8b3a576912-cilium-config-path podName:100e574d-198a-4b95-a363-bc8b3a576912 nodeName:}" failed. No retries permitted until 2025-04-30 13:01:16.938160825 +0000 UTC m=+16.022614348 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cilium-config-path" (UniqueName: "kubernetes.io/configmap/100e574d-198a-4b95-a363-bc8b3a576912-cilium-config-path") pod "cilium-operator-599987898-b57jb" (UID: "100e574d-198a-4b95-a363-bc8b3a576912") : failed to sync configmap cache: timed out waiting for the condition Apr 30 13:01:16.900319 containerd[1503]: time="2025-04-30T13:01:16.900042229Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-x6bx5,Uid:b30ccadd-3850-43d9-83ca-8074998b853e,Namespace:kube-system,Attempt:0,}" Apr 30 13:01:16.926254 containerd[1503]: time="2025-04-30T13:01:16.926152774Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 13:01:16.926518 containerd[1503]: time="2025-04-30T13:01:16.926422415Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 13:01:16.926518 containerd[1503]: time="2025-04-30T13:01:16.926461735Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 13:01:16.926841 containerd[1503]: time="2025-04-30T13:01:16.926763335Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 13:01:16.949241 systemd[1]: Started cri-containerd-3b9fa696465c21eca30bd5f8b9639f7a89c202c6b95f02e4df29ecabf86168ad.scope - libcontainer container 3b9fa696465c21eca30bd5f8b9639f7a89c202c6b95f02e4df29ecabf86168ad. 
Apr 30 13:01:16.976744 containerd[1503]: time="2025-04-30T13:01:16.976429943Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-x6bx5,Uid:b30ccadd-3850-43d9-83ca-8074998b853e,Namespace:kube-system,Attempt:0,} returns sandbox id \"3b9fa696465c21eca30bd5f8b9639f7a89c202c6b95f02e4df29ecabf86168ad\"" Apr 30 13:01:16.979177 containerd[1503]: time="2025-04-30T13:01:16.979143705Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Apr 30 13:01:17.132923 containerd[1503]: time="2025-04-30T13:01:17.132780259Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-b57jb,Uid:100e574d-198a-4b95-a363-bc8b3a576912,Namespace:kube-system,Attempt:0,}" Apr 30 13:01:17.159595 containerd[1503]: time="2025-04-30T13:01:17.159461495Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 13:01:17.159595 containerd[1503]: time="2025-04-30T13:01:17.159526135Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 13:01:17.159595 containerd[1503]: time="2025-04-30T13:01:17.159543175Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 13:01:17.159595 containerd[1503]: time="2025-04-30T13:01:17.159650335Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 13:01:17.177247 systemd[1]: Started cri-containerd-026ff98986fb906c0a79c8ef81fa42d529c30b9b83fcd61ee1fbe0a28fc3734a.scope - libcontainer container 026ff98986fb906c0a79c8ef81fa42d529c30b9b83fcd61ee1fbe0a28fc3734a. 
Apr 30 13:01:17.217658 containerd[1503]: time="2025-04-30T13:01:17.217589099Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-b57jb,Uid:100e574d-198a-4b95-a363-bc8b3a576912,Namespace:kube-system,Attempt:0,} returns sandbox id \"026ff98986fb906c0a79c8ef81fa42d529c30b9b83fcd61ee1fbe0a28fc3734a\"" Apr 30 13:01:17.261968 systemd[1]: run-containerd-runc-k8s.io-3b9fa696465c21eca30bd5f8b9639f7a89c202c6b95f02e4df29ecabf86168ad-runc.fqrmVq.mount: Deactivated successfully. Apr 30 13:01:21.693576 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3475443343.mount: Deactivated successfully. Apr 30 13:01:23.049702 containerd[1503]: time="2025-04-30T13:01:23.049633806Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 13:01:23.051583 containerd[1503]: time="2025-04-30T13:01:23.051495630Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710" Apr 30 13:01:23.052760 containerd[1503]: time="2025-04-30T13:01:23.052699286Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 13:01:23.054143 containerd[1503]: time="2025-04-30T13:01:23.054089504Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 6.074720399s" Apr 30 13:01:23.054143 containerd[1503]: time="2025-04-30T13:01:23.054136504Z" level=info msg="PullImage 
\"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Apr 30 13:01:23.056738 containerd[1503]: time="2025-04-30T13:01:23.056341173Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Apr 30 13:01:23.059693 containerd[1503]: time="2025-04-30T13:01:23.058945326Z" level=info msg="CreateContainer within sandbox \"3b9fa696465c21eca30bd5f8b9639f7a89c202c6b95f02e4df29ecabf86168ad\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Apr 30 13:01:23.074148 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3153072778.mount: Deactivated successfully. Apr 30 13:01:23.085591 containerd[1503]: time="2025-04-30T13:01:23.085533628Z" level=info msg="CreateContainer within sandbox \"3b9fa696465c21eca30bd5f8b9639f7a89c202c6b95f02e4df29ecabf86168ad\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"57e2dbfc433cd2bd6bced76d6a06a0cdcef8f3e0c9a56c16c24ee8903599f134\"" Apr 30 13:01:23.088336 containerd[1503]: time="2025-04-30T13:01:23.086435679Z" level=info msg="StartContainer for \"57e2dbfc433cd2bd6bced76d6a06a0cdcef8f3e0c9a56c16c24ee8903599f134\"" Apr 30 13:01:23.119979 systemd[1]: run-containerd-runc-k8s.io-57e2dbfc433cd2bd6bced76d6a06a0cdcef8f3e0c9a56c16c24ee8903599f134-runc.P2b7Mv.mount: Deactivated successfully. Apr 30 13:01:23.129609 systemd[1]: Started cri-containerd-57e2dbfc433cd2bd6bced76d6a06a0cdcef8f3e0c9a56c16c24ee8903599f134.scope - libcontainer container 57e2dbfc433cd2bd6bced76d6a06a0cdcef8f3e0c9a56c16c24ee8903599f134. 
Apr 30 13:01:23.167308 containerd[1503]: time="2025-04-30T13:01:23.167119396Z" level=info msg="StartContainer for \"57e2dbfc433cd2bd6bced76d6a06a0cdcef8f3e0c9a56c16c24ee8903599f134\" returns successfully" Apr 30 13:01:23.193742 systemd[1]: cri-containerd-57e2dbfc433cd2bd6bced76d6a06a0cdcef8f3e0c9a56c16c24ee8903599f134.scope: Deactivated successfully. Apr 30 13:01:23.392660 containerd[1503]: time="2025-04-30T13:01:23.392362530Z" level=info msg="shim disconnected" id=57e2dbfc433cd2bd6bced76d6a06a0cdcef8f3e0c9a56c16c24ee8903599f134 namespace=k8s.io Apr 30 13:01:23.392660 containerd[1503]: time="2025-04-30T13:01:23.392466131Z" level=warning msg="cleaning up after shim disconnected" id=57e2dbfc433cd2bd6bced76d6a06a0cdcef8f3e0c9a56c16c24ee8903599f134 namespace=k8s.io Apr 30 13:01:23.392660 containerd[1503]: time="2025-04-30T13:01:23.392477491Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 13:01:24.071937 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-57e2dbfc433cd2bd6bced76d6a06a0cdcef8f3e0c9a56c16c24ee8903599f134-rootfs.mount: Deactivated successfully. 
Apr 30 13:01:24.189626 containerd[1503]: time="2025-04-30T13:01:24.187086578Z" level=info msg="CreateContainer within sandbox \"3b9fa696465c21eca30bd5f8b9639f7a89c202c6b95f02e4df29ecabf86168ad\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Apr 30 13:01:24.221097 containerd[1503]: time="2025-04-30T13:01:24.220905943Z" level=info msg="CreateContainer within sandbox \"3b9fa696465c21eca30bd5f8b9639f7a89c202c6b95f02e4df29ecabf86168ad\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"37c0dce39d8572be8274e95eb9d03423183a5b17b65f43f3e773b0b8cc37f9a9\"" Apr 30 13:01:24.223585 containerd[1503]: time="2025-04-30T13:01:24.223369178Z" level=info msg="StartContainer for \"37c0dce39d8572be8274e95eb9d03423183a5b17b65f43f3e773b0b8cc37f9a9\"" Apr 30 13:01:24.258255 systemd[1]: Started cri-containerd-37c0dce39d8572be8274e95eb9d03423183a5b17b65f43f3e773b0b8cc37f9a9.scope - libcontainer container 37c0dce39d8572be8274e95eb9d03423183a5b17b65f43f3e773b0b8cc37f9a9. Apr 30 13:01:24.286946 containerd[1503]: time="2025-04-30T13:01:24.286614965Z" level=info msg="StartContainer for \"37c0dce39d8572be8274e95eb9d03423183a5b17b65f43f3e773b0b8cc37f9a9\" returns successfully" Apr 30 13:01:24.302127 systemd[1]: systemd-sysctl.service: Deactivated successfully. Apr 30 13:01:24.302812 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Apr 30 13:01:24.303428 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Apr 30 13:01:24.311839 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Apr 30 13:01:24.312335 systemd[1]: cri-containerd-37c0dce39d8572be8274e95eb9d03423183a5b17b65f43f3e773b0b8cc37f9a9.scope: Deactivated successfully. Apr 30 13:01:24.344328 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Apr 30 13:01:24.353094 containerd[1503]: time="2025-04-30T13:01:24.352926436Z" level=info msg="shim disconnected" id=37c0dce39d8572be8274e95eb9d03423183a5b17b65f43f3e773b0b8cc37f9a9 namespace=k8s.io Apr 30 13:01:24.353094 containerd[1503]: time="2025-04-30T13:01:24.353094358Z" level=warning msg="cleaning up after shim disconnected" id=37c0dce39d8572be8274e95eb9d03423183a5b17b65f43f3e773b0b8cc37f9a9 namespace=k8s.io Apr 30 13:01:24.353333 containerd[1503]: time="2025-04-30T13:01:24.353110678Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 13:01:25.072620 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-37c0dce39d8572be8274e95eb9d03423183a5b17b65f43f3e773b0b8cc37f9a9-rootfs.mount: Deactivated successfully. Apr 30 13:01:25.190283 containerd[1503]: time="2025-04-30T13:01:25.190244796Z" level=info msg="CreateContainer within sandbox \"3b9fa696465c21eca30bd5f8b9639f7a89c202c6b95f02e4df29ecabf86168ad\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Apr 30 13:01:25.231982 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2928601737.mount: Deactivated successfully. Apr 30 13:01:25.241279 containerd[1503]: time="2025-04-30T13:01:25.241235961Z" level=info msg="CreateContainer within sandbox \"3b9fa696465c21eca30bd5f8b9639f7a89c202c6b95f02e4df29ecabf86168ad\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"6fddddaf246c03b1e2804b52ef88e5e021bdb3990bcbacba154bb6d3c918be5d\"" Apr 30 13:01:25.242970 containerd[1503]: time="2025-04-30T13:01:25.242919748Z" level=info msg="StartContainer for \"6fddddaf246c03b1e2804b52ef88e5e021bdb3990bcbacba154bb6d3c918be5d\"" Apr 30 13:01:25.295191 systemd[1]: Started cri-containerd-6fddddaf246c03b1e2804b52ef88e5e021bdb3990bcbacba154bb6d3c918be5d.scope - libcontainer container 6fddddaf246c03b1e2804b52ef88e5e021bdb3990bcbacba154bb6d3c918be5d. 
Apr 30 13:01:25.350803 systemd[1]: cri-containerd-6fddddaf246c03b1e2804b52ef88e5e021bdb3990bcbacba154bb6d3c918be5d.scope: Deactivated successfully. Apr 30 13:01:25.354158 containerd[1503]: time="2025-04-30T13:01:25.353927100Z" level=info msg="StartContainer for \"6fddddaf246c03b1e2804b52ef88e5e021bdb3990bcbacba154bb6d3c918be5d\" returns successfully" Apr 30 13:01:25.407723 containerd[1503]: time="2025-04-30T13:01:25.407601708Z" level=info msg="shim disconnected" id=6fddddaf246c03b1e2804b52ef88e5e021bdb3990bcbacba154bb6d3c918be5d namespace=k8s.io Apr 30 13:01:25.408113 containerd[1503]: time="2025-04-30T13:01:25.407886992Z" level=warning msg="cleaning up after shim disconnected" id=6fddddaf246c03b1e2804b52ef88e5e021bdb3990bcbacba154bb6d3c918be5d namespace=k8s.io Apr 30 13:01:25.408113 containerd[1503]: time="2025-04-30T13:01:25.407903832Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 13:01:25.613149 containerd[1503]: time="2025-04-30T13:01:25.612986950Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 13:01:25.613998 containerd[1503]: time="2025-04-30T13:01:25.613931165Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306" Apr 30 13:01:25.615240 containerd[1503]: time="2025-04-30T13:01:25.614931660Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 13:01:25.617186 containerd[1503]: time="2025-04-30T13:01:25.617127575Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id 
\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 2.560725642s" Apr 30 13:01:25.617309 containerd[1503]: time="2025-04-30T13:01:25.617196656Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Apr 30 13:01:25.622030 containerd[1503]: time="2025-04-30T13:01:25.621856650Z" level=info msg="CreateContainer within sandbox \"026ff98986fb906c0a79c8ef81fa42d529c30b9b83fcd61ee1fbe0a28fc3734a\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Apr 30 13:01:25.642347 containerd[1503]: time="2025-04-30T13:01:25.642204731Z" level=info msg="CreateContainer within sandbox \"026ff98986fb906c0a79c8ef81fa42d529c30b9b83fcd61ee1fbe0a28fc3734a\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"0280f36ce75780011367104d32355f1a9499bed05262421056e48278922b59cb\"" Apr 30 13:01:25.644072 containerd[1503]: time="2025-04-30T13:01:25.642937663Z" level=info msg="StartContainer for \"0280f36ce75780011367104d32355f1a9499bed05262421056e48278922b59cb\"" Apr 30 13:01:25.678231 systemd[1]: Started cri-containerd-0280f36ce75780011367104d32355f1a9499bed05262421056e48278922b59cb.scope - libcontainer container 0280f36ce75780011367104d32355f1a9499bed05262421056e48278922b59cb. Apr 30 13:01:25.713752 containerd[1503]: time="2025-04-30T13:01:25.713643659Z" level=info msg="StartContainer for \"0280f36ce75780011367104d32355f1a9499bed05262421056e48278922b59cb\" returns successfully" Apr 30 13:01:26.074695 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6fddddaf246c03b1e2804b52ef88e5e021bdb3990bcbacba154bb6d3c918be5d-rootfs.mount: Deactivated successfully. 
Apr 30 13:01:26.198913 containerd[1503]: time="2025-04-30T13:01:26.198325826Z" level=info msg="CreateContainer within sandbox \"3b9fa696465c21eca30bd5f8b9639f7a89c202c6b95f02e4df29ecabf86168ad\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Apr 30 13:01:26.225858 containerd[1503]: time="2025-04-30T13:01:26.225591415Z" level=info msg="CreateContainer within sandbox \"3b9fa696465c21eca30bd5f8b9639f7a89c202c6b95f02e4df29ecabf86168ad\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"ed33cf4392c54f73b83456331ed097ca4c26fec52839ee37acb60dbcd9a7338a\"" Apr 30 13:01:26.227217 containerd[1503]: time="2025-04-30T13:01:26.227129201Z" level=info msg="StartContainer for \"ed33cf4392c54f73b83456331ed097ca4c26fec52839ee37acb60dbcd9a7338a\"" Apr 30 13:01:26.279644 systemd[1]: Started cri-containerd-ed33cf4392c54f73b83456331ed097ca4c26fec52839ee37acb60dbcd9a7338a.scope - libcontainer container ed33cf4392c54f73b83456331ed097ca4c26fec52839ee37acb60dbcd9a7338a. Apr 30 13:01:26.336555 containerd[1503]: time="2025-04-30T13:01:26.335719508Z" level=info msg="StartContainer for \"ed33cf4392c54f73b83456331ed097ca4c26fec52839ee37acb60dbcd9a7338a\" returns successfully" Apr 30 13:01:26.338332 systemd[1]: cri-containerd-ed33cf4392c54f73b83456331ed097ca4c26fec52839ee37acb60dbcd9a7338a.scope: Deactivated successfully. 
Apr 30 13:01:26.396476 containerd[1503]: time="2025-04-30T13:01:26.396389710Z" level=info msg="shim disconnected" id=ed33cf4392c54f73b83456331ed097ca4c26fec52839ee37acb60dbcd9a7338a namespace=k8s.io Apr 30 13:01:26.396476 containerd[1503]: time="2025-04-30T13:01:26.396467432Z" level=warning msg="cleaning up after shim disconnected" id=ed33cf4392c54f73b83456331ed097ca4c26fec52839ee37acb60dbcd9a7338a namespace=k8s.io Apr 30 13:01:26.396476 containerd[1503]: time="2025-04-30T13:01:26.396477872Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 13:01:27.074084 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ed33cf4392c54f73b83456331ed097ca4c26fec52839ee37acb60dbcd9a7338a-rootfs.mount: Deactivated successfully. Apr 30 13:01:27.212910 containerd[1503]: time="2025-04-30T13:01:27.212860909Z" level=info msg="CreateContainer within sandbox \"3b9fa696465c21eca30bd5f8b9639f7a89c202c6b95f02e4df29ecabf86168ad\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Apr 30 13:01:27.244557 containerd[1503]: time="2025-04-30T13:01:27.241621963Z" level=info msg="CreateContainer within sandbox \"3b9fa696465c21eca30bd5f8b9639f7a89c202c6b95f02e4df29ecabf86168ad\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"d1bdb872f677740557ee904f6e25f6310a98b25cee3ecd4b8e8b4e0d7de3a9e6\"" Apr 30 13:01:27.244786 kubelet[2839]: I0430 13:01:27.242821 2839 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-599987898-b57jb" podStartSLOduration=3.84426934 podStartE2EDuration="12.242800505s" podCreationTimestamp="2025-04-30 13:01:15 +0000 UTC" firstStartedPulling="2025-04-30 13:01:17.219547705 +0000 UTC m=+16.304001228" lastFinishedPulling="2025-04-30 13:01:25.61807887 +0000 UTC m=+24.702532393" observedRunningTime="2025-04-30 13:01:26.341944935 +0000 UTC m=+25.426398418" watchObservedRunningTime="2025-04-30 13:01:27.242800505 +0000 UTC m=+26.327254028" Apr 30 13:01:27.245194 
containerd[1503]: time="2025-04-30T13:01:27.245161708Z" level=info msg="StartContainer for \"d1bdb872f677740557ee904f6e25f6310a98b25cee3ecd4b8e8b4e0d7de3a9e6\"" Apr 30 13:01:27.290241 systemd[1]: Started cri-containerd-d1bdb872f677740557ee904f6e25f6310a98b25cee3ecd4b8e8b4e0d7de3a9e6.scope - libcontainer container d1bdb872f677740557ee904f6e25f6310a98b25cee3ecd4b8e8b4e0d7de3a9e6. Apr 30 13:01:27.327637 containerd[1503]: time="2025-04-30T13:01:27.327504875Z" level=info msg="StartContainer for \"d1bdb872f677740557ee904f6e25f6310a98b25cee3ecd4b8e8b4e0d7de3a9e6\" returns successfully" Apr 30 13:01:27.437533 kubelet[2839]: I0430 13:01:27.437487 2839 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Apr 30 13:01:27.472245 kubelet[2839]: I0430 13:01:27.472191 2839 topology_manager.go:215] "Topology Admit Handler" podUID="68719ba3-e3b6-46b1-ae10-6cd177a70c0a" podNamespace="kube-system" podName="coredns-7db6d8ff4d-wc9gg" Apr 30 13:01:27.478559 kubelet[2839]: I0430 13:01:27.477876 2839 topology_manager.go:215] "Topology Admit Handler" podUID="e4168765-42d3-4ff5-a28e-d46a02516874" podNamespace="kube-system" podName="coredns-7db6d8ff4d-m8bzg" Apr 30 13:01:27.488203 systemd[1]: Created slice kubepods-burstable-pod68719ba3_e3b6_46b1_ae10_6cd177a70c0a.slice - libcontainer container kubepods-burstable-pod68719ba3_e3b6_46b1_ae10_6cd177a70c0a.slice. Apr 30 13:01:27.498414 systemd[1]: Created slice kubepods-burstable-pode4168765_42d3_4ff5_a28e_d46a02516874.slice - libcontainer container kubepods-burstable-pode4168765_42d3_4ff5_a28e_d46a02516874.slice. 
Apr 30 13:01:27.621293 kubelet[2839]: I0430 13:01:27.621087 2839 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5rnh8\" (UniqueName: \"kubernetes.io/projected/e4168765-42d3-4ff5-a28e-d46a02516874-kube-api-access-5rnh8\") pod \"coredns-7db6d8ff4d-m8bzg\" (UID: \"e4168765-42d3-4ff5-a28e-d46a02516874\") " pod="kube-system/coredns-7db6d8ff4d-m8bzg" Apr 30 13:01:27.621293 kubelet[2839]: I0430 13:01:27.621199 2839 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/68719ba3-e3b6-46b1-ae10-6cd177a70c0a-config-volume\") pod \"coredns-7db6d8ff4d-wc9gg\" (UID: \"68719ba3-e3b6-46b1-ae10-6cd177a70c0a\") " pod="kube-system/coredns-7db6d8ff4d-wc9gg" Apr 30 13:01:27.621293 kubelet[2839]: I0430 13:01:27.621225 2839 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-czk4q\" (UniqueName: \"kubernetes.io/projected/68719ba3-e3b6-46b1-ae10-6cd177a70c0a-kube-api-access-czk4q\") pod \"coredns-7db6d8ff4d-wc9gg\" (UID: \"68719ba3-e3b6-46b1-ae10-6cd177a70c0a\") " pod="kube-system/coredns-7db6d8ff4d-wc9gg" Apr 30 13:01:27.621293 kubelet[2839]: I0430 13:01:27.621247 2839 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e4168765-42d3-4ff5-a28e-d46a02516874-config-volume\") pod \"coredns-7db6d8ff4d-m8bzg\" (UID: \"e4168765-42d3-4ff5-a28e-d46a02516874\") " pod="kube-system/coredns-7db6d8ff4d-m8bzg" Apr 30 13:01:27.796038 containerd[1503]: time="2025-04-30T13:01:27.795294150Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-wc9gg,Uid:68719ba3-e3b6-46b1-ae10-6cd177a70c0a,Namespace:kube-system,Attempt:0,}" Apr 30 13:01:27.803946 containerd[1503]: time="2025-04-30T13:01:27.803885869Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-7db6d8ff4d-m8bzg,Uid:e4168765-42d3-4ff5-a28e-d46a02516874,Namespace:kube-system,Attempt:0,}" Apr 30 13:01:28.241844 kubelet[2839]: I0430 13:01:28.241450 2839 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-x6bx5" podStartSLOduration=7.163588513 podStartE2EDuration="13.241424459s" podCreationTimestamp="2025-04-30 13:01:15 +0000 UTC" firstStartedPulling="2025-04-30 13:01:16.978306824 +0000 UTC m=+16.062760347" lastFinishedPulling="2025-04-30 13:01:23.05614265 +0000 UTC m=+22.140596293" observedRunningTime="2025-04-30 13:01:28.238286316 +0000 UTC m=+27.322739879" watchObservedRunningTime="2025-04-30 13:01:28.241424459 +0000 UTC m=+27.325877982" Apr 30 13:01:29.658975 systemd-networkd[1378]: cilium_host: Link UP Apr 30 13:01:29.659129 systemd-networkd[1378]: cilium_net: Link UP Apr 30 13:01:29.659132 systemd-networkd[1378]: cilium_net: Gained carrier Apr 30 13:01:29.659274 systemd-networkd[1378]: cilium_host: Gained carrier Apr 30 13:01:29.659400 systemd-networkd[1378]: cilium_host: Gained IPv6LL Apr 30 13:01:29.698338 systemd-networkd[1378]: cilium_net: Gained IPv6LL Apr 30 13:01:29.798092 systemd-networkd[1378]: cilium_vxlan: Link UP Apr 30 13:01:29.798100 systemd-networkd[1378]: cilium_vxlan: Gained carrier Apr 30 13:01:30.098116 kernel: NET: Registered PF_ALG protocol family Apr 30 13:01:30.882744 systemd-networkd[1378]: lxc_health: Link UP Apr 30 13:01:30.890092 systemd-networkd[1378]: lxc_health: Gained carrier Apr 30 13:01:31.366379 kernel: eth0: renamed from tmp40d6d Apr 30 13:01:31.368681 systemd-networkd[1378]: lxc4adffc053fd5: Link UP Apr 30 13:01:31.376346 systemd-networkd[1378]: lxc4adffc053fd5: Gained carrier Apr 30 13:01:31.388056 kernel: eth0: renamed from tmpaf813 Apr 30 13:01:31.393734 systemd-networkd[1378]: lxce0b416580897: Link UP Apr 30 13:01:31.398213 systemd-networkd[1378]: lxce0b416580897: Gained carrier Apr 30 13:01:31.736651 systemd-networkd[1378]: cilium_vxlan: 
Gained IPv6LL Apr 30 13:01:32.440331 systemd-networkd[1378]: lxc_health: Gained IPv6LL Apr 30 13:01:32.698064 systemd-networkd[1378]: lxc4adffc053fd5: Gained IPv6LL Apr 30 13:01:33.272963 systemd-networkd[1378]: lxce0b416580897: Gained IPv6LL Apr 30 13:01:33.425778 kubelet[2839]: I0430 13:01:33.425702 2839 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 30 13:01:35.531761 containerd[1503]: time="2025-04-30T13:01:35.531180674Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 13:01:35.531761 containerd[1503]: time="2025-04-30T13:01:35.531719930Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 13:01:35.531761 containerd[1503]: time="2025-04-30T13:01:35.531735730Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 13:01:35.532520 containerd[1503]: time="2025-04-30T13:01:35.531903735Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 13:01:35.561852 systemd[1]: Started cri-containerd-af813646d64a36f5b7e9414ed92e238dacac21db73651ce8d802e8dfe4d1d4d9.scope - libcontainer container af813646d64a36f5b7e9414ed92e238dacac21db73651ce8d802e8dfe4d1d4d9. Apr 30 13:01:35.590343 containerd[1503]: time="2025-04-30T13:01:35.589363662Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 13:01:35.590343 containerd[1503]: time="2025-04-30T13:01:35.589610909Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 13:01:35.590343 containerd[1503]: time="2025-04-30T13:01:35.589687711Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 13:01:35.590343 containerd[1503]: time="2025-04-30T13:01:35.590043601Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 13:01:35.633900 systemd[1]: Started cri-containerd-40d6d8f03ec24e3e43517d6e97aed239519e023f5837e83f67cc9748f72f558a.scope - libcontainer container 40d6d8f03ec24e3e43517d6e97aed239519e023f5837e83f67cc9748f72f558a. Apr 30 13:01:35.642070 containerd[1503]: time="2025-04-30T13:01:35.641891652Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-m8bzg,Uid:e4168765-42d3-4ff5-a28e-d46a02516874,Namespace:kube-system,Attempt:0,} returns sandbox id \"af813646d64a36f5b7e9414ed92e238dacac21db73651ce8d802e8dfe4d1d4d9\"" Apr 30 13:01:35.647932 containerd[1503]: time="2025-04-30T13:01:35.647425167Z" level=info msg="CreateContainer within sandbox \"af813646d64a36f5b7e9414ed92e238dacac21db73651ce8d802e8dfe4d1d4d9\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 30 13:01:35.669241 containerd[1503]: time="2025-04-30T13:01:35.669108973Z" level=info msg="CreateContainer within sandbox \"af813646d64a36f5b7e9414ed92e238dacac21db73651ce8d802e8dfe4d1d4d9\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"d27efe26f8b59c7e2b9d49cb5d6213a9ab53c8015e69cd3c4707481356077f12\"" Apr 30 13:01:35.670691 containerd[1503]: time="2025-04-30T13:01:35.670403330Z" level=info msg="StartContainer for \"d27efe26f8b59c7e2b9d49cb5d6213a9ab53c8015e69cd3c4707481356077f12\"" Apr 30 13:01:35.694934 containerd[1503]: time="2025-04-30T13:01:35.694269757Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-7db6d8ff4d-wc9gg,Uid:68719ba3-e3b6-46b1-ae10-6cd177a70c0a,Namespace:kube-system,Attempt:0,} returns sandbox id \"40d6d8f03ec24e3e43517d6e97aed239519e023f5837e83f67cc9748f72f558a\"" Apr 30 13:01:35.719656 containerd[1503]: time="2025-04-30T13:01:35.719485583Z" level=info msg="CreateContainer within sandbox \"40d6d8f03ec24e3e43517d6e97aed239519e023f5837e83f67cc9748f72f558a\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 30 13:01:35.737572 systemd[1]: Started cri-containerd-d27efe26f8b59c7e2b9d49cb5d6213a9ab53c8015e69cd3c4707481356077f12.scope - libcontainer container d27efe26f8b59c7e2b9d49cb5d6213a9ab53c8015e69cd3c4707481356077f12. Apr 30 13:01:35.747377 containerd[1503]: time="2025-04-30T13:01:35.747318122Z" level=info msg="CreateContainer within sandbox \"40d6d8f03ec24e3e43517d6e97aed239519e023f5837e83f67cc9748f72f558a\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"7cd776740ff3d76932c7e54347e34ea6cd2a59db11f3bdf347bdcd2eaeba5cc8\"" Apr 30 13:01:35.748705 containerd[1503]: time="2025-04-30T13:01:35.748624758Z" level=info msg="StartContainer for \"7cd776740ff3d76932c7e54347e34ea6cd2a59db11f3bdf347bdcd2eaeba5cc8\"" Apr 30 13:01:35.793434 containerd[1503]: time="2025-04-30T13:01:35.792114815Z" level=info msg="StartContainer for \"d27efe26f8b59c7e2b9d49cb5d6213a9ab53c8015e69cd3c4707481356077f12\" returns successfully" Apr 30 13:01:35.801381 systemd[1]: Started cri-containerd-7cd776740ff3d76932c7e54347e34ea6cd2a59db11f3bdf347bdcd2eaeba5cc8.scope - libcontainer container 7cd776740ff3d76932c7e54347e34ea6cd2a59db11f3bdf347bdcd2eaeba5cc8. 
Apr 30 13:01:35.851422 containerd[1503]: time="2025-04-30T13:01:35.851358393Z" level=info msg="StartContainer for \"7cd776740ff3d76932c7e54347e34ea6cd2a59db11f3bdf347bdcd2eaeba5cc8\" returns successfully" Apr 30 13:01:36.269157 kubelet[2839]: I0430 13:01:36.268959 2839 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-m8bzg" podStartSLOduration=21.268938389 podStartE2EDuration="21.268938389s" podCreationTimestamp="2025-04-30 13:01:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 13:01:36.264947913 +0000 UTC m=+35.349401476" watchObservedRunningTime="2025-04-30 13:01:36.268938389 +0000 UTC m=+35.353391912" Apr 30 13:01:36.282189 kubelet[2839]: I0430 13:01:36.282121 2839 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-wc9gg" podStartSLOduration=21.282104251 podStartE2EDuration="21.282104251s" podCreationTimestamp="2025-04-30 13:01:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 13:01:36.281574795 +0000 UTC m=+35.366028318" watchObservedRunningTime="2025-04-30 13:01:36.282104251 +0000 UTC m=+35.366557774" Apr 30 13:04:24.900494 update_engine[1479]: I20250430 13:04:24.900183 1479 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Apr 30 13:04:24.900494 update_engine[1479]: I20250430 13:04:24.900286 1479 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Apr 30 13:04:24.905497 update_engine[1479]: I20250430 13:04:24.900653 1479 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Apr 30 13:04:24.905497 update_engine[1479]: I20250430 13:04:24.902560 1479 omaha_request_params.cc:62] Current group set to beta Apr 30 13:04:24.905497 update_engine[1479]: I20250430 
13:04:24.902683 1479 update_attempter.cc:499] Already updated boot flags. Skipping. Apr 30 13:04:24.905497 update_engine[1479]: I20250430 13:04:24.902695 1479 update_attempter.cc:643] Scheduling an action processor start. Apr 30 13:04:24.905497 update_engine[1479]: I20250430 13:04:24.902719 1479 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Apr 30 13:04:24.905497 update_engine[1479]: I20250430 13:04:24.902765 1479 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Apr 30 13:04:24.905497 update_engine[1479]: I20250430 13:04:24.902827 1479 omaha_request_action.cc:271] Posting an Omaha request to disabled Apr 30 13:04:24.905497 update_engine[1479]: I20250430 13:04:24.902849 1479 omaha_request_action.cc:272] Request: Apr 30 13:04:24.905497 update_engine[1479]: Apr 30 13:04:24.905497 update_engine[1479]: Apr 30 13:04:24.905497 update_engine[1479]: Apr 30 13:04:24.905497 update_engine[1479]: Apr 30 13:04:24.905497 update_engine[1479]: Apr 30 13:04:24.905497 update_engine[1479]: Apr 30 13:04:24.905497 update_engine[1479]: Apr 30 13:04:24.905497 update_engine[1479]: Apr 30 13:04:24.905497 update_engine[1479]: I20250430 13:04:24.902855 1479 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Apr 30 13:04:24.905497 update_engine[1479]: I20250430 13:04:24.904738 1479 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Apr 30 13:04:24.905497 update_engine[1479]: I20250430 13:04:24.905164 1479 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Apr 30 13:04:24.905879 locksmithd[1512]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Apr 30 13:04:24.908411 update_engine[1479]: E20250430 13:04:24.908283 1479 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Apr 30 13:04:24.908411 update_engine[1479]: I20250430 13:04:24.908377 1479 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Apr 30 13:04:34.801121 update_engine[1479]: I20250430 13:04:34.800709 1479 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Apr 30 13:04:34.801792 update_engine[1479]: I20250430 13:04:34.801182 1479 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Apr 30 13:04:34.801792 update_engine[1479]: I20250430 13:04:34.801620 1479 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Apr 30 13:04:34.802162 update_engine[1479]: E20250430 13:04:34.802084 1479 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Apr 30 13:04:34.802267 update_engine[1479]: I20250430 13:04:34.802222 1479 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Apr 30 13:04:44.810211 update_engine[1479]: I20250430 13:04:44.809912 1479 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Apr 30 13:04:44.810764 update_engine[1479]: I20250430 13:04:44.810431 1479 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Apr 30 13:04:44.810964 update_engine[1479]: I20250430 13:04:44.810856 1479 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Apr 30 13:04:44.811401 update_engine[1479]: E20250430 13:04:44.811331 1479 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Apr 30 13:04:44.811480 update_engine[1479]: I20250430 13:04:44.811450 1479 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Apr 30 13:04:54.808936 update_engine[1479]: I20250430 13:04:54.808733 1479 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Apr 30 13:04:54.809478 update_engine[1479]: I20250430 13:04:54.809233 1479 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Apr 30 13:04:54.809937 update_engine[1479]: I20250430 13:04:54.809824 1479 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Apr 30 13:04:54.810280 update_engine[1479]: E20250430 13:04:54.810102 1479 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Apr 30 13:04:54.810280 update_engine[1479]: I20250430 13:04:54.810254 1479 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Apr 30 13:04:54.810280 update_engine[1479]: I20250430 13:04:54.810273 1479 omaha_request_action.cc:617] Omaha request response: Apr 30 13:04:54.810551 update_engine[1479]: E20250430 13:04:54.810411 1479 omaha_request_action.cc:636] Omaha request network transfer failed. Apr 30 13:04:54.810551 update_engine[1479]: I20250430 13:04:54.810446 1479 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Apr 30 13:04:54.810551 update_engine[1479]: I20250430 13:04:54.810457 1479 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Apr 30 13:04:54.810551 update_engine[1479]: I20250430 13:04:54.810467 1479 update_attempter.cc:306] Processing Done. Apr 30 13:04:54.810551 update_engine[1479]: E20250430 13:04:54.810489 1479 update_attempter.cc:619] Update failed. 
Apr 30 13:04:54.810551 update_engine[1479]: I20250430 13:04:54.810500 1479 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Apr 30 13:04:54.810551 update_engine[1479]: I20250430 13:04:54.810510 1479 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Apr 30 13:04:54.810551 update_engine[1479]: I20250430 13:04:54.810521 1479 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. Apr 30 13:04:54.811351 update_engine[1479]: I20250430 13:04:54.810639 1479 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Apr 30 13:04:54.811351 update_engine[1479]: I20250430 13:04:54.810678 1479 omaha_request_action.cc:271] Posting an Omaha request to disabled Apr 30 13:04:54.811351 update_engine[1479]: I20250430 13:04:54.810688 1479 omaha_request_action.cc:272] Request: Apr 30 13:04:54.811351 update_engine[1479]: Apr 30 13:04:54.811351 update_engine[1479]: Apr 30 13:04:54.811351 update_engine[1479]: Apr 30 13:04:54.811351 update_engine[1479]: Apr 30 13:04:54.811351 update_engine[1479]: Apr 30 13:04:54.811351 update_engine[1479]: Apr 30 13:04:54.811351 update_engine[1479]: I20250430 13:04:54.810721 1479 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Apr 30 13:04:54.811351 update_engine[1479]: I20250430 13:04:54.811000 1479 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Apr 30 13:04:54.812369 update_engine[1479]: I20250430 13:04:54.811406 1479 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Apr 30 13:04:54.812369 update_engine[1479]: E20250430 13:04:54.811980 1479 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Apr 30 13:04:54.812369 update_engine[1479]: I20250430 13:04:54.812121 1479 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Apr 30 13:04:54.812369 update_engine[1479]: I20250430 13:04:54.812142 1479 omaha_request_action.cc:617] Omaha request response: Apr 30 13:04:54.812369 update_engine[1479]: I20250430 13:04:54.812155 1479 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Apr 30 13:04:54.812369 update_engine[1479]: I20250430 13:04:54.812165 1479 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Apr 30 13:04:54.812369 update_engine[1479]: I20250430 13:04:54.812175 1479 update_attempter.cc:306] Processing Done. Apr 30 13:04:54.812369 update_engine[1479]: I20250430 13:04:54.812186 1479 update_attempter.cc:310] Error event sent. Apr 30 13:04:54.812369 update_engine[1479]: I20250430 13:04:54.812203 1479 update_check_scheduler.cc:74] Next update check in 49m20s Apr 30 13:04:54.812797 locksmithd[1512]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Apr 30 13:04:54.812797 locksmithd[1512]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Apr 30 13:05:40.575414 systemd[1]: Started sshd@7-91.99.82.124:22-139.178.89.65:37140.service - OpenSSH per-connection server daemon (139.178.89.65:37140). Apr 30 13:05:41.556834 sshd[4258]: Accepted publickey for core from 139.178.89.65 port 37140 ssh2: RSA SHA256:qidWeGQ/AMu2DEHjNgm4r7KCFn+EUn2ITyolPPgrSbA Apr 30 13:05:41.559279 sshd-session[4258]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 13:05:41.565785 systemd-logind[1477]: New session 8 of user core. 
Apr 30 13:05:41.575480 systemd[1]: Started session-8.scope - Session 8 of User core. Apr 30 13:05:42.331583 sshd[4260]: Connection closed by 139.178.89.65 port 37140 Apr 30 13:05:42.331400 sshd-session[4258]: pam_unix(sshd:session): session closed for user core Apr 30 13:05:42.336983 systemd[1]: sshd@7-91.99.82.124:22-139.178.89.65:37140.service: Deactivated successfully. Apr 30 13:05:42.339599 systemd[1]: session-8.scope: Deactivated successfully. Apr 30 13:05:42.340770 systemd-logind[1477]: Session 8 logged out. Waiting for processes to exit. Apr 30 13:05:42.342494 systemd-logind[1477]: Removed session 8. Apr 30 13:05:47.513357 systemd[1]: Started sshd@8-91.99.82.124:22-139.178.89.65:56908.service - OpenSSH per-connection server daemon (139.178.89.65:56908). Apr 30 13:05:48.500294 sshd[4275]: Accepted publickey for core from 139.178.89.65 port 56908 ssh2: RSA SHA256:qidWeGQ/AMu2DEHjNgm4r7KCFn+EUn2ITyolPPgrSbA Apr 30 13:05:48.502660 sshd-session[4275]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 13:05:48.510139 systemd-logind[1477]: New session 9 of user core. Apr 30 13:05:48.513440 systemd[1]: Started session-9.scope - Session 9 of User core. Apr 30 13:05:49.266167 sshd[4277]: Connection closed by 139.178.89.65 port 56908 Apr 30 13:05:49.267835 sshd-session[4275]: pam_unix(sshd:session): session closed for user core Apr 30 13:05:49.276420 systemd-logind[1477]: Session 9 logged out. Waiting for processes to exit. Apr 30 13:05:49.277392 systemd[1]: sshd@8-91.99.82.124:22-139.178.89.65:56908.service: Deactivated successfully. Apr 30 13:05:49.282560 systemd[1]: session-9.scope: Deactivated successfully. Apr 30 13:05:49.286963 systemd-logind[1477]: Removed session 9. Apr 30 13:05:54.446628 systemd[1]: Started sshd@9-91.99.82.124:22-139.178.89.65:56916.service - OpenSSH per-connection server daemon (139.178.89.65:56916). 
Apr 30 13:05:55.425007 sshd[4290]: Accepted publickey for core from 139.178.89.65 port 56916 ssh2: RSA SHA256:qidWeGQ/AMu2DEHjNgm4r7KCFn+EUn2ITyolPPgrSbA Apr 30 13:05:55.426231 sshd-session[4290]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 13:05:55.435114 systemd-logind[1477]: New session 10 of user core. Apr 30 13:05:55.438654 systemd[1]: Started session-10.scope - Session 10 of User core. Apr 30 13:05:56.179467 sshd[4292]: Connection closed by 139.178.89.65 port 56916 Apr 30 13:05:56.180523 sshd-session[4290]: pam_unix(sshd:session): session closed for user core Apr 30 13:05:56.185420 systemd-logind[1477]: Session 10 logged out. Waiting for processes to exit. Apr 30 13:05:56.186553 systemd[1]: sshd@9-91.99.82.124:22-139.178.89.65:56916.service: Deactivated successfully. Apr 30 13:05:56.189245 systemd[1]: session-10.scope: Deactivated successfully. Apr 30 13:05:56.192573 systemd-logind[1477]: Removed session 10. Apr 30 13:05:56.360576 systemd[1]: Started sshd@10-91.99.82.124:22-139.178.89.65:56928.service - OpenSSH per-connection server daemon (139.178.89.65:56928). Apr 30 13:05:57.358337 sshd[4304]: Accepted publickey for core from 139.178.89.65 port 56928 ssh2: RSA SHA256:qidWeGQ/AMu2DEHjNgm4r7KCFn+EUn2ITyolPPgrSbA Apr 30 13:05:57.360730 sshd-session[4304]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 13:05:57.366353 systemd-logind[1477]: New session 11 of user core. Apr 30 13:05:57.375365 systemd[1]: Started session-11.scope - Session 11 of User core. Apr 30 13:05:58.250778 sshd[4306]: Connection closed by 139.178.89.65 port 56928 Apr 30 13:05:58.252378 sshd-session[4304]: pam_unix(sshd:session): session closed for user core Apr 30 13:05:58.258738 systemd[1]: sshd@10-91.99.82.124:22-139.178.89.65:56928.service: Deactivated successfully. Apr 30 13:05:58.258939 systemd-logind[1477]: Session 11 logged out. Waiting for processes to exit. 
Apr 30 13:05:58.262500 systemd[1]: session-11.scope: Deactivated successfully. Apr 30 13:05:58.264993 systemd-logind[1477]: Removed session 11. Apr 30 13:05:58.434589 systemd[1]: Started sshd@11-91.99.82.124:22-139.178.89.65:45466.service - OpenSSH per-connection server daemon (139.178.89.65:45466). Apr 30 13:05:59.431801 sshd[4316]: Accepted publickey for core from 139.178.89.65 port 45466 ssh2: RSA SHA256:qidWeGQ/AMu2DEHjNgm4r7KCFn+EUn2ITyolPPgrSbA Apr 30 13:05:59.434274 sshd-session[4316]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 13:05:59.441011 systemd-logind[1477]: New session 12 of user core. Apr 30 13:05:59.446194 systemd[1]: Started session-12.scope - Session 12 of User core. Apr 30 13:06:00.201074 sshd[4318]: Connection closed by 139.178.89.65 port 45466 Apr 30 13:06:00.201707 sshd-session[4316]: pam_unix(sshd:session): session closed for user core Apr 30 13:06:00.206698 systemd[1]: sshd@11-91.99.82.124:22-139.178.89.65:45466.service: Deactivated successfully. Apr 30 13:06:00.211509 systemd[1]: session-12.scope: Deactivated successfully. Apr 30 13:06:00.212625 systemd-logind[1477]: Session 12 logged out. Waiting for processes to exit. Apr 30 13:06:00.214158 systemd-logind[1477]: Removed session 12. Apr 30 13:06:05.382358 systemd[1]: Started sshd@12-91.99.82.124:22-139.178.89.65:45468.service - OpenSSH per-connection server daemon (139.178.89.65:45468). Apr 30 13:06:06.380410 sshd[4331]: Accepted publickey for core from 139.178.89.65 port 45468 ssh2: RSA SHA256:qidWeGQ/AMu2DEHjNgm4r7KCFn+EUn2ITyolPPgrSbA Apr 30 13:06:06.381754 sshd-session[4331]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 13:06:06.388230 systemd-logind[1477]: New session 13 of user core. Apr 30 13:06:06.397308 systemd[1]: Started session-13.scope - Session 13 of User core. 
Apr 30 13:06:07.139712 sshd[4333]: Connection closed by 139.178.89.65 port 45468 Apr 30 13:06:07.140674 sshd-session[4331]: pam_unix(sshd:session): session closed for user core Apr 30 13:06:07.146963 systemd-logind[1477]: Session 13 logged out. Waiting for processes to exit. Apr 30 13:06:07.147535 systemd[1]: sshd@12-91.99.82.124:22-139.178.89.65:45468.service: Deactivated successfully. Apr 30 13:06:07.150462 systemd[1]: session-13.scope: Deactivated successfully. Apr 30 13:06:07.151852 systemd-logind[1477]: Removed session 13. Apr 30 13:06:12.313456 systemd[1]: Started sshd@13-91.99.82.124:22-139.178.89.65:44378.service - OpenSSH per-connection server daemon (139.178.89.65:44378). Apr 30 13:06:13.299199 sshd[4344]: Accepted publickey for core from 139.178.89.65 port 44378 ssh2: RSA SHA256:qidWeGQ/AMu2DEHjNgm4r7KCFn+EUn2ITyolPPgrSbA Apr 30 13:06:13.302292 sshd-session[4344]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 13:06:13.309337 systemd-logind[1477]: New session 14 of user core. Apr 30 13:06:13.315247 systemd[1]: Started session-14.scope - Session 14 of User core. Apr 30 13:06:14.062443 sshd[4346]: Connection closed by 139.178.89.65 port 44378 Apr 30 13:06:14.061803 sshd-session[4344]: pam_unix(sshd:session): session closed for user core Apr 30 13:06:14.067491 systemd-logind[1477]: Session 14 logged out. Waiting for processes to exit. Apr 30 13:06:14.068447 systemd[1]: sshd@13-91.99.82.124:22-139.178.89.65:44378.service: Deactivated successfully. Apr 30 13:06:14.071378 systemd[1]: session-14.scope: Deactivated successfully. Apr 30 13:06:14.072576 systemd-logind[1477]: Removed session 14. Apr 30 13:06:14.236405 systemd[1]: Started sshd@14-91.99.82.124:22-139.178.89.65:44388.service - OpenSSH per-connection server daemon (139.178.89.65:44388). 
Apr 30 13:06:15.214659 sshd[4357]: Accepted publickey for core from 139.178.89.65 port 44388 ssh2: RSA SHA256:qidWeGQ/AMu2DEHjNgm4r7KCFn+EUn2ITyolPPgrSbA Apr 30 13:06:15.217183 sshd-session[4357]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 13:06:15.222752 systemd-logind[1477]: New session 15 of user core. Apr 30 13:06:15.233371 systemd[1]: Started session-15.scope - Session 15 of User core. Apr 30 13:06:16.021567 sshd[4359]: Connection closed by 139.178.89.65 port 44388 Apr 30 13:06:16.021384 sshd-session[4357]: pam_unix(sshd:session): session closed for user core Apr 30 13:06:16.027627 systemd[1]: sshd@14-91.99.82.124:22-139.178.89.65:44388.service: Deactivated successfully. Apr 30 13:06:16.032535 systemd[1]: session-15.scope: Deactivated successfully. Apr 30 13:06:16.033626 systemd-logind[1477]: Session 15 logged out. Waiting for processes to exit. Apr 30 13:06:16.034847 systemd-logind[1477]: Removed session 15. Apr 30 13:06:16.200603 systemd[1]: Started sshd@15-91.99.82.124:22-139.178.89.65:44390.service - OpenSSH per-connection server daemon (139.178.89.65:44390). Apr 30 13:06:17.205250 sshd[4371]: Accepted publickey for core from 139.178.89.65 port 44390 ssh2: RSA SHA256:qidWeGQ/AMu2DEHjNgm4r7KCFn+EUn2ITyolPPgrSbA Apr 30 13:06:17.208086 sshd-session[4371]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 13:06:17.213771 systemd-logind[1477]: New session 16 of user core. Apr 30 13:06:17.218269 systemd[1]: Started session-16.scope - Session 16 of User core. Apr 30 13:06:19.287475 sshd[4373]: Connection closed by 139.178.89.65 port 44390 Apr 30 13:06:19.288441 sshd-session[4371]: pam_unix(sshd:session): session closed for user core Apr 30 13:06:19.295228 systemd[1]: sshd@15-91.99.82.124:22-139.178.89.65:44390.service: Deactivated successfully. Apr 30 13:06:19.299694 systemd[1]: session-16.scope: Deactivated successfully. Apr 30 13:06:19.300662 systemd-logind[1477]: Session 16 logged out. 
Waiting for processes to exit. Apr 30 13:06:19.302106 systemd-logind[1477]: Removed session 16. Apr 30 13:06:19.471506 systemd[1]: Started sshd@16-91.99.82.124:22-139.178.89.65:48914.service - OpenSSH per-connection server daemon (139.178.89.65:48914). Apr 30 13:06:20.459351 sshd[4390]: Accepted publickey for core from 139.178.89.65 port 48914 ssh2: RSA SHA256:qidWeGQ/AMu2DEHjNgm4r7KCFn+EUn2ITyolPPgrSbA Apr 30 13:06:20.460909 sshd-session[4390]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 13:06:20.467065 systemd-logind[1477]: New session 17 of user core. Apr 30 13:06:20.473470 systemd[1]: Started session-17.scope - Session 17 of User core. Apr 30 13:06:21.360409 sshd[4392]: Connection closed by 139.178.89.65 port 48914 Apr 30 13:06:21.360237 sshd-session[4390]: pam_unix(sshd:session): session closed for user core Apr 30 13:06:21.365503 systemd[1]: sshd@16-91.99.82.124:22-139.178.89.65:48914.service: Deactivated successfully. Apr 30 13:06:21.368671 systemd[1]: session-17.scope: Deactivated successfully. Apr 30 13:06:21.373320 systemd-logind[1477]: Session 17 logged out. Waiting for processes to exit. Apr 30 13:06:21.375031 systemd-logind[1477]: Removed session 17. Apr 30 13:06:21.537468 systemd[1]: Started sshd@17-91.99.82.124:22-139.178.89.65:48926.service - OpenSSH per-connection server daemon (139.178.89.65:48926). Apr 30 13:06:22.524618 sshd[4402]: Accepted publickey for core from 139.178.89.65 port 48926 ssh2: RSA SHA256:qidWeGQ/AMu2DEHjNgm4r7KCFn+EUn2ITyolPPgrSbA Apr 30 13:06:22.527166 sshd-session[4402]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 13:06:22.533179 systemd-logind[1477]: New session 18 of user core. Apr 30 13:06:22.541274 systemd[1]: Started session-18.scope - Session 18 of User core. 
Apr 30 13:06:23.297567 sshd[4404]: Connection closed by 139.178.89.65 port 48926 Apr 30 13:06:23.297416 sshd-session[4402]: pam_unix(sshd:session): session closed for user core Apr 30 13:06:23.304802 systemd[1]: sshd@17-91.99.82.124:22-139.178.89.65:48926.service: Deactivated successfully. Apr 30 13:06:23.307986 systemd[1]: session-18.scope: Deactivated successfully. Apr 30 13:06:23.309725 systemd-logind[1477]: Session 18 logged out. Waiting for processes to exit. Apr 30 13:06:23.311733 systemd-logind[1477]: Removed session 18. Apr 30 13:06:28.477427 systemd[1]: Started sshd@18-91.99.82.124:22-139.178.89.65:57968.service - OpenSSH per-connection server daemon (139.178.89.65:57968). Apr 30 13:06:29.462384 sshd[4418]: Accepted publickey for core from 139.178.89.65 port 57968 ssh2: RSA SHA256:qidWeGQ/AMu2DEHjNgm4r7KCFn+EUn2ITyolPPgrSbA Apr 30 13:06:29.464538 sshd-session[4418]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 13:06:29.470908 systemd-logind[1477]: New session 19 of user core. Apr 30 13:06:29.475332 systemd[1]: Started session-19.scope - Session 19 of User core. Apr 30 13:06:30.214334 sshd[4420]: Connection closed by 139.178.89.65 port 57968 Apr 30 13:06:30.215261 sshd-session[4418]: pam_unix(sshd:session): session closed for user core Apr 30 13:06:30.221806 systemd-logind[1477]: Session 19 logged out. Waiting for processes to exit. Apr 30 13:06:30.222304 systemd[1]: sshd@18-91.99.82.124:22-139.178.89.65:57968.service: Deactivated successfully. Apr 30 13:06:30.225835 systemd[1]: session-19.scope: Deactivated successfully. Apr 30 13:06:30.227176 systemd-logind[1477]: Removed session 19. Apr 30 13:06:35.395416 systemd[1]: Started sshd@19-91.99.82.124:22-139.178.89.65:57984.service - OpenSSH per-connection server daemon (139.178.89.65:57984). 
Apr 30 13:06:36.398814 sshd[4435]: Accepted publickey for core from 139.178.89.65 port 57984 ssh2: RSA SHA256:qidWeGQ/AMu2DEHjNgm4r7KCFn+EUn2ITyolPPgrSbA Apr 30 13:06:36.401625 sshd-session[4435]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 13:06:36.407322 systemd-logind[1477]: New session 20 of user core. Apr 30 13:06:36.412292 systemd[1]: Started session-20.scope - Session 20 of User core. Apr 30 13:06:37.157397 sshd[4437]: Connection closed by 139.178.89.65 port 57984 Apr 30 13:06:37.159121 sshd-session[4435]: pam_unix(sshd:session): session closed for user core Apr 30 13:06:37.164589 systemd-logind[1477]: Session 20 logged out. Waiting for processes to exit. Apr 30 13:06:37.165162 systemd[1]: sshd@19-91.99.82.124:22-139.178.89.65:57984.service: Deactivated successfully. Apr 30 13:06:37.167943 systemd[1]: session-20.scope: Deactivated successfully. Apr 30 13:06:37.169522 systemd-logind[1477]: Removed session 20. Apr 30 13:06:37.338482 systemd[1]: Started sshd@20-91.99.82.124:22-139.178.89.65:44226.service - OpenSSH per-connection server daemon (139.178.89.65:44226). Apr 30 13:06:38.338348 sshd[4448]: Accepted publickey for core from 139.178.89.65 port 44226 ssh2: RSA SHA256:qidWeGQ/AMu2DEHjNgm4r7KCFn+EUn2ITyolPPgrSbA Apr 30 13:06:38.340574 sshd-session[4448]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 13:06:38.346546 systemd-logind[1477]: New session 21 of user core. Apr 30 13:06:38.355422 systemd[1]: Started session-21.scope - Session 21 of User core. 
Apr 30 13:06:40.885047 containerd[1503]: time="2025-04-30T13:06:40.884879902Z" level=info msg="StopContainer for \"0280f36ce75780011367104d32355f1a9499bed05262421056e48278922b59cb\" with timeout 30 (s)" Apr 30 13:06:40.888563 containerd[1503]: time="2025-04-30T13:06:40.888208416Z" level=info msg="Stop container \"0280f36ce75780011367104d32355f1a9499bed05262421056e48278922b59cb\" with signal terminated" Apr 30 13:06:40.916252 systemd[1]: cri-containerd-0280f36ce75780011367104d32355f1a9499bed05262421056e48278922b59cb.scope: Deactivated successfully. Apr 30 13:06:40.917360 containerd[1503]: time="2025-04-30T13:06:40.917121878Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Apr 30 13:06:40.931481 containerd[1503]: time="2025-04-30T13:06:40.931432941Z" level=info msg="StopContainer for \"d1bdb872f677740557ee904f6e25f6310a98b25cee3ecd4b8e8b4e0d7de3a9e6\" with timeout 2 (s)" Apr 30 13:06:40.932151 containerd[1503]: time="2025-04-30T13:06:40.931845401Z" level=info msg="Stop container \"d1bdb872f677740557ee904f6e25f6310a98b25cee3ecd4b8e8b4e0d7de3a9e6\" with signal terminated" Apr 30 13:06:40.943455 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0280f36ce75780011367104d32355f1a9499bed05262421056e48278922b59cb-rootfs.mount: Deactivated successfully. Apr 30 13:06:40.948934 systemd-networkd[1378]: lxc_health: Link DOWN Apr 30 13:06:40.948940 systemd-networkd[1378]: lxc_health: Lost carrier Apr 30 13:06:40.966138 systemd[1]: cri-containerd-d1bdb872f677740557ee904f6e25f6310a98b25cee3ecd4b8e8b4e0d7de3a9e6.scope: Deactivated successfully. Apr 30 13:06:40.966466 systemd[1]: cri-containerd-d1bdb872f677740557ee904f6e25f6310a98b25cee3ecd4b8e8b4e0d7de3a9e6.scope: Consumed 8.264s CPU time, 124M memory peak, 136K read from disk, 12.9M written to disk. 
Apr 30 13:06:40.976471 containerd[1503]: time="2025-04-30T13:06:40.976060291Z" level=info msg="shim disconnected" id=0280f36ce75780011367104d32355f1a9499bed05262421056e48278922b59cb namespace=k8s.io
Apr 30 13:06:40.976471 containerd[1503]: time="2025-04-30T13:06:40.976221859Z" level=warning msg="cleaning up after shim disconnected" id=0280f36ce75780011367104d32355f1a9499bed05262421056e48278922b59cb namespace=k8s.io
Apr 30 13:06:40.976471 containerd[1503]: time="2025-04-30T13:06:40.976233379Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 30 13:06:41.007375 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d1bdb872f677740557ee904f6e25f6310a98b25cee3ecd4b8e8b4e0d7de3a9e6-rootfs.mount: Deactivated successfully.
Apr 30 13:06:41.012094 containerd[1503]: time="2025-04-30T13:06:41.011725826Z" level=info msg="StopContainer for \"0280f36ce75780011367104d32355f1a9499bed05262421056e48278922b59cb\" returns successfully"
Apr 30 13:06:41.013143 containerd[1503]: time="2025-04-30T13:06:41.012929642Z" level=info msg="StopPodSandbox for \"026ff98986fb906c0a79c8ef81fa42d529c30b9b83fcd61ee1fbe0a28fc3734a\""
Apr 30 13:06:41.013143 containerd[1503]: time="2025-04-30T13:06:41.012976404Z" level=info msg="Container to stop \"0280f36ce75780011367104d32355f1a9499bed05262421056e48278922b59cb\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Apr 30 13:06:41.016636 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-026ff98986fb906c0a79c8ef81fa42d529c30b9b83fcd61ee1fbe0a28fc3734a-shm.mount: Deactivated successfully.
Apr 30 13:06:41.019793 containerd[1503]: time="2025-04-30T13:06:41.018959282Z" level=info msg="shim disconnected" id=d1bdb872f677740557ee904f6e25f6310a98b25cee3ecd4b8e8b4e0d7de3a9e6 namespace=k8s.io
Apr 30 13:06:41.019793 containerd[1503]: time="2025-04-30T13:06:41.019608552Z" level=warning msg="cleaning up after shim disconnected" id=d1bdb872f677740557ee904f6e25f6310a98b25cee3ecd4b8e8b4e0d7de3a9e6 namespace=k8s.io
Apr 30 13:06:41.019793 containerd[1503]: time="2025-04-30T13:06:41.019623913Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 30 13:06:41.026862 systemd[1]: cri-containerd-026ff98986fb906c0a79c8ef81fa42d529c30b9b83fcd61ee1fbe0a28fc3734a.scope: Deactivated successfully.
Apr 30 13:06:41.034601 containerd[1503]: time="2025-04-30T13:06:41.034549566Z" level=warning msg="cleanup warnings time=\"2025-04-30T13:06:41Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Apr 30 13:06:41.041891 containerd[1503]: time="2025-04-30T13:06:41.041475408Z" level=info msg="StopContainer for \"d1bdb872f677740557ee904f6e25f6310a98b25cee3ecd4b8e8b4e0d7de3a9e6\" returns successfully"
Apr 30 13:06:41.042379 containerd[1503]: time="2025-04-30T13:06:41.042117717Z" level=info msg="StopPodSandbox for \"3b9fa696465c21eca30bd5f8b9639f7a89c202c6b95f02e4df29ecabf86168ad\""
Apr 30 13:06:41.042379 containerd[1503]: time="2025-04-30T13:06:41.042157359Z" level=info msg="Container to stop \"37c0dce39d8572be8274e95eb9d03423183a5b17b65f43f3e773b0b8cc37f9a9\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Apr 30 13:06:41.042379 containerd[1503]: time="2025-04-30T13:06:41.042168920Z" level=info msg="Container to stop \"ed33cf4392c54f73b83456331ed097ca4c26fec52839ee37acb60dbcd9a7338a\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Apr 30 13:06:41.042379 containerd[1503]: time="2025-04-30T13:06:41.042179000Z" level=info msg="Container to stop \"d1bdb872f677740557ee904f6e25f6310a98b25cee3ecd4b8e8b4e0d7de3a9e6\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Apr 30 13:06:41.042379 containerd[1503]: time="2025-04-30T13:06:41.042188161Z" level=info msg="Container to stop \"57e2dbfc433cd2bd6bced76d6a06a0cdcef8f3e0c9a56c16c24ee8903599f134\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Apr 30 13:06:41.042379 containerd[1503]: time="2025-04-30T13:06:41.042196001Z" level=info msg="Container to stop \"6fddddaf246c03b1e2804b52ef88e5e021bdb3990bcbacba154bb6d3c918be5d\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Apr 30 13:06:41.045224 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-3b9fa696465c21eca30bd5f8b9639f7a89c202c6b95f02e4df29ecabf86168ad-shm.mount: Deactivated successfully.
Apr 30 13:06:41.064915 systemd[1]: cri-containerd-3b9fa696465c21eca30bd5f8b9639f7a89c202c6b95f02e4df29ecabf86168ad.scope: Deactivated successfully.
Apr 30 13:06:41.081138 containerd[1503]: time="2025-04-30T13:06:41.080775312Z" level=info msg="shim disconnected" id=026ff98986fb906c0a79c8ef81fa42d529c30b9b83fcd61ee1fbe0a28fc3734a namespace=k8s.io
Apr 30 13:06:41.082139 containerd[1503]: time="2025-04-30T13:06:41.082094133Z" level=warning msg="cleaning up after shim disconnected" id=026ff98986fb906c0a79c8ef81fa42d529c30b9b83fcd61ee1fbe0a28fc3734a namespace=k8s.io
Apr 30 13:06:41.082139 containerd[1503]: time="2025-04-30T13:06:41.082127215Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 30 13:06:41.097964 containerd[1503]: time="2025-04-30T13:06:41.097892827Z" level=info msg="shim disconnected" id=3b9fa696465c21eca30bd5f8b9639f7a89c202c6b95f02e4df29ecabf86168ad namespace=k8s.io
Apr 30 13:06:41.097964 containerd[1503]: time="2025-04-30T13:06:41.097957030Z" level=warning msg="cleaning up after shim disconnected" id=3b9fa696465c21eca30bd5f8b9639f7a89c202c6b95f02e4df29ecabf86168ad namespace=k8s.io
Apr 30 13:06:41.097964 containerd[1503]: time="2025-04-30T13:06:41.097967190Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 30 13:06:41.105339 containerd[1503]: time="2025-04-30T13:06:41.105294291Z" level=info msg="TearDown network for sandbox \"026ff98986fb906c0a79c8ef81fa42d529c30b9b83fcd61ee1fbe0a28fc3734a\" successfully"
Apr 30 13:06:41.105339 containerd[1503]: time="2025-04-30T13:06:41.105329092Z" level=info msg="StopPodSandbox for \"026ff98986fb906c0a79c8ef81fa42d529c30b9b83fcd61ee1fbe0a28fc3734a\" returns successfully"
Apr 30 13:06:41.119171 containerd[1503]: time="2025-04-30T13:06:41.117812152Z" level=warning msg="cleanup warnings time=\"2025-04-30T13:06:41Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Apr 30 13:06:41.120982 containerd[1503]: time="2025-04-30T13:06:41.120945577Z" level=info msg="TearDown network for sandbox \"3b9fa696465c21eca30bd5f8b9639f7a89c202c6b95f02e4df29ecabf86168ad\" successfully"
Apr 30 13:06:41.121156 containerd[1503]: time="2025-04-30T13:06:41.121139746Z" level=info msg="StopPodSandbox for \"3b9fa696465c21eca30bd5f8b9639f7a89c202c6b95f02e4df29ecabf86168ad\" returns successfully"
Apr 30 13:06:41.213588 kubelet[2839]: I0430 13:06:41.213525 2839 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/100e574d-198a-4b95-a363-bc8b3a576912-cilium-config-path\") pod \"100e574d-198a-4b95-a363-bc8b3a576912\" (UID: \"100e574d-198a-4b95-a363-bc8b3a576912\") "
Apr 30 13:06:41.213588 kubelet[2839]: I0430 13:06:41.213594 2839 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b30ccadd-3850-43d9-83ca-8074998b853e-host-proc-sys-net\") pod \"b30ccadd-3850-43d9-83ca-8074998b853e\" (UID: \"b30ccadd-3850-43d9-83ca-8074998b853e\") "
Apr 30 13:06:41.215732 kubelet[2839]: I0430 13:06:41.213631 2839 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b30ccadd-3850-43d9-83ca-8074998b853e-cilium-run\") pod \"b30ccadd-3850-43d9-83ca-8074998b853e\" (UID: \"b30ccadd-3850-43d9-83ca-8074998b853e\") "
Apr 30 13:06:41.215732 kubelet[2839]: I0430 13:06:41.213727 2839 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-svf74\" (UniqueName: \"kubernetes.io/projected/b30ccadd-3850-43d9-83ca-8074998b853e-kube-api-access-svf74\") pod \"b30ccadd-3850-43d9-83ca-8074998b853e\" (UID: \"b30ccadd-3850-43d9-83ca-8074998b853e\") "
Apr 30 13:06:41.215732 kubelet[2839]: I0430 13:06:41.213773 2839 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b30ccadd-3850-43d9-83ca-8074998b853e-clustermesh-secrets\") pod \"b30ccadd-3850-43d9-83ca-8074998b853e\" (UID: \"b30ccadd-3850-43d9-83ca-8074998b853e\") "
Apr 30 13:06:41.215732 kubelet[2839]: I0430 13:06:41.213866 2839 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b30ccadd-3850-43d9-83ca-8074998b853e-hubble-tls\") pod \"b30ccadd-3850-43d9-83ca-8074998b853e\" (UID: \"b30ccadd-3850-43d9-83ca-8074998b853e\") "
Apr 30 13:06:41.215732 kubelet[2839]: I0430 13:06:41.213898 2839 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b30ccadd-3850-43d9-83ca-8074998b853e-hostproc\") pod \"b30ccadd-3850-43d9-83ca-8074998b853e\" (UID: \"b30ccadd-3850-43d9-83ca-8074998b853e\") "
Apr 30 13:06:41.215732 kubelet[2839]: I0430 13:06:41.213932 2839 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b30ccadd-3850-43d9-83ca-8074998b853e-bpf-maps\") pod \"b30ccadd-3850-43d9-83ca-8074998b853e\" (UID: \"b30ccadd-3850-43d9-83ca-8074998b853e\") "
Apr 30 13:06:41.216764 kubelet[2839]: I0430 13:06:41.214002 2839 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b30ccadd-3850-43d9-83ca-8074998b853e-lib-modules\") pod \"b30ccadd-3850-43d9-83ca-8074998b853e\" (UID: \"b30ccadd-3850-43d9-83ca-8074998b853e\") "
Apr 30 13:06:41.216764 kubelet[2839]: I0430 13:06:41.214058 2839 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b30ccadd-3850-43d9-83ca-8074998b853e-xtables-lock\") pod \"b30ccadd-3850-43d9-83ca-8074998b853e\" (UID: \"b30ccadd-3850-43d9-83ca-8074998b853e\") "
Apr 30 13:06:41.216764 kubelet[2839]: I0430 13:06:41.214109 2839 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b30ccadd-3850-43d9-83ca-8074998b853e-host-proc-sys-kernel\") pod \"b30ccadd-3850-43d9-83ca-8074998b853e\" (UID: \"b30ccadd-3850-43d9-83ca-8074998b853e\") "
Apr 30 13:06:41.216764 kubelet[2839]: I0430 13:06:41.214188 2839 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b30ccadd-3850-43d9-83ca-8074998b853e-cilium-config-path\") pod \"b30ccadd-3850-43d9-83ca-8074998b853e\" (UID: \"b30ccadd-3850-43d9-83ca-8074998b853e\") "
Apr 30 13:06:41.216764 kubelet[2839]: I0430 13:06:41.214223 2839 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b30ccadd-3850-43d9-83ca-8074998b853e-etc-cni-netd\") pod \"b30ccadd-3850-43d9-83ca-8074998b853e\" (UID: \"b30ccadd-3850-43d9-83ca-8074998b853e\") "
Apr 30 13:06:41.216764 kubelet[2839]: I0430 13:06:41.214260 2839 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g4z4q\" (UniqueName: \"kubernetes.io/projected/100e574d-198a-4b95-a363-bc8b3a576912-kube-api-access-g4z4q\") pod \"100e574d-198a-4b95-a363-bc8b3a576912\" (UID: \"100e574d-198a-4b95-a363-bc8b3a576912\") "
Apr 30 13:06:41.218352 kubelet[2839]: I0430 13:06:41.214291 2839 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b30ccadd-3850-43d9-83ca-8074998b853e-cilium-cgroup\") pod \"b30ccadd-3850-43d9-83ca-8074998b853e\" (UID: \"b30ccadd-3850-43d9-83ca-8074998b853e\") "
Apr 30 13:06:41.218352 kubelet[2839]: I0430 13:06:41.214321 2839 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b30ccadd-3850-43d9-83ca-8074998b853e-cni-path\") pod \"b30ccadd-3850-43d9-83ca-8074998b853e\" (UID: \"b30ccadd-3850-43d9-83ca-8074998b853e\") "
Apr 30 13:06:41.218352 kubelet[2839]: I0430 13:06:41.214441 2839 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b30ccadd-3850-43d9-83ca-8074998b853e-cni-path" (OuterVolumeSpecName: "cni-path") pod "b30ccadd-3850-43d9-83ca-8074998b853e" (UID: "b30ccadd-3850-43d9-83ca-8074998b853e"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Apr 30 13:06:41.218352 kubelet[2839]: I0430 13:06:41.215203 2839 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b30ccadd-3850-43d9-83ca-8074998b853e-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "b30ccadd-3850-43d9-83ca-8074998b853e" (UID: "b30ccadd-3850-43d9-83ca-8074998b853e"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Apr 30 13:06:41.218352 kubelet[2839]: I0430 13:06:41.215275 2839 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b30ccadd-3850-43d9-83ca-8074998b853e-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "b30ccadd-3850-43d9-83ca-8074998b853e" (UID: "b30ccadd-3850-43d9-83ca-8074998b853e"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Apr 30 13:06:41.219317 kubelet[2839]: I0430 13:06:41.215311 2839 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b30ccadd-3850-43d9-83ca-8074998b853e-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "b30ccadd-3850-43d9-83ca-8074998b853e" (UID: "b30ccadd-3850-43d9-83ca-8074998b853e"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Apr 30 13:06:41.219317 kubelet[2839]: I0430 13:06:41.216049 2839 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b30ccadd-3850-43d9-83ca-8074998b853e-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "b30ccadd-3850-43d9-83ca-8074998b853e" (UID: "b30ccadd-3850-43d9-83ca-8074998b853e"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Apr 30 13:06:41.219317 kubelet[2839]: I0430 13:06:41.216109 2839 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b30ccadd-3850-43d9-83ca-8074998b853e-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "b30ccadd-3850-43d9-83ca-8074998b853e" (UID: "b30ccadd-3850-43d9-83ca-8074998b853e"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Apr 30 13:06:41.219317 kubelet[2839]: I0430 13:06:41.216142 2839 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b30ccadd-3850-43d9-83ca-8074998b853e-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "b30ccadd-3850-43d9-83ca-8074998b853e" (UID: "b30ccadd-3850-43d9-83ca-8074998b853e"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Apr 30 13:06:41.220128 kubelet[2839]: I0430 13:06:41.219647 2839 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b30ccadd-3850-43d9-83ca-8074998b853e-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "b30ccadd-3850-43d9-83ca-8074998b853e" (UID: "b30ccadd-3850-43d9-83ca-8074998b853e"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Apr 30 13:06:41.220128 kubelet[2839]: I0430 13:06:41.219731 2839 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b30ccadd-3850-43d9-83ca-8074998b853e-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "b30ccadd-3850-43d9-83ca-8074998b853e" (UID: "b30ccadd-3850-43d9-83ca-8074998b853e"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Apr 30 13:06:41.220128 kubelet[2839]: I0430 13:06:41.219990 2839 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b30ccadd-3850-43d9-83ca-8074998b853e-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "b30ccadd-3850-43d9-83ca-8074998b853e" (UID: "b30ccadd-3850-43d9-83ca-8074998b853e"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Apr 30 13:06:41.220838 kubelet[2839]: I0430 13:06:41.220646 2839 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b30ccadd-3850-43d9-83ca-8074998b853e-hostproc" (OuterVolumeSpecName: "hostproc") pod "b30ccadd-3850-43d9-83ca-8074998b853e" (UID: "b30ccadd-3850-43d9-83ca-8074998b853e"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Apr 30 13:06:41.225345 kubelet[2839]: I0430 13:06:41.225315 2839 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b30ccadd-3850-43d9-83ca-8074998b853e-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "b30ccadd-3850-43d9-83ca-8074998b853e" (UID: "b30ccadd-3850-43d9-83ca-8074998b853e"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Apr 30 13:06:41.226557 kubelet[2839]: I0430 13:06:41.226527 2839 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b30ccadd-3850-43d9-83ca-8074998b853e-kube-api-access-svf74" (OuterVolumeSpecName: "kube-api-access-svf74") pod "b30ccadd-3850-43d9-83ca-8074998b853e" (UID: "b30ccadd-3850-43d9-83ca-8074998b853e"). InnerVolumeSpecName "kube-api-access-svf74". PluginName "kubernetes.io/projected", VolumeGidValue ""
Apr 30 13:06:41.227176 kubelet[2839]: I0430 13:06:41.226911 2839 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/100e574d-198a-4b95-a363-bc8b3a576912-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "100e574d-198a-4b95-a363-bc8b3a576912" (UID: "100e574d-198a-4b95-a363-bc8b3a576912"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Apr 30 13:06:41.227305 kubelet[2839]: I0430 13:06:41.227275 2839 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b30ccadd-3850-43d9-83ca-8074998b853e-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "b30ccadd-3850-43d9-83ca-8074998b853e" (UID: "b30ccadd-3850-43d9-83ca-8074998b853e"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Apr 30 13:06:41.227845 kubelet[2839]: I0430 13:06:41.227745 2839 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/100e574d-198a-4b95-a363-bc8b3a576912-kube-api-access-g4z4q" (OuterVolumeSpecName: "kube-api-access-g4z4q") pod "100e574d-198a-4b95-a363-bc8b3a576912" (UID: "100e574d-198a-4b95-a363-bc8b3a576912"). InnerVolumeSpecName "kube-api-access-g4z4q". PluginName "kubernetes.io/projected", VolumeGidValue ""
Apr 30 13:06:41.272418 kubelet[2839]: E0430 13:06:41.272363 2839 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 30 13:06:41.314864 kubelet[2839]: I0430 13:06:41.314817 2839 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b30ccadd-3850-43d9-83ca-8074998b853e-host-proc-sys-net\") on node \"ci-4230-1-1-f-bd31e1b44e\" DevicePath \"\""
Apr 30 13:06:41.314864 kubelet[2839]: I0430 13:06:41.314861 2839 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/100e574d-198a-4b95-a363-bc8b3a576912-cilium-config-path\") on node \"ci-4230-1-1-f-bd31e1b44e\" DevicePath \"\""
Apr 30 13:06:41.314864 kubelet[2839]: I0430 13:06:41.314877 2839 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b30ccadd-3850-43d9-83ca-8074998b853e-cilium-run\") on node \"ci-4230-1-1-f-bd31e1b44e\" DevicePath \"\""
Apr 30 13:06:41.315162 kubelet[2839]: I0430 13:06:41.314891 2839 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-svf74\" (UniqueName: \"kubernetes.io/projected/b30ccadd-3850-43d9-83ca-8074998b853e-kube-api-access-svf74\") on node \"ci-4230-1-1-f-bd31e1b44e\" DevicePath \"\""
Apr 30 13:06:41.315162 kubelet[2839]: I0430 13:06:41.314905 2839 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b30ccadd-3850-43d9-83ca-8074998b853e-clustermesh-secrets\") on node \"ci-4230-1-1-f-bd31e1b44e\" DevicePath \"\""
Apr 30 13:06:41.315162 kubelet[2839]: I0430 13:06:41.314916 2839 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b30ccadd-3850-43d9-83ca-8074998b853e-hubble-tls\") on node \"ci-4230-1-1-f-bd31e1b44e\" DevicePath \"\""
Apr 30 13:06:41.315162 kubelet[2839]: I0430 13:06:41.314926 2839 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b30ccadd-3850-43d9-83ca-8074998b853e-hostproc\") on node \"ci-4230-1-1-f-bd31e1b44e\" DevicePath \"\""
Apr 30 13:06:41.315162 kubelet[2839]: I0430 13:06:41.314937 2839 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b30ccadd-3850-43d9-83ca-8074998b853e-bpf-maps\") on node \"ci-4230-1-1-f-bd31e1b44e\" DevicePath \"\""
Apr 30 13:06:41.315162 kubelet[2839]: I0430 13:06:41.314949 2839 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b30ccadd-3850-43d9-83ca-8074998b853e-lib-modules\") on node \"ci-4230-1-1-f-bd31e1b44e\" DevicePath \"\""
Apr 30 13:06:41.315162 kubelet[2839]: I0430 13:06:41.314960 2839 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b30ccadd-3850-43d9-83ca-8074998b853e-xtables-lock\") on node \"ci-4230-1-1-f-bd31e1b44e\" DevicePath \"\""
Apr 30 13:06:41.315162 kubelet[2839]: I0430 13:06:41.314970 2839 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b30ccadd-3850-43d9-83ca-8074998b853e-host-proc-sys-kernel\") on node \"ci-4230-1-1-f-bd31e1b44e\" DevicePath \"\""
Apr 30 13:06:41.315482 kubelet[2839]: I0430 13:06:41.314980 2839 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b30ccadd-3850-43d9-83ca-8074998b853e-cilium-config-path\") on node \"ci-4230-1-1-f-bd31e1b44e\" DevicePath \"\""
Apr 30 13:06:41.315482 kubelet[2839]: I0430 13:06:41.314991 2839 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b30ccadd-3850-43d9-83ca-8074998b853e-etc-cni-netd\") on node \"ci-4230-1-1-f-bd31e1b44e\" DevicePath \"\""
Apr 30 13:06:41.315482 kubelet[2839]: I0430 13:06:41.315005 2839 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-g4z4q\" (UniqueName: \"kubernetes.io/projected/100e574d-198a-4b95-a363-bc8b3a576912-kube-api-access-g4z4q\") on node \"ci-4230-1-1-f-bd31e1b44e\" DevicePath \"\""
Apr 30 13:06:41.315482 kubelet[2839]: I0430 13:06:41.315038 2839 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b30ccadd-3850-43d9-83ca-8074998b853e-cilium-cgroup\") on node \"ci-4230-1-1-f-bd31e1b44e\" DevicePath \"\""
Apr 30 13:06:41.315482 kubelet[2839]: I0430 13:06:41.315049 2839 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b30ccadd-3850-43d9-83ca-8074998b853e-cni-path\") on node \"ci-4230-1-1-f-bd31e1b44e\" DevicePath \"\""
Apr 30 13:06:41.877280 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-026ff98986fb906c0a79c8ef81fa42d529c30b9b83fcd61ee1fbe0a28fc3734a-rootfs.mount: Deactivated successfully.
Apr 30 13:06:41.877418 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3b9fa696465c21eca30bd5f8b9639f7a89c202c6b95f02e4df29ecabf86168ad-rootfs.mount: Deactivated successfully.
Apr 30 13:06:41.877471 systemd[1]: var-lib-kubelet-pods-b30ccadd\x2d3850\x2d43d9\x2d83ca\x2d8074998b853e-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Apr 30 13:06:41.877533 systemd[1]: var-lib-kubelet-pods-b30ccadd\x2d3850\x2d43d9\x2d83ca\x2d8074998b853e-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Apr 30 13:06:41.877596 systemd[1]: var-lib-kubelet-pods-100e574d\x2d198a\x2d4b95\x2da363\x2dbc8b3a576912-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dg4z4q.mount: Deactivated successfully.
Apr 30 13:06:41.877653 systemd[1]: var-lib-kubelet-pods-b30ccadd\x2d3850\x2d43d9\x2d83ca\x2d8074998b853e-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dsvf74.mount: Deactivated successfully.
Apr 30 13:06:42.066950 kubelet[2839]: I0430 13:06:42.066918 2839 scope.go:117] "RemoveContainer" containerID="0280f36ce75780011367104d32355f1a9499bed05262421056e48278922b59cb"
Apr 30 13:06:42.069673 containerd[1503]: time="2025-04-30T13:06:42.069281730Z" level=info msg="RemoveContainer for \"0280f36ce75780011367104d32355f1a9499bed05262421056e48278922b59cb\""
Apr 30 13:06:42.081053 containerd[1503]: time="2025-04-30T13:06:42.080784945Z" level=info msg="RemoveContainer for \"0280f36ce75780011367104d32355f1a9499bed05262421056e48278922b59cb\" returns successfully"
Apr 30 13:06:42.081321 systemd[1]: Removed slice kubepods-besteffort-pod100e574d_198a_4b95_a363_bc8b3a576912.slice - libcontainer container kubepods-besteffort-pod100e574d_198a_4b95_a363_bc8b3a576912.slice.
Apr 30 13:06:42.085243 systemd[1]: Removed slice kubepods-burstable-podb30ccadd_3850_43d9_83ca_8074998b853e.slice - libcontainer container kubepods-burstable-podb30ccadd_3850_43d9_83ca_8074998b853e.slice.
Apr 30 13:06:42.085757 kubelet[2839]: I0430 13:06:42.085724 2839 scope.go:117] "RemoveContainer" containerID="d1bdb872f677740557ee904f6e25f6310a98b25cee3ecd4b8e8b4e0d7de3a9e6"
Apr 30 13:06:42.085961 systemd[1]: kubepods-burstable-podb30ccadd_3850_43d9_83ca_8074998b853e.slice: Consumed 8.357s CPU time, 124.4M memory peak, 136K read from disk, 12.9M written to disk.
Apr 30 13:06:42.091852 containerd[1503]: time="2025-04-30T13:06:42.091668371Z" level=info msg="RemoveContainer for \"d1bdb872f677740557ee904f6e25f6310a98b25cee3ecd4b8e8b4e0d7de3a9e6\""
Apr 30 13:06:42.095956 containerd[1503]: time="2025-04-30T13:06:42.095812603Z" level=info msg="RemoveContainer for \"d1bdb872f677740557ee904f6e25f6310a98b25cee3ecd4b8e8b4e0d7de3a9e6\" returns successfully"
Apr 30 13:06:42.096169 kubelet[2839]: I0430 13:06:42.096148 2839 scope.go:117] "RemoveContainer" containerID="ed33cf4392c54f73b83456331ed097ca4c26fec52839ee37acb60dbcd9a7338a"
Apr 30 13:06:42.098535 containerd[1503]: time="2025-04-30T13:06:42.098493208Z" level=info msg="RemoveContainer for \"ed33cf4392c54f73b83456331ed097ca4c26fec52839ee37acb60dbcd9a7338a\""
Apr 30 13:06:42.101306 containerd[1503]: time="2025-04-30T13:06:42.101270217Z" level=info msg="RemoveContainer for \"ed33cf4392c54f73b83456331ed097ca4c26fec52839ee37acb60dbcd9a7338a\" returns successfully"
Apr 30 13:06:42.101470 kubelet[2839]: I0430 13:06:42.101447 2839 scope.go:117] "RemoveContainer" containerID="6fddddaf246c03b1e2804b52ef88e5e021bdb3990bcbacba154bb6d3c918be5d"
Apr 30 13:06:42.103029 containerd[1503]: time="2025-04-30T13:06:42.102955055Z" level=info msg="RemoveContainer for \"6fddddaf246c03b1e2804b52ef88e5e021bdb3990bcbacba154bb6d3c918be5d\""
Apr 30 13:06:42.105520 containerd[1503]: time="2025-04-30T13:06:42.105472452Z" level=info msg="RemoveContainer for \"6fddddaf246c03b1e2804b52ef88e5e021bdb3990bcbacba154bb6d3c918be5d\" returns successfully"
Apr 30 13:06:42.105672 kubelet[2839]: I0430 13:06:42.105651 2839 scope.go:117] "RemoveContainer" containerID="37c0dce39d8572be8274e95eb9d03423183a5b17b65f43f3e773b0b8cc37f9a9"
Apr 30 13:06:42.108061 containerd[1503]: time="2025-04-30T13:06:42.106770433Z" level=info msg="RemoveContainer for \"37c0dce39d8572be8274e95eb9d03423183a5b17b65f43f3e773b0b8cc37f9a9\""
Apr 30 13:06:42.114301 containerd[1503]: time="2025-04-30T13:06:42.113771038Z" level=info msg="RemoveContainer for \"37c0dce39d8572be8274e95eb9d03423183a5b17b65f43f3e773b0b8cc37f9a9\" returns successfully"
Apr 30 13:06:42.114426 kubelet[2839]: I0430 13:06:42.114247 2839 scope.go:117] "RemoveContainer" containerID="57e2dbfc433cd2bd6bced76d6a06a0cdcef8f3e0c9a56c16c24ee8903599f134"
Apr 30 13:06:42.116555 containerd[1503]: time="2025-04-30T13:06:42.116512445Z" level=info msg="RemoveContainer for \"57e2dbfc433cd2bd6bced76d6a06a0cdcef8f3e0c9a56c16c24ee8903599f134\""
Apr 30 13:06:42.119932 containerd[1503]: time="2025-04-30T13:06:42.119889562Z" level=info msg="RemoveContainer for \"57e2dbfc433cd2bd6bced76d6a06a0cdcef8f3e0c9a56c16c24ee8903599f134\" returns successfully"
Apr 30 13:06:42.959096 sshd[4450]: Connection closed by 139.178.89.65 port 44226
Apr 30 13:06:42.960135 sshd-session[4448]: pam_unix(sshd:session): session closed for user core
Apr 30 13:06:42.964962 systemd[1]: sshd@20-91.99.82.124:22-139.178.89.65:44226.service: Deactivated successfully.
Apr 30 13:06:42.968214 systemd[1]: session-21.scope: Deactivated successfully.
Apr 30 13:06:42.968576 systemd[1]: session-21.scope: Consumed 1.377s CPU time, 23.6M memory peak.
Apr 30 13:06:42.969878 systemd-logind[1477]: Session 21 logged out. Waiting for processes to exit.
Apr 30 13:06:42.971613 systemd-logind[1477]: Removed session 21.
Apr 30 13:06:43.069996 kubelet[2839]: I0430 13:06:43.069954 2839 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="100e574d-198a-4b95-a363-bc8b3a576912" path="/var/lib/kubelet/pods/100e574d-198a-4b95-a363-bc8b3a576912/volumes"
Apr 30 13:06:43.070907 kubelet[2839]: I0430 13:06:43.070873 2839 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b30ccadd-3850-43d9-83ca-8074998b853e" path="/var/lib/kubelet/pods/b30ccadd-3850-43d9-83ca-8074998b853e/volumes"
Apr 30 13:06:43.133530 systemd[1]: Started sshd@21-91.99.82.124:22-139.178.89.65:44238.service - OpenSSH per-connection server daemon (139.178.89.65:44238).
Apr 30 13:06:44.115715 sshd[4616]: Accepted publickey for core from 139.178.89.65 port 44238 ssh2: RSA SHA256:qidWeGQ/AMu2DEHjNgm4r7KCFn+EUn2ITyolPPgrSbA
Apr 30 13:06:44.117937 sshd-session[4616]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 13:06:44.124264 systemd-logind[1477]: New session 22 of user core.
Apr 30 13:06:44.129238 systemd[1]: Started session-22.scope - Session 22 of User core.
Apr 30 13:06:45.507152 kubelet[2839]: I0430 13:06:45.505898 2839 topology_manager.go:215] "Topology Admit Handler" podUID="67606d6d-aca6-4c66-945e-0591c74d3cda" podNamespace="kube-system" podName="cilium-rxpzs"
Apr 30 13:06:45.507152 kubelet[2839]: E0430 13:06:45.505966 2839 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="b30ccadd-3850-43d9-83ca-8074998b853e" containerName="cilium-agent"
Apr 30 13:06:45.507152 kubelet[2839]: E0430 13:06:45.505975 2839 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="b30ccadd-3850-43d9-83ca-8074998b853e" containerName="mount-cgroup"
Apr 30 13:06:45.507152 kubelet[2839]: E0430 13:06:45.505981 2839 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="b30ccadd-3850-43d9-83ca-8074998b853e" containerName="apply-sysctl-overwrites"
Apr 30 13:06:45.507152 kubelet[2839]: E0430 13:06:45.505988 2839 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="100e574d-198a-4b95-a363-bc8b3a576912" containerName="cilium-operator"
Apr 30 13:06:45.507152 kubelet[2839]: E0430 13:06:45.505993 2839 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="b30ccadd-3850-43d9-83ca-8074998b853e" containerName="clean-cilium-state"
Apr 30 13:06:45.507152 kubelet[2839]: E0430 13:06:45.506000 2839 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="b30ccadd-3850-43d9-83ca-8074998b853e" containerName="mount-bpf-fs"
Apr 30 13:06:45.507152 kubelet[2839]: I0430 13:06:45.506039 2839 memory_manager.go:354] "RemoveStaleState removing state" podUID="b30ccadd-3850-43d9-83ca-8074998b853e" containerName="cilium-agent"
Apr 30 13:06:45.507152 kubelet[2839]: I0430 13:06:45.506047 2839 memory_manager.go:354] "RemoveStaleState removing state" podUID="100e574d-198a-4b95-a363-bc8b3a576912" containerName="cilium-operator"
Apr 30 13:06:45.521526 systemd[1]: Created slice kubepods-burstable-pod67606d6d_aca6_4c66_945e_0591c74d3cda.slice - libcontainer container kubepods-burstable-pod67606d6d_aca6_4c66_945e_0591c74d3cda.slice.
Apr 30 13:06:45.644874 kubelet[2839]: I0430 13:06:45.644141 2839 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/67606d6d-aca6-4c66-945e-0591c74d3cda-cni-path\") pod \"cilium-rxpzs\" (UID: \"67606d6d-aca6-4c66-945e-0591c74d3cda\") " pod="kube-system/cilium-rxpzs"
Apr 30 13:06:45.644874 kubelet[2839]: I0430 13:06:45.644221 2839 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/67606d6d-aca6-4c66-945e-0591c74d3cda-lib-modules\") pod \"cilium-rxpzs\" (UID: \"67606d6d-aca6-4c66-945e-0591c74d3cda\") " pod="kube-system/cilium-rxpzs"
Apr 30 13:06:45.644874 kubelet[2839]: I0430 13:06:45.644261 2839 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/67606d6d-aca6-4c66-945e-0591c74d3cda-xtables-lock\") pod \"cilium-rxpzs\" (UID: \"67606d6d-aca6-4c66-945e-0591c74d3cda\") " pod="kube-system/cilium-rxpzs"
Apr 30 13:06:45.644874 kubelet[2839]: I0430 13:06:45.644336 2839 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/67606d6d-aca6-4c66-945e-0591c74d3cda-host-proc-sys-kernel\") pod \"cilium-rxpzs\" (UID: \"67606d6d-aca6-4c66-945e-0591c74d3cda\") " pod="kube-system/cilium-rxpzs"
Apr 30 13:06:45.644874 kubelet[2839]: I0430 13:06:45.644394 2839 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/67606d6d-aca6-4c66-945e-0591c74d3cda-cilium-config-path\") pod \"cilium-rxpzs\" (UID: \"67606d6d-aca6-4c66-945e-0591c74d3cda\") " pod="kube-system/cilium-rxpzs"
Apr 30 13:06:45.644874 kubelet[2839]: I0430 13:06:45.644429 2839 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/67606d6d-aca6-4c66-945e-0591c74d3cda-cilium-ipsec-secrets\") pod \"cilium-rxpzs\" (UID: \"67606d6d-aca6-4c66-945e-0591c74d3cda\") " pod="kube-system/cilium-rxpzs"
Apr 30 13:06:45.645267 kubelet[2839]: I0430 13:06:45.644460 2839 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/67606d6d-aca6-4c66-945e-0591c74d3cda-cilium-run\") pod \"cilium-rxpzs\" (UID: \"67606d6d-aca6-4c66-945e-0591c74d3cda\") " pod="kube-system/cilium-rxpzs"
Apr 30 13:06:45.645267 kubelet[2839]: I0430 13:06:45.644491 2839 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/67606d6d-aca6-4c66-945e-0591c74d3cda-etc-cni-netd\") pod \"cilium-rxpzs\" (UID: \"67606d6d-aca6-4c66-945e-0591c74d3cda\") " pod="kube-system/cilium-rxpzs"
Apr 30 13:06:45.645267 kubelet[2839]: I0430 13:06:45.644563 2839 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/67606d6d-aca6-4c66-945e-0591c74d3cda-hubble-tls\") pod \"cilium-rxpzs\" (UID: \"67606d6d-aca6-4c66-945e-0591c74d3cda\") " pod="kube-system/cilium-rxpzs"
Apr 30 13:06:45.645267 kubelet[2839]: I0430 13:06:45.644597 2839 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started
for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/67606d6d-aca6-4c66-945e-0591c74d3cda-cilium-cgroup\") pod \"cilium-rxpzs\" (UID: \"67606d6d-aca6-4c66-945e-0591c74d3cda\") " pod="kube-system/cilium-rxpzs" Apr 30 13:06:45.645267 kubelet[2839]: I0430 13:06:45.644633 2839 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/67606d6d-aca6-4c66-945e-0591c74d3cda-clustermesh-secrets\") pod \"cilium-rxpzs\" (UID: \"67606d6d-aca6-4c66-945e-0591c74d3cda\") " pod="kube-system/cilium-rxpzs" Apr 30 13:06:45.645267 kubelet[2839]: I0430 13:06:45.644664 2839 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4g7p4\" (UniqueName: \"kubernetes.io/projected/67606d6d-aca6-4c66-945e-0591c74d3cda-kube-api-access-4g7p4\") pod \"cilium-rxpzs\" (UID: \"67606d6d-aca6-4c66-945e-0591c74d3cda\") " pod="kube-system/cilium-rxpzs" Apr 30 13:06:45.645474 kubelet[2839]: I0430 13:06:45.644696 2839 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/67606d6d-aca6-4c66-945e-0591c74d3cda-bpf-maps\") pod \"cilium-rxpzs\" (UID: \"67606d6d-aca6-4c66-945e-0591c74d3cda\") " pod="kube-system/cilium-rxpzs" Apr 30 13:06:45.645474 kubelet[2839]: I0430 13:06:45.644726 2839 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/67606d6d-aca6-4c66-945e-0591c74d3cda-host-proc-sys-net\") pod \"cilium-rxpzs\" (UID: \"67606d6d-aca6-4c66-945e-0591c74d3cda\") " pod="kube-system/cilium-rxpzs" Apr 30 13:06:45.645474 kubelet[2839]: I0430 13:06:45.644757 2839 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/67606d6d-aca6-4c66-945e-0591c74d3cda-hostproc\") 
pod \"cilium-rxpzs\" (UID: \"67606d6d-aca6-4c66-945e-0591c74d3cda\") " pod="kube-system/cilium-rxpzs" Apr 30 13:06:45.670532 sshd[4618]: Connection closed by 139.178.89.65 port 44238 Apr 30 13:06:45.671077 sshd-session[4616]: pam_unix(sshd:session): session closed for user core Apr 30 13:06:45.678221 systemd[1]: sshd@21-91.99.82.124:22-139.178.89.65:44238.service: Deactivated successfully. Apr 30 13:06:45.682335 systemd[1]: session-22.scope: Deactivated successfully. Apr 30 13:06:45.683708 systemd-logind[1477]: Session 22 logged out. Waiting for processes to exit. Apr 30 13:06:45.685176 systemd-logind[1477]: Removed session 22. Apr 30 13:06:45.827329 containerd[1503]: time="2025-04-30T13:06:45.826610808Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-rxpzs,Uid:67606d6d-aca6-4c66-945e-0591c74d3cda,Namespace:kube-system,Attempt:0,}" Apr 30 13:06:45.852563 systemd[1]: Started sshd@22-91.99.82.124:22-139.178.89.65:44248.service - OpenSSH per-connection server daemon (139.178.89.65:44248). Apr 30 13:06:45.861904 containerd[1503]: time="2025-04-30T13:06:45.861482233Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 13:06:45.861904 containerd[1503]: time="2025-04-30T13:06:45.861620719Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 13:06:45.861904 containerd[1503]: time="2025-04-30T13:06:45.861643360Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 13:06:45.861904 containerd[1503]: time="2025-04-30T13:06:45.861785687Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 13:06:45.887290 systemd[1]: Started cri-containerd-ef7b7ea320dbfa609a9541c35c1802bd33bc1f0372a1c007bc904ce80d9aae3c.scope - libcontainer container ef7b7ea320dbfa609a9541c35c1802bd33bc1f0372a1c007bc904ce80d9aae3c. Apr 30 13:06:45.917850 containerd[1503]: time="2025-04-30T13:06:45.917788496Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-rxpzs,Uid:67606d6d-aca6-4c66-945e-0591c74d3cda,Namespace:kube-system,Attempt:0,} returns sandbox id \"ef7b7ea320dbfa609a9541c35c1802bd33bc1f0372a1c007bc904ce80d9aae3c\"" Apr 30 13:06:45.922155 containerd[1503]: time="2025-04-30T13:06:45.922086817Z" level=info msg="CreateContainer within sandbox \"ef7b7ea320dbfa609a9541c35c1802bd33bc1f0372a1c007bc904ce80d9aae3c\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Apr 30 13:06:45.936137 containerd[1503]: time="2025-04-30T13:06:45.936061108Z" level=info msg="CreateContainer within sandbox \"ef7b7ea320dbfa609a9541c35c1802bd33bc1f0372a1c007bc904ce80d9aae3c\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"65e9607c30b575ee9cbcc8a09bff3f0e11da39caede2e686e7c0790bcd3b4e7d\"" Apr 30 13:06:45.939467 containerd[1503]: time="2025-04-30T13:06:45.939393183Z" level=info msg="StartContainer for \"65e9607c30b575ee9cbcc8a09bff3f0e11da39caede2e686e7c0790bcd3b4e7d\"" Apr 30 13:06:45.973474 systemd[1]: Started cri-containerd-65e9607c30b575ee9cbcc8a09bff3f0e11da39caede2e686e7c0790bcd3b4e7d.scope - libcontainer container 65e9607c30b575ee9cbcc8a09bff3f0e11da39caede2e686e7c0790bcd3b4e7d. Apr 30 13:06:46.015261 containerd[1503]: time="2025-04-30T13:06:46.015217157Z" level=info msg="StartContainer for \"65e9607c30b575ee9cbcc8a09bff3f0e11da39caede2e686e7c0790bcd3b4e7d\" returns successfully" Apr 30 13:06:46.028336 systemd[1]: cri-containerd-65e9607c30b575ee9cbcc8a09bff3f0e11da39caede2e686e7c0790bcd3b4e7d.scope: Deactivated successfully. 
Apr 30 13:06:46.065416 containerd[1503]: time="2025-04-30T13:06:46.064830390Z" level=info msg="shim disconnected" id=65e9607c30b575ee9cbcc8a09bff3f0e11da39caede2e686e7c0790bcd3b4e7d namespace=k8s.io Apr 30 13:06:46.065416 containerd[1503]: time="2025-04-30T13:06:46.065209848Z" level=warning msg="cleaning up after shim disconnected" id=65e9607c30b575ee9cbcc8a09bff3f0e11da39caede2e686e7c0790bcd3b4e7d namespace=k8s.io Apr 30 13:06:46.065416 containerd[1503]: time="2025-04-30T13:06:46.065225609Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 13:06:46.090208 containerd[1503]: time="2025-04-30T13:06:46.089978403Z" level=info msg="CreateContainer within sandbox \"ef7b7ea320dbfa609a9541c35c1802bd33bc1f0372a1c007bc904ce80d9aae3c\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Apr 30 13:06:46.102089 containerd[1503]: time="2025-04-30T13:06:46.102036365Z" level=info msg="CreateContainer within sandbox \"ef7b7ea320dbfa609a9541c35c1802bd33bc1f0372a1c007bc904ce80d9aae3c\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"ab99d741549a2576a51fab6e87b3dfb7d3dd2195a3da682f0c21d9faa9d35b51\"" Apr 30 13:06:46.103230 containerd[1503]: time="2025-04-30T13:06:46.103206180Z" level=info msg="StartContainer for \"ab99d741549a2576a51fab6e87b3dfb7d3dd2195a3da682f0c21d9faa9d35b51\"" Apr 30 13:06:46.135334 systemd[1]: Started cri-containerd-ab99d741549a2576a51fab6e87b3dfb7d3dd2195a3da682f0c21d9faa9d35b51.scope - libcontainer container ab99d741549a2576a51fab6e87b3dfb7d3dd2195a3da682f0c21d9faa9d35b51. Apr 30 13:06:46.165430 containerd[1503]: time="2025-04-30T13:06:46.165289195Z" level=info msg="StartContainer for \"ab99d741549a2576a51fab6e87b3dfb7d3dd2195a3da682f0c21d9faa9d35b51\" returns successfully" Apr 30 13:06:46.176671 systemd[1]: cri-containerd-ab99d741549a2576a51fab6e87b3dfb7d3dd2195a3da682f0c21d9faa9d35b51.scope: Deactivated successfully. 
Apr 30 13:06:46.203134 containerd[1503]: time="2025-04-30T13:06:46.203059237Z" level=info msg="shim disconnected" id=ab99d741549a2576a51fab6e87b3dfb7d3dd2195a3da682f0c21d9faa9d35b51 namespace=k8s.io Apr 30 13:06:46.203794 containerd[1503]: time="2025-04-30T13:06:46.203465696Z" level=warning msg="cleaning up after shim disconnected" id=ab99d741549a2576a51fab6e87b3dfb7d3dd2195a3da682f0c21d9faa9d35b51 namespace=k8s.io Apr 30 13:06:46.203794 containerd[1503]: time="2025-04-30T13:06:46.203495777Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 13:06:46.274634 kubelet[2839]: E0430 13:06:46.274484 2839 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 30 13:06:46.857816 sshd[4635]: Accepted publickey for core from 139.178.89.65 port 44248 ssh2: RSA SHA256:qidWeGQ/AMu2DEHjNgm4r7KCFn+EUn2ITyolPPgrSbA Apr 30 13:06:46.860746 sshd-session[4635]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 13:06:46.866494 systemd-logind[1477]: New session 23 of user core. Apr 30 13:06:46.874353 systemd[1]: Started session-23.scope - Session 23 of User core. 
Apr 30 13:06:47.105465 containerd[1503]: time="2025-04-30T13:06:47.105397682Z" level=info msg="CreateContainer within sandbox \"ef7b7ea320dbfa609a9541c35c1802bd33bc1f0372a1c007bc904ce80d9aae3c\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Apr 30 13:06:47.127615 containerd[1503]: time="2025-04-30T13:06:47.127475393Z" level=info msg="CreateContainer within sandbox \"ef7b7ea320dbfa609a9541c35c1802bd33bc1f0372a1c007bc904ce80d9aae3c\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"12efa9751ea195b1f03ea7478e8c3aaffb625f6b9d4e216cefd9cbafc5241001\"" Apr 30 13:06:47.128710 containerd[1503]: time="2025-04-30T13:06:47.128671489Z" level=info msg="StartContainer for \"12efa9751ea195b1f03ea7478e8c3aaffb625f6b9d4e216cefd9cbafc5241001\"" Apr 30 13:06:47.166236 systemd[1]: Started cri-containerd-12efa9751ea195b1f03ea7478e8c3aaffb625f6b9d4e216cefd9cbafc5241001.scope - libcontainer container 12efa9751ea195b1f03ea7478e8c3aaffb625f6b9d4e216cefd9cbafc5241001. Apr 30 13:06:47.203693 containerd[1503]: time="2025-04-30T13:06:47.203630267Z" level=info msg="StartContainer for \"12efa9751ea195b1f03ea7478e8c3aaffb625f6b9d4e216cefd9cbafc5241001\" returns successfully" Apr 30 13:06:47.207557 systemd[1]: cri-containerd-12efa9751ea195b1f03ea7478e8c3aaffb625f6b9d4e216cefd9cbafc5241001.scope: Deactivated successfully. 
Apr 30 13:06:47.238669 containerd[1503]: time="2025-04-30T13:06:47.238003792Z" level=info msg="shim disconnected" id=12efa9751ea195b1f03ea7478e8c3aaffb625f6b9d4e216cefd9cbafc5241001 namespace=k8s.io Apr 30 13:06:47.238669 containerd[1503]: time="2025-04-30T13:06:47.238098636Z" level=warning msg="cleaning up after shim disconnected" id=12efa9751ea195b1f03ea7478e8c3aaffb625f6b9d4e216cefd9cbafc5241001 namespace=k8s.io Apr 30 13:06:47.238669 containerd[1503]: time="2025-04-30T13:06:47.238110717Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 13:06:47.546666 sshd[4803]: Connection closed by 139.178.89.65 port 44248 Apr 30 13:06:47.547798 sshd-session[4635]: pam_unix(sshd:session): session closed for user core Apr 30 13:06:47.553138 systemd[1]: sshd@22-91.99.82.124:22-139.178.89.65:44248.service: Deactivated successfully. Apr 30 13:06:47.555422 systemd[1]: session-23.scope: Deactivated successfully. Apr 30 13:06:47.557585 systemd-logind[1477]: Session 23 logged out. Waiting for processes to exit. Apr 30 13:06:47.559925 systemd-logind[1477]: Removed session 23. Apr 30 13:06:47.722605 systemd[1]: Started sshd@23-91.99.82.124:22-139.178.89.65:53576.service - OpenSSH per-connection server daemon (139.178.89.65:53576). Apr 30 13:06:47.755399 systemd[1]: run-containerd-runc-k8s.io-12efa9751ea195b1f03ea7478e8c3aaffb625f6b9d4e216cefd9cbafc5241001-runc.SK8Htq.mount: Deactivated successfully. Apr 30 13:06:47.755540 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-12efa9751ea195b1f03ea7478e8c3aaffb625f6b9d4e216cefd9cbafc5241001-rootfs.mount: Deactivated successfully. 
Apr 30 13:06:48.065471 kubelet[2839]: E0430 13:06:48.065359 2839 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7db6d8ff4d-wc9gg" podUID="68719ba3-e3b6-46b1-ae10-6cd177a70c0a" Apr 30 13:06:48.105411 containerd[1503]: time="2025-04-30T13:06:48.105374560Z" level=info msg="CreateContainer within sandbox \"ef7b7ea320dbfa609a9541c35c1802bd33bc1f0372a1c007bc904ce80d9aae3c\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Apr 30 13:06:48.133947 containerd[1503]: time="2025-04-30T13:06:48.133414750Z" level=info msg="CreateContainer within sandbox \"ef7b7ea320dbfa609a9541c35c1802bd33bc1f0372a1c007bc904ce80d9aae3c\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"27ce396c5234537c88d952296980a2fa8a62fc498197fc05d676ddb3fd760579\"" Apr 30 13:06:48.138223 containerd[1503]: time="2025-04-30T13:06:48.137221368Z" level=info msg="StartContainer for \"27ce396c5234537c88d952296980a2fa8a62fc498197fc05d676ddb3fd760579\"" Apr 30 13:06:48.182616 systemd[1]: Started cri-containerd-27ce396c5234537c88d952296980a2fa8a62fc498197fc05d676ddb3fd760579.scope - libcontainer container 27ce396c5234537c88d952296980a2fa8a62fc498197fc05d676ddb3fd760579. Apr 30 13:06:48.230383 containerd[1503]: time="2025-04-30T13:06:48.230292396Z" level=info msg="StartContainer for \"27ce396c5234537c88d952296980a2fa8a62fc498197fc05d676ddb3fd760579\" returns successfully" Apr 30 13:06:48.230651 systemd[1]: cri-containerd-27ce396c5234537c88d952296980a2fa8a62fc498197fc05d676ddb3fd760579.scope: Deactivated successfully. 
Apr 30 13:06:48.259685 containerd[1503]: time="2025-04-30T13:06:48.259609045Z" level=info msg="shim disconnected" id=27ce396c5234537c88d952296980a2fa8a62fc498197fc05d676ddb3fd760579 namespace=k8s.io Apr 30 13:06:48.259946 containerd[1503]: time="2025-04-30T13:06:48.259669328Z" level=warning msg="cleaning up after shim disconnected" id=27ce396c5234537c88d952296980a2fa8a62fc498197fc05d676ddb3fd760579 namespace=k8s.io Apr 30 13:06:48.259946 containerd[1503]: time="2025-04-30T13:06:48.259715730Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 13:06:48.457852 kubelet[2839]: I0430 13:06:48.457786 2839 setters.go:580] "Node became not ready" node="ci-4230-1-1-f-bd31e1b44e" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-04-30T13:06:48Z","lastTransitionTime":"2025-04-30T13:06:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Apr 30 13:06:48.711107 sshd[4869]: Accepted publickey for core from 139.178.89.65 port 53576 ssh2: RSA SHA256:qidWeGQ/AMu2DEHjNgm4r7KCFn+EUn2ITyolPPgrSbA Apr 30 13:06:48.713082 sshd-session[4869]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 13:06:48.719081 systemd-logind[1477]: New session 24 of user core. Apr 30 13:06:48.724299 systemd[1]: Started session-24.scope - Session 24 of User core. Apr 30 13:06:48.755003 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-27ce396c5234537c88d952296980a2fa8a62fc498197fc05d676ddb3fd760579-rootfs.mount: Deactivated successfully. 
Apr 30 13:06:49.112698 containerd[1503]: time="2025-04-30T13:06:49.112577335Z" level=info msg="CreateContainer within sandbox \"ef7b7ea320dbfa609a9541c35c1802bd33bc1f0372a1c007bc904ce80d9aae3c\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Apr 30 13:06:49.131222 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1174836042.mount: Deactivated successfully. Apr 30 13:06:49.135627 containerd[1503]: time="2025-04-30T13:06:49.134630366Z" level=info msg="CreateContainer within sandbox \"ef7b7ea320dbfa609a9541c35c1802bd33bc1f0372a1c007bc904ce80d9aae3c\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"e699257a7ba95b547f5250e8ede072d85f5f9b0a276b3d579cd1dc28e49d67c2\"" Apr 30 13:06:49.137766 containerd[1503]: time="2025-04-30T13:06:49.136364488Z" level=info msg="StartContainer for \"e699257a7ba95b547f5250e8ede072d85f5f9b0a276b3d579cd1dc28e49d67c2\"" Apr 30 13:06:49.173341 systemd[1]: Started cri-containerd-e699257a7ba95b547f5250e8ede072d85f5f9b0a276b3d579cd1dc28e49d67c2.scope - libcontainer container e699257a7ba95b547f5250e8ede072d85f5f9b0a276b3d579cd1dc28e49d67c2. 
Apr 30 13:06:49.207446 containerd[1503]: time="2025-04-30T13:06:49.207297924Z" level=info msg="StartContainer for \"e699257a7ba95b547f5250e8ede072d85f5f9b0a276b3d579cd1dc28e49d67c2\" returns successfully" Apr 30 13:06:49.590071 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce)) Apr 30 13:06:50.065901 kubelet[2839]: E0430 13:06:50.065796 2839 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7db6d8ff4d-wc9gg" podUID="68719ba3-e3b6-46b1-ae10-6cd177a70c0a" Apr 30 13:06:50.139172 kubelet[2839]: I0430 13:06:50.138984 2839 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-rxpzs" podStartSLOduration=5.138922365 podStartE2EDuration="5.138922365s" podCreationTimestamp="2025-04-30 13:06:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 13:06:50.136503012 +0000 UTC m=+349.220956575" watchObservedRunningTime="2025-04-30 13:06:50.138922365 +0000 UTC m=+349.223375928" Apr 30 13:06:52.556400 systemd-networkd[1378]: lxc_health: Link UP Apr 30 13:06:52.579828 systemd-networkd[1378]: lxc_health: Gained carrier Apr 30 13:06:53.976468 systemd-networkd[1378]: lxc_health: Gained IPv6LL Apr 30 13:06:55.834855 systemd[1]: run-containerd-runc-k8s.io-e699257a7ba95b547f5250e8ede072d85f5f9b0a276b3d579cd1dc28e49d67c2-runc.1z4mqM.mount: Deactivated successfully. Apr 30 13:06:57.989368 systemd[1]: run-containerd-runc-k8s.io-e699257a7ba95b547f5250e8ede072d85f5f9b0a276b3d579cd1dc28e49d67c2-runc.sPG3qJ.mount: Deactivated successfully. 
Apr 30 13:07:01.097287 containerd[1503]: time="2025-04-30T13:07:01.097226686Z" level=info msg="StopPodSandbox for \"3b9fa696465c21eca30bd5f8b9639f7a89c202c6b95f02e4df29ecabf86168ad\"" Apr 30 13:07:01.097860 containerd[1503]: time="2025-04-30T13:07:01.097386653Z" level=info msg="TearDown network for sandbox \"3b9fa696465c21eca30bd5f8b9639f7a89c202c6b95f02e4df29ecabf86168ad\" successfully" Apr 30 13:07:01.097860 containerd[1503]: time="2025-04-30T13:07:01.097408294Z" level=info msg="StopPodSandbox for \"3b9fa696465c21eca30bd5f8b9639f7a89c202c6b95f02e4df29ecabf86168ad\" returns successfully" Apr 30 13:07:01.098224 containerd[1503]: time="2025-04-30T13:07:01.098182411Z" level=info msg="RemovePodSandbox for \"3b9fa696465c21eca30bd5f8b9639f7a89c202c6b95f02e4df29ecabf86168ad\"" Apr 30 13:07:01.098311 containerd[1503]: time="2025-04-30T13:07:01.098243414Z" level=info msg="Forcibly stopping sandbox \"3b9fa696465c21eca30bd5f8b9639f7a89c202c6b95f02e4df29ecabf86168ad\"" Apr 30 13:07:01.098453 containerd[1503]: time="2025-04-30T13:07:01.098345218Z" level=info msg="TearDown network for sandbox \"3b9fa696465c21eca30bd5f8b9639f7a89c202c6b95f02e4df29ecabf86168ad\" successfully" Apr 30 13:07:01.102735 containerd[1503]: time="2025-04-30T13:07:01.102686223Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3b9fa696465c21eca30bd5f8b9639f7a89c202c6b95f02e4df29ecabf86168ad\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 30 13:07:01.102838 containerd[1503]: time="2025-04-30T13:07:01.102766507Z" level=info msg="RemovePodSandbox \"3b9fa696465c21eca30bd5f8b9639f7a89c202c6b95f02e4df29ecabf86168ad\" returns successfully" Apr 30 13:07:01.103494 containerd[1503]: time="2025-04-30T13:07:01.103294012Z" level=info msg="StopPodSandbox for \"026ff98986fb906c0a79c8ef81fa42d529c30b9b83fcd61ee1fbe0a28fc3734a\"" Apr 30 13:07:01.103494 containerd[1503]: time="2025-04-30T13:07:01.103433498Z" level=info msg="TearDown network for sandbox \"026ff98986fb906c0a79c8ef81fa42d529c30b9b83fcd61ee1fbe0a28fc3734a\" successfully" Apr 30 13:07:01.103494 containerd[1503]: time="2025-04-30T13:07:01.103447579Z" level=info msg="StopPodSandbox for \"026ff98986fb906c0a79c8ef81fa42d529c30b9b83fcd61ee1fbe0a28fc3734a\" returns successfully" Apr 30 13:07:01.104161 containerd[1503]: time="2025-04-30T13:07:01.103830917Z" level=info msg="RemovePodSandbox for \"026ff98986fb906c0a79c8ef81fa42d529c30b9b83fcd61ee1fbe0a28fc3734a\"" Apr 30 13:07:01.104161 containerd[1503]: time="2025-04-30T13:07:01.103861759Z" level=info msg="Forcibly stopping sandbox \"026ff98986fb906c0a79c8ef81fa42d529c30b9b83fcd61ee1fbe0a28fc3734a\"" Apr 30 13:07:01.104161 containerd[1503]: time="2025-04-30T13:07:01.103931322Z" level=info msg="TearDown network for sandbox \"026ff98986fb906c0a79c8ef81fa42d529c30b9b83fcd61ee1fbe0a28fc3734a\" successfully" Apr 30 13:07:01.108202 containerd[1503]: time="2025-04-30T13:07:01.108096558Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"026ff98986fb906c0a79c8ef81fa42d529c30b9b83fcd61ee1fbe0a28fc3734a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 30 13:07:01.108484 containerd[1503]: time="2025-04-30T13:07:01.108365891Z" level=info msg="RemovePodSandbox \"026ff98986fb906c0a79c8ef81fa42d529c30b9b83fcd61ee1fbe0a28fc3734a\" returns successfully" Apr 30 13:07:02.532588 sshd[4926]: Connection closed by 139.178.89.65 port 53576 Apr 30 13:07:02.533503 sshd-session[4869]: pam_unix(sshd:session): session closed for user core Apr 30 13:07:02.539172 systemd[1]: sshd@23-91.99.82.124:22-139.178.89.65:53576.service: Deactivated successfully. Apr 30 13:07:02.543952 systemd[1]: session-24.scope: Deactivated successfully. Apr 30 13:07:02.545773 systemd-logind[1477]: Session 24 logged out. Waiting for processes to exit. Apr 30 13:07:02.546955 systemd-logind[1477]: Removed session 24. Apr 30 13:07:36.338680 systemd[1]: cri-containerd-eedfaee4904236dec565017e4159889a03e3c7ce41d377d6a21982adc9e769f9.scope: Deactivated successfully. Apr 30 13:07:36.339633 systemd[1]: cri-containerd-eedfaee4904236dec565017e4159889a03e3c7ce41d377d6a21982adc9e769f9.scope: Consumed 7.013s CPU time, 56.9M memory peak. Apr 30 13:07:36.366661 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-eedfaee4904236dec565017e4159889a03e3c7ce41d377d6a21982adc9e769f9-rootfs.mount: Deactivated successfully. 
Apr 30 13:07:36.373144 containerd[1503]: time="2025-04-30T13:07:36.372963086Z" level=info msg="shim disconnected" id=eedfaee4904236dec565017e4159889a03e3c7ce41d377d6a21982adc9e769f9 namespace=k8s.io Apr 30 13:07:36.373144 containerd[1503]: time="2025-04-30T13:07:36.373079924Z" level=warning msg="cleaning up after shim disconnected" id=eedfaee4904236dec565017e4159889a03e3c7ce41d377d6a21982adc9e769f9 namespace=k8s.io Apr 30 13:07:36.373144 containerd[1503]: time="2025-04-30T13:07:36.373098044Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 13:07:36.412624 kubelet[2839]: E0430 13:07:36.412181 2839 controller.go:195] "Failed to update lease" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:34946->10.0.0.2:2379: read: connection timed out" Apr 30 13:07:36.417042 systemd[1]: cri-containerd-580fd33ff302b667169f5110a30d2aba80c05f39e962fd0985531aa39e9779ba.scope: Deactivated successfully. Apr 30 13:07:36.419258 systemd[1]: cri-containerd-580fd33ff302b667169f5110a30d2aba80c05f39e962fd0985531aa39e9779ba.scope: Consumed 2.737s CPU time, 21.9M memory peak. Apr 30 13:07:36.439896 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-580fd33ff302b667169f5110a30d2aba80c05f39e962fd0985531aa39e9779ba-rootfs.mount: Deactivated successfully. 
Apr 30 13:07:36.446453 containerd[1503]: time="2025-04-30T13:07:36.446241205Z" level=info msg="shim disconnected" id=580fd33ff302b667169f5110a30d2aba80c05f39e962fd0985531aa39e9779ba namespace=k8s.io Apr 30 13:07:36.446453 containerd[1503]: time="2025-04-30T13:07:36.446298404Z" level=warning msg="cleaning up after shim disconnected" id=580fd33ff302b667169f5110a30d2aba80c05f39e962fd0985531aa39e9779ba namespace=k8s.io Apr 30 13:07:36.446453 containerd[1503]: time="2025-04-30T13:07:36.446306484Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 13:07:37.243887 kubelet[2839]: I0430 13:07:37.243853 2839 scope.go:117] "RemoveContainer" containerID="580fd33ff302b667169f5110a30d2aba80c05f39e962fd0985531aa39e9779ba" Apr 30 13:07:37.246683 kubelet[2839]: I0430 13:07:37.246192 2839 scope.go:117] "RemoveContainer" containerID="eedfaee4904236dec565017e4159889a03e3c7ce41d377d6a21982adc9e769f9" Apr 30 13:07:37.246810 containerd[1503]: time="2025-04-30T13:07:37.246569291Z" level=info msg="CreateContainer within sandbox \"0bfa2d6d244f8a2311c4359eb4059c891e3b3e7317338436fcaa113ac7ef2527\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}" Apr 30 13:07:37.249477 containerd[1503]: time="2025-04-30T13:07:37.249436337Z" level=info msg="CreateContainer within sandbox \"837049e489cbb14f4b295971d84aaa174ac1321e8965571f08bce08ed9c9d90c\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Apr 30 13:07:37.267925 containerd[1503]: time="2025-04-30T13:07:37.267853524Z" level=info msg="CreateContainer within sandbox \"0bfa2d6d244f8a2311c4359eb4059c891e3b3e7317338436fcaa113ac7ef2527\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"b019fb32e62e56c26ac40aa2d82753a3774b52f098c83812f8dc2f1644118254\"" Apr 30 13:07:37.268615 containerd[1503]: time="2025-04-30T13:07:37.268573315Z" level=info msg="StartContainer for \"b019fb32e62e56c26ac40aa2d82753a3774b52f098c83812f8dc2f1644118254\"" Apr 30 13:07:37.272063 
containerd[1503]: time="2025-04-30T13:07:37.271421682Z" level=info msg="CreateContainer within sandbox \"837049e489cbb14f4b295971d84aaa174ac1321e8965571f08bce08ed9c9d90c\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"ab48c15c54399a08cb78d2a1e95d323088cd92fc14e05ceb5404b44b3d423f1e\"" Apr 30 13:07:37.272063 containerd[1503]: time="2025-04-30T13:07:37.271930196Z" level=info msg="StartContainer for \"ab48c15c54399a08cb78d2a1e95d323088cd92fc14e05ceb5404b44b3d423f1e\"" Apr 30 13:07:37.298311 systemd[1]: Started cri-containerd-b019fb32e62e56c26ac40aa2d82753a3774b52f098c83812f8dc2f1644118254.scope - libcontainer container b019fb32e62e56c26ac40aa2d82753a3774b52f098c83812f8dc2f1644118254. Apr 30 13:07:37.310210 systemd[1]: Started cri-containerd-ab48c15c54399a08cb78d2a1e95d323088cd92fc14e05ceb5404b44b3d423f1e.scope - libcontainer container ab48c15c54399a08cb78d2a1e95d323088cd92fc14e05ceb5404b44b3d423f1e. Apr 30 13:07:37.349731 containerd[1503]: time="2025-04-30T13:07:37.349668335Z" level=info msg="StartContainer for \"b019fb32e62e56c26ac40aa2d82753a3774b52f098c83812f8dc2f1644118254\" returns successfully" Apr 30 13:07:37.365439 containerd[1503]: time="2025-04-30T13:07:37.365310233Z" level=info msg="StartContainer for \"ab48c15c54399a08cb78d2a1e95d323088cd92fc14e05ceb5404b44b3d423f1e\" returns successfully" Apr 30 13:07:39.161610 kubelet[2839]: E0430 13:07:39.161130 2839 event.go:359] "Server rejected event (will not retry!)" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:34786->10.0.0.2:2379: read: connection timed out" event="&Event{ObjectMeta:{kube-apiserver-ci-4230-1-1-f-bd31e1b44e.183b1a873e281a2e kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-ci-4230-1-1-f-bd31e1b44e,UID:f6c9a4de2476976a7786c461fc3d1c1f,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Readiness probe failed: HTTP probe failed with statuscode: 500,Source:EventSource{Component:kubelet,Host:ci-4230-1-1-f-bd31e1b44e,},FirstTimestamp:2025-04-30 13:07:28.701921838 +0000 UTC m=+387.786375401,LastTimestamp:2025-04-30 13:07:28.701921838 +0000 UTC m=+387.786375401,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4230-1-1-f-bd31e1b44e,}"