Apr 30 00:55:33.881594 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Apr 30 00:55:33.881619 kernel: Linux version 6.6.88-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Tue Apr 29 23:08:45 -00 2025
Apr 30 00:55:33.881629 kernel: KASLR enabled
Apr 30 00:55:33.881635 kernel: efi: EFI v2.7 by Ubuntu distribution of EDK II
Apr 30 00:55:33.881640 kernel: efi: SMBIOS 3.0=0x139ed0000 MEMATTR=0x1390c1018 ACPI 2.0=0x136760018 RNG=0x13676e918 MEMRESERVE=0x136b43d18
Apr 30 00:55:33.881646 kernel: random: crng init done
Apr 30 00:55:33.881653 kernel: ACPI: Early table checksum verification disabled
Apr 30 00:55:33.881659 kernel: ACPI: RSDP 0x0000000136760018 000024 (v02 BOCHS )
Apr 30 00:55:33.881665 kernel: ACPI: XSDT 0x000000013676FE98 00006C (v01 BOCHS BXPC 00000001 01000013)
Apr 30 00:55:33.881672 kernel: ACPI: FACP 0x000000013676FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Apr 30 00:55:33.881678 kernel: ACPI: DSDT 0x0000000136767518 001468 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Apr 30 00:55:33.881684 kernel: ACPI: APIC 0x000000013676FC18 000108 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Apr 30 00:55:33.881690 kernel: ACPI: PPTT 0x000000013676FD98 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Apr 30 00:55:33.881695 kernel: ACPI: GTDT 0x000000013676D898 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Apr 30 00:55:33.881703 kernel: ACPI: MCFG 0x000000013676FF98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 30 00:55:33.881711 kernel: ACPI: SPCR 0x000000013676E818 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Apr 30 00:55:33.881718 kernel: ACPI: DBG2 0x000000013676E898 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Apr 30 00:55:33.881724 kernel: ACPI: IORT 0x000000013676E418 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Apr 30 00:55:33.881730 kernel: ACPI: BGRT 0x000000013676E798 000038 (v01 INTEL EDK2 00000002 01000013)
Apr 30 00:55:33.881736 kernel: ACPI: SPCR: console: pl011,mmio32,0x9000000,9600
Apr 30 00:55:33.881743 kernel: NUMA: Failed to initialise from firmware
Apr 30 00:55:33.881749 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x0000000139ffffff]
Apr 30 00:55:33.881756 kernel: NUMA: NODE_DATA [mem 0x13966f800-0x139674fff]
Apr 30 00:55:33.881762 kernel: Zone ranges:
Apr 30 00:55:33.881768 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff]
Apr 30 00:55:33.881775 kernel: DMA32 empty
Apr 30 00:55:33.881782 kernel: Normal [mem 0x0000000100000000-0x0000000139ffffff]
Apr 30 00:55:33.881788 kernel: Movable zone start for each node
Apr 30 00:55:33.881794 kernel: Early memory node ranges
Apr 30 00:55:33.881801 kernel: node 0: [mem 0x0000000040000000-0x000000013676ffff]
Apr 30 00:55:33.881807 kernel: node 0: [mem 0x0000000136770000-0x0000000136b3ffff]
Apr 30 00:55:33.881813 kernel: node 0: [mem 0x0000000136b40000-0x0000000139e1ffff]
Apr 30 00:55:33.881820 kernel: node 0: [mem 0x0000000139e20000-0x0000000139eaffff]
Apr 30 00:55:33.881826 kernel: node 0: [mem 0x0000000139eb0000-0x0000000139ebffff]
Apr 30 00:55:33.881832 kernel: node 0: [mem 0x0000000139ec0000-0x0000000139fdffff]
Apr 30 00:55:33.881838 kernel: node 0: [mem 0x0000000139fe0000-0x0000000139ffffff]
Apr 30 00:55:33.881845 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x0000000139ffffff]
Apr 30 00:55:33.881852 kernel: On node 0, zone Normal: 24576 pages in unavailable ranges
Apr 30 00:55:33.881859 kernel: psci: probing for conduit method from ACPI.
Apr 30 00:55:33.881865 kernel: psci: PSCIv1.1 detected in firmware.
Apr 30 00:55:33.881874 kernel: psci: Using standard PSCI v0.2 function IDs
Apr 30 00:55:33.881881 kernel: psci: Trusted OS migration not required
Apr 30 00:55:33.881887 kernel: psci: SMC Calling Convention v1.1
Apr 30 00:55:33.881896 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Apr 30 00:55:33.881902 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Apr 30 00:55:33.881909 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Apr 30 00:55:33.881916 kernel: pcpu-alloc: [0] 0 [0] 1
Apr 30 00:55:33.881922 kernel: Detected PIPT I-cache on CPU0
Apr 30 00:55:33.881929 kernel: CPU features: detected: GIC system register CPU interface
Apr 30 00:55:33.881936 kernel: CPU features: detected: Hardware dirty bit management
Apr 30 00:55:33.881942 kernel: CPU features: detected: Spectre-v4
Apr 30 00:55:33.881949 kernel: CPU features: detected: Spectre-BHB
Apr 30 00:55:33.881955 kernel: CPU features: kernel page table isolation forced ON by KASLR
Apr 30 00:55:33.881963 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Apr 30 00:55:33.881970 kernel: CPU features: detected: ARM erratum 1418040
Apr 30 00:55:33.881977 kernel: CPU features: detected: SSBS not fully self-synchronizing
Apr 30 00:55:33.881983 kernel: alternatives: applying boot alternatives
Apr 30 00:55:33.881991 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=2f2ec97241771b99b21726307071be4f8c5924f9157dc58cd38c4fcfbe71412a
Apr 30 00:55:33.881998 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Apr 30 00:55:33.882005 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Apr 30 00:55:33.882012 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Apr 30 00:55:33.882018 kernel: Fallback order for Node 0: 0
Apr 30 00:55:33.882025 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1008000
Apr 30 00:55:33.882032 kernel: Policy zone: Normal
Apr 30 00:55:33.882040 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Apr 30 00:55:33.882046 kernel: software IO TLB: area num 2.
Apr 30 00:55:33.882053 kernel: software IO TLB: mapped [mem 0x00000000fbfff000-0x00000000fffff000] (64MB)
Apr 30 00:55:33.882060 kernel: Memory: 3882872K/4096000K available (10240K kernel code, 2186K rwdata, 8104K rodata, 39424K init, 897K bss, 213128K reserved, 0K cma-reserved)
Apr 30 00:55:33.882067 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Apr 30 00:55:33.882074 kernel: rcu: Preemptible hierarchical RCU implementation.
Apr 30 00:55:33.882081 kernel: rcu: RCU event tracing is enabled.
Apr 30 00:55:33.882088 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Apr 30 00:55:33.882095 kernel: Trampoline variant of Tasks RCU enabled.
Apr 30 00:55:33.882101 kernel: Tracing variant of Tasks RCU enabled.
Apr 30 00:55:33.882108 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Apr 30 00:55:33.882116 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Apr 30 00:55:33.882123 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Apr 30 00:55:33.882130 kernel: GICv3: 256 SPIs implemented
Apr 30 00:55:33.882136 kernel: GICv3: 0 Extended SPIs implemented
Apr 30 00:55:33.882143 kernel: Root IRQ handler: gic_handle_irq
Apr 30 00:55:33.882150 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Apr 30 00:55:33.882156 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Apr 30 00:55:33.882163 kernel: ITS [mem 0x08080000-0x0809ffff]
Apr 30 00:55:33.882170 kernel: ITS@0x0000000008080000: allocated 8192 Devices @1000c0000 (indirect, esz 8, psz 64K, shr 1)
Apr 30 00:55:33.882177 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @1000d0000 (flat, esz 8, psz 64K, shr 1)
Apr 30 00:55:33.882184 kernel: GICv3: using LPI property table @0x00000001000e0000
Apr 30 00:55:33.882190 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000001000f0000
Apr 30 00:55:33.882199 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Apr 30 00:55:33.882245 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Apr 30 00:55:33.882254 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Apr 30 00:55:33.882283 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Apr 30 00:55:33.882290 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Apr 30 00:55:33.882296 kernel: Console: colour dummy device 80x25
Apr 30 00:55:33.882304 kernel: ACPI: Core revision 20230628
Apr 30 00:55:33.882311 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Apr 30 00:55:33.882318 kernel: pid_max: default: 32768 minimum: 301
Apr 30 00:55:33.882327 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Apr 30 00:55:33.882340 kernel: landlock: Up and running.
Apr 30 00:55:33.882347 kernel: SELinux: Initializing.
Apr 30 00:55:33.882354 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 30 00:55:33.882362 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 30 00:55:33.882368 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 30 00:55:33.882375 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 30 00:55:33.882382 kernel: rcu: Hierarchical SRCU implementation.
Apr 30 00:55:33.882389 kernel: rcu: Max phase no-delay instances is 400.
Apr 30 00:55:33.882396 kernel: Platform MSI: ITS@0x8080000 domain created
Apr 30 00:55:33.882404 kernel: PCI/MSI: ITS@0x8080000 domain created
Apr 30 00:55:33.882411 kernel: Remapping and enabling EFI services.
Apr 30 00:55:33.882418 kernel: smp: Bringing up secondary CPUs ...
Apr 30 00:55:33.882425 kernel: Detected PIPT I-cache on CPU1
Apr 30 00:55:33.882432 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Apr 30 00:55:33.882439 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000100100000
Apr 30 00:55:33.882446 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Apr 30 00:55:33.882453 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Apr 30 00:55:33.882459 kernel: smp: Brought up 1 node, 2 CPUs
Apr 30 00:55:33.882466 kernel: SMP: Total of 2 processors activated.
Apr 30 00:55:33.882475 kernel: CPU features: detected: 32-bit EL0 Support
Apr 30 00:55:33.882482 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Apr 30 00:55:33.882494 kernel: CPU features: detected: Common not Private translations
Apr 30 00:55:33.882504 kernel: CPU features: detected: CRC32 instructions
Apr 30 00:55:33.882511 kernel: CPU features: detected: Enhanced Virtualization Traps
Apr 30 00:55:33.882518 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Apr 30 00:55:33.882525 kernel: CPU features: detected: LSE atomic instructions
Apr 30 00:55:33.882533 kernel: CPU features: detected: Privileged Access Never
Apr 30 00:55:33.882540 kernel: CPU features: detected: RAS Extension Support
Apr 30 00:55:33.882549 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Apr 30 00:55:33.882556 kernel: CPU: All CPU(s) started at EL1
Apr 30 00:55:33.882563 kernel: alternatives: applying system-wide alternatives
Apr 30 00:55:33.882570 kernel: devtmpfs: initialized
Apr 30 00:55:33.882578 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Apr 30 00:55:33.882585 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Apr 30 00:55:33.882592 kernel: pinctrl core: initialized pinctrl subsystem
Apr 30 00:55:33.882601 kernel: SMBIOS 3.0.0 present.
Apr 30 00:55:33.882609 kernel: DMI: Hetzner vServer/KVM Virtual Machine, BIOS 20171111 11/11/2017
Apr 30 00:55:33.882616 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Apr 30 00:55:33.882623 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Apr 30 00:55:33.882630 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Apr 30 00:55:33.882638 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Apr 30 00:55:33.882645 kernel: audit: initializing netlink subsys (disabled)
Apr 30 00:55:33.882652 kernel: audit: type=2000 audit(0.015:1): state=initialized audit_enabled=0 res=1
Apr 30 00:55:33.882660 kernel: thermal_sys: Registered thermal governor 'step_wise'
Apr 30 00:55:33.882668 kernel: cpuidle: using governor menu
Apr 30 00:55:33.882675 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Apr 30 00:55:33.882683 kernel: ASID allocator initialised with 32768 entries
Apr 30 00:55:33.882690 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Apr 30 00:55:33.882697 kernel: Serial: AMBA PL011 UART driver
Apr 30 00:55:33.882704 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Apr 30 00:55:33.882712 kernel: Modules: 0 pages in range for non-PLT usage
Apr 30 00:55:33.882719 kernel: Modules: 509024 pages in range for PLT usage
Apr 30 00:55:33.882726 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Apr 30 00:55:33.882735 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Apr 30 00:55:33.882743 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Apr 30 00:55:33.882750 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Apr 30 00:55:33.882757 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Apr 30 00:55:33.882764 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Apr 30 00:55:33.882772 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Apr 30 00:55:33.882779 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Apr 30 00:55:33.882786 kernel: ACPI: Added _OSI(Module Device)
Apr 30 00:55:33.882795 kernel: ACPI: Added _OSI(Processor Device)
Apr 30 00:55:33.882805 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Apr 30 00:55:33.882814 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Apr 30 00:55:33.882821 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Apr 30 00:55:33.882829 kernel: ACPI: Interpreter enabled
Apr 30 00:55:33.882836 kernel: ACPI: Using GIC for interrupt routing
Apr 30 00:55:33.882843 kernel: ACPI: MCFG table detected, 1 entries
Apr 30 00:55:33.882850 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Apr 30 00:55:33.882857 kernel: printk: console [ttyAMA0] enabled
Apr 30 00:55:33.882865 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Apr 30 00:55:33.883021 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Apr 30 00:55:33.883098 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Apr 30 00:55:33.883163 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Apr 30 00:55:33.883247 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Apr 30 00:55:33.883351 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Apr 30 00:55:33.883397 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Apr 30 00:55:33.883405 kernel: PCI host bridge to bus 0000:00
Apr 30 00:55:33.883492 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Apr 30 00:55:33.883553 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Apr 30 00:55:33.883613 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Apr 30 00:55:33.883671 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Apr 30 00:55:33.883755 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Apr 30 00:55:33.883833 kernel: pci 0000:00:01.0: [1af4:1050] type 00 class 0x038000
Apr 30 00:55:33.883904 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x11289000-0x11289fff]
Apr 30 00:55:33.883970 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000600000-0x8000603fff 64bit pref]
Apr 30 00:55:33.884041 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400
Apr 30 00:55:33.884106 kernel: pci 0000:00:02.0: reg 0x10: [mem 0x11288000-0x11288fff]
Apr 30 00:55:33.884178 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400
Apr 30 00:55:33.888366 kernel: pci 0000:00:02.1: reg 0x10: [mem 0x11287000-0x11287fff]
Apr 30 00:55:33.888490 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400
Apr 30 00:55:33.888565 kernel: pci 0000:00:02.2: reg 0x10: [mem 0x11286000-0x11286fff]
Apr 30 00:55:33.888640 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400
Apr 30 00:55:33.888707 kernel: pci 0000:00:02.3: reg 0x10: [mem 0x11285000-0x11285fff]
Apr 30 00:55:33.888779 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400
Apr 30 00:55:33.888846 kernel: pci 0000:00:02.4: reg 0x10: [mem 0x11284000-0x11284fff]
Apr 30 00:55:33.888942 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400
Apr 30 00:55:33.889024 kernel: pci 0000:00:02.5: reg 0x10: [mem 0x11283000-0x11283fff]
Apr 30 00:55:33.889099 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400
Apr 30 00:55:33.889165 kernel: pci 0000:00:02.6: reg 0x10: [mem 0x11282000-0x11282fff]
Apr 30 00:55:33.890295 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400
Apr 30 00:55:33.890445 kernel: pci 0000:00:02.7: reg 0x10: [mem 0x11281000-0x11281fff]
Apr 30 00:55:33.890524 kernel: pci 0000:00:03.0: [1b36:000c] type 01 class 0x060400
Apr 30 00:55:33.890601 kernel: pci 0000:00:03.0: reg 0x10: [mem 0x11280000-0x11280fff]
Apr 30 00:55:33.890678 kernel: pci 0000:00:04.0: [1b36:0002] type 00 class 0x070002
Apr 30 00:55:33.890744 kernel: pci 0000:00:04.0: reg 0x10: [io 0x0000-0x0007]
Apr 30 00:55:33.890823 kernel: pci 0000:01:00.0: [1af4:1041] type 00 class 0x020000
Apr 30 00:55:33.890892 kernel: pci 0000:01:00.0: reg 0x14: [mem 0x11000000-0x11000fff]
Apr 30 00:55:33.890961 kernel: pci 0000:01:00.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Apr 30 00:55:33.891032 kernel: pci 0000:01:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref]
Apr 30 00:55:33.891106 kernel: pci 0000:02:00.0: [1b36:000d] type 00 class 0x0c0330
Apr 30 00:55:33.891175 kernel: pci 0000:02:00.0: reg 0x10: [mem 0x10e00000-0x10e03fff 64bit]
Apr 30 00:55:33.892931 kernel: pci 0000:03:00.0: [1af4:1043] type 00 class 0x078000
Apr 30 00:55:33.893050 kernel: pci 0000:03:00.0: reg 0x14: [mem 0x10c00000-0x10c00fff]
Apr 30 00:55:33.893123 kernel: pci 0000:03:00.0: reg 0x20: [mem 0x8000100000-0x8000103fff 64bit pref]
Apr 30 00:55:33.893222 kernel: pci 0000:04:00.0: [1af4:1045] type 00 class 0x00ff00
Apr 30 00:55:33.895400 kernel: pci 0000:04:00.0: reg 0x20: [mem 0x8000200000-0x8000203fff 64bit pref]
Apr 30 00:55:33.895500 kernel: pci 0000:05:00.0: [1af4:1044] type 00 class 0x00ff00
Apr 30 00:55:33.895571 kernel: pci 0000:05:00.0: reg 0x14: [mem 0x10800000-0x10800fff]
Apr 30 00:55:33.895640 kernel: pci 0000:05:00.0: reg 0x20: [mem 0x8000300000-0x8000303fff 64bit pref]
Apr 30 00:55:33.895720 kernel: pci 0000:06:00.0: [1af4:1048] type 00 class 0x010000
Apr 30 00:55:33.895789 kernel: pci 0000:06:00.0: reg 0x14: [mem 0x10600000-0x10600fff]
Apr 30 00:55:33.895862 kernel: pci 0000:06:00.0: reg 0x20: [mem 0x8000400000-0x8000403fff 64bit pref]
Apr 30 00:55:33.895936 kernel: pci 0000:07:00.0: [1af4:1041] type 00 class 0x020000
Apr 30 00:55:33.896003 kernel: pci 0000:07:00.0: reg 0x14: [mem 0x10400000-0x10400fff]
Apr 30 00:55:33.896069 kernel: pci 0000:07:00.0: reg 0x20: [mem 0x8000500000-0x8000503fff 64bit pref]
Apr 30 00:55:33.896137 kernel: pci 0000:07:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref]
Apr 30 00:55:33.896224 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000
Apr 30 00:55:33.896312 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 01] add_size 100000 add_align 100000
Apr 30 00:55:33.896385 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff] to [bus 01] add_size 100000 add_align 100000
Apr 30 00:55:33.896454 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000
Apr 30 00:55:33.896520 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000
Apr 30 00:55:33.896586 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x001fffff] to [bus 02] add_size 100000 add_align 100000
Apr 30 00:55:33.896655 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000
Apr 30 00:55:33.896721 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 03] add_size 100000 add_align 100000
Apr 30 00:55:33.896785 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff] to [bus 03] add_size 100000 add_align 100000
Apr 30 00:55:33.896858 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000
Apr 30 00:55:33.896923 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 04] add_size 100000 add_align 100000
Apr 30 00:55:33.896988 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000
Apr 30 00:55:33.897058 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000
Apr 30 00:55:33.897124 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 05] add_size 100000 add_align 100000
Apr 30 00:55:33.897188 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x001fffff] to [bus 05] add_size 100000 add_align 100000
Apr 30 00:55:33.897290 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000
Apr 30 00:55:33.897367 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 06] add_size 100000 add_align 100000
Apr 30 00:55:33.897433 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff] to [bus 06] add_size 100000 add_align 100000
Apr 30 00:55:33.897502 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
Apr 30 00:55:33.897568 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 07] add_size 100000 add_align 100000
Apr 30 00:55:33.897633 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff] to [bus 07] add_size 100000 add_align 100000
Apr 30 00:55:33.897700 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
Apr 30 00:55:33.897765 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 08] add_size 200000 add_align 100000
Apr 30 00:55:33.897829 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff] to [bus 08] add_size 200000 add_align 100000
Apr 30 00:55:33.897901 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
Apr 30 00:55:33.897965 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 09] add_size 200000 add_align 100000
Apr 30 00:55:33.898031 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 09] add_size 200000 add_align 100000
Apr 30 00:55:33.898097 kernel: pci 0000:00:02.0: BAR 14: assigned [mem 0x10000000-0x101fffff]
Apr 30 00:55:33.898163 kernel: pci 0000:00:02.0: BAR 15: assigned [mem 0x8000000000-0x80001fffff 64bit pref]
Apr 30 00:55:33.898291 kernel: pci 0000:00:02.1: BAR 14: assigned [mem 0x10200000-0x103fffff]
Apr 30 00:55:33.898367 kernel: pci 0000:00:02.1: BAR 15: assigned [mem 0x8000200000-0x80003fffff 64bit pref]
Apr 30 00:55:33.898437 kernel: pci 0000:00:02.2: BAR 14: assigned [mem 0x10400000-0x105fffff]
Apr 30 00:55:33.898502 kernel: pci 0000:00:02.2: BAR 15: assigned [mem 0x8000400000-0x80005fffff 64bit pref]
Apr 30 00:55:33.898569 kernel: pci 0000:00:02.3: BAR 14: assigned [mem 0x10600000-0x107fffff]
Apr 30 00:55:33.898634 kernel: pci 0000:00:02.3: BAR 15: assigned [mem 0x8000600000-0x80007fffff 64bit pref]
Apr 30 00:55:33.898701 kernel: pci 0000:00:02.4: BAR 14: assigned [mem 0x10800000-0x109fffff]
Apr 30 00:55:33.898768 kernel: pci 0000:00:02.4: BAR 15: assigned [mem 0x8000800000-0x80009fffff 64bit pref]
Apr 30 00:55:33.898844 kernel: pci 0000:00:02.5: BAR 14: assigned [mem 0x10a00000-0x10bfffff]
Apr 30 00:55:33.898933 kernel: pci 0000:00:02.5: BAR 15: assigned [mem 0x8000a00000-0x8000bfffff 64bit pref]
Apr 30 00:55:33.899006 kernel: pci 0000:00:02.6: BAR 14: assigned [mem 0x10c00000-0x10dfffff]
Apr 30 00:55:33.899082 kernel: pci 0000:00:02.6: BAR 15: assigned [mem 0x8000c00000-0x8000dfffff 64bit pref]
Apr 30 00:55:33.899148 kernel: pci 0000:00:02.7: BAR 14: assigned [mem 0x10e00000-0x10ffffff]
Apr 30 00:55:33.899227 kernel: pci 0000:00:02.7: BAR 15: assigned [mem 0x8000e00000-0x8000ffffff 64bit pref]
Apr 30 00:55:33.899829 kernel: pci 0000:00:03.0: BAR 14: assigned [mem 0x11000000-0x111fffff]
Apr 30 00:55:33.899915 kernel: pci 0000:00:03.0: BAR 15: assigned [mem 0x8001000000-0x80011fffff 64bit pref]
Apr 30 00:55:33.899986 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8001200000-0x8001203fff 64bit pref]
Apr 30 00:55:33.900053 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x11200000-0x11200fff]
Apr 30 00:55:33.900121 kernel: pci 0000:00:02.0: BAR 0: assigned [mem 0x11201000-0x11201fff]
Apr 30 00:55:33.900185 kernel: pci 0000:00:02.0: BAR 13: assigned [io 0x1000-0x1fff]
Apr 30 00:55:33.900457 kernel: pci 0000:00:02.1: BAR 0: assigned [mem 0x11202000-0x11202fff]
Apr 30 00:55:33.900533 kernel: pci 0000:00:02.1: BAR 13: assigned [io 0x2000-0x2fff]
Apr 30 00:55:33.900600 kernel: pci 0000:00:02.2: BAR 0: assigned [mem 0x11203000-0x11203fff]
Apr 30 00:55:33.900669 kernel: pci 0000:00:02.2: BAR 13: assigned [io 0x3000-0x3fff]
Apr 30 00:55:33.900735 kernel: pci 0000:00:02.3: BAR 0: assigned [mem 0x11204000-0x11204fff]
Apr 30 00:55:33.900799 kernel: pci 0000:00:02.3: BAR 13: assigned [io 0x4000-0x4fff]
Apr 30 00:55:33.900865 kernel: pci 0000:00:02.4: BAR 0: assigned [mem 0x11205000-0x11205fff]
Apr 30 00:55:33.900930 kernel: pci 0000:00:02.4: BAR 13: assigned [io 0x5000-0x5fff]
Apr 30 00:55:33.900996 kernel: pci 0000:00:02.5: BAR 0: assigned [mem 0x11206000-0x11206fff]
Apr 30 00:55:33.901061 kernel: pci 0000:00:02.5: BAR 13: assigned [io 0x6000-0x6fff]
Apr 30 00:55:33.901134 kernel: pci 0000:00:02.6: BAR 0: assigned [mem 0x11207000-0x11207fff]
Apr 30 00:55:33.901201 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x7000-0x7fff]
Apr 30 00:55:33.901365 kernel: pci 0000:00:02.7: BAR 0: assigned [mem 0x11208000-0x11208fff]
Apr 30 00:55:33.901435 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x8000-0x8fff]
Apr 30 00:55:33.901501 kernel: pci 0000:00:03.0: BAR 0: assigned [mem 0x11209000-0x11209fff]
Apr 30 00:55:33.901564 kernel: pci 0000:00:03.0: BAR 13: assigned [io 0x9000-0x9fff]
Apr 30 00:55:33.901632 kernel: pci 0000:00:04.0: BAR 0: assigned [io 0xa000-0xa007]
Apr 30 00:55:33.901705 kernel: pci 0000:01:00.0: BAR 6: assigned [mem 0x10000000-0x1007ffff pref]
Apr 30 00:55:33.901772 kernel: pci 0000:01:00.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Apr 30 00:55:33.901844 kernel: pci 0000:01:00.0: BAR 1: assigned [mem 0x10080000-0x10080fff]
Apr 30 00:55:33.901909 kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
Apr 30 00:55:33.901976 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff]
Apr 30 00:55:33.902039 kernel: pci 0000:00:02.0: bridge window [mem 0x10000000-0x101fffff]
Apr 30 00:55:33.902101 kernel: pci 0000:00:02.0: bridge window [mem 0x8000000000-0x80001fffff 64bit pref]
Apr 30 00:55:33.902172 kernel: pci 0000:02:00.0: BAR 0: assigned [mem 0x10200000-0x10203fff 64bit]
Apr 30 00:55:33.902266 kernel: pci 0000:00:02.1: PCI bridge to [bus 02]
Apr 30 00:55:33.902354 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff]
Apr 30 00:55:33.902419 kernel: pci 0000:00:02.1: bridge window [mem 0x10200000-0x103fffff]
Apr 30 00:55:33.902483 kernel: pci 0000:00:02.1: bridge window [mem 0x8000200000-0x80003fffff 64bit pref]
Apr 30 00:55:33.902555 kernel: pci 0000:03:00.0: BAR 4: assigned [mem 0x8000400000-0x8000403fff 64bit pref]
Apr 30 00:55:33.902623 kernel: pci 0000:03:00.0: BAR 1: assigned [mem 0x10400000-0x10400fff]
Apr 30 00:55:33.902692 kernel: pci 0000:00:02.2: PCI bridge to [bus 03]
Apr 30 00:55:33.902756 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff]
Apr 30 00:55:33.902820 kernel: pci 0000:00:02.2: bridge window [mem 0x10400000-0x105fffff]
Apr 30 00:55:33.902883 kernel: pci 0000:00:02.2: bridge window [mem 0x8000400000-0x80005fffff 64bit pref]
Apr 30 00:55:33.902953 kernel: pci 0000:04:00.0: BAR 4: assigned [mem 0x8000600000-0x8000603fff 64bit pref]
Apr 30 00:55:33.903018 kernel: pci 0000:00:02.3: PCI bridge to [bus 04]
Apr 30 00:55:33.903082 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff]
Apr 30 00:55:33.903164 kernel: pci 0000:00:02.3: bridge window [mem 0x10600000-0x107fffff]
Apr 30 00:55:33.903331 kernel: pci 0000:00:02.3: bridge window [mem 0x8000600000-0x80007fffff 64bit pref]
Apr 30 00:55:33.903414 kernel: pci 0000:05:00.0: BAR 4: assigned [mem 0x8000800000-0x8000803fff 64bit pref]
Apr 30 00:55:33.903481 kernel: pci 0000:05:00.0: BAR 1: assigned [mem 0x10800000-0x10800fff]
Apr 30 00:55:33.903549 kernel: pci 0000:00:02.4: PCI bridge to [bus 05]
Apr 30 00:55:33.903624 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff]
Apr 30 00:55:33.903692 kernel: pci 0000:00:02.4: bridge window [mem 0x10800000-0x109fffff]
Apr 30 00:55:33.903757 kernel: pci 0000:00:02.4: bridge window [mem 0x8000800000-0x80009fffff 64bit pref]
Apr 30 00:55:33.903830 kernel: pci 0000:06:00.0: BAR 4: assigned [mem 0x8000a00000-0x8000a03fff 64bit pref]
Apr 30 00:55:33.903903 kernel: pci 0000:06:00.0: BAR 1: assigned [mem 0x10a00000-0x10a00fff]
Apr 30 00:55:33.903969 kernel: pci 0000:00:02.5: PCI bridge to [bus 06]
Apr 30 00:55:33.904033 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff]
Apr 30 00:55:33.904098 kernel: pci 0000:00:02.5: bridge window [mem 0x10a00000-0x10bfffff]
Apr 30 00:55:33.904162 kernel: pci 0000:00:02.5: bridge window [mem 0x8000a00000-0x8000bfffff 64bit pref]
Apr 30 00:55:33.904249 kernel: pci 0000:07:00.0: BAR 6: assigned [mem 0x10c00000-0x10c7ffff pref]
Apr 30 00:55:33.904347 kernel: pci 0000:07:00.0: BAR 4: assigned [mem 0x8000c00000-0x8000c03fff 64bit pref]
Apr 30 00:55:33.904418 kernel: pci 0000:07:00.0: BAR 1: assigned [mem 0x10c80000-0x10c80fff]
Apr 30 00:55:33.904490 kernel: pci 0000:00:02.6: PCI bridge to [bus 07]
Apr 30 00:55:33.904557 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff]
Apr 30 00:55:33.904625 kernel: pci 0000:00:02.6: bridge window [mem 0x10c00000-0x10dfffff]
Apr 30 00:55:33.904691 kernel: pci 0000:00:02.6: bridge window [mem 0x8000c00000-0x8000dfffff 64bit pref]
Apr 30 00:55:33.904759 kernel: pci 0000:00:02.7: PCI bridge to [bus 08]
Apr 30 00:55:33.904824 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff]
Apr 30 00:55:33.904889 kernel: pci 0000:00:02.7: bridge window [mem 0x10e00000-0x10ffffff]
Apr 30 00:55:33.904954 kernel: pci 0000:00:02.7: bridge window [mem 0x8000e00000-0x8000ffffff 64bit pref]
Apr 30 00:55:33.905025 kernel: pci 0000:00:03.0: PCI bridge to [bus 09]
Apr 30 00:55:33.905090 kernel: pci 0000:00:03.0: bridge window [io 0x9000-0x9fff]
Apr 30 00:55:33.905156 kernel: pci 0000:00:03.0: bridge window [mem 0x11000000-0x111fffff]
Apr 30 00:55:33.905232 kernel: pci 0000:00:03.0: bridge window [mem 0x8001000000-0x80011fffff 64bit pref]
Apr 30 00:55:33.905375 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Apr 30 00:55:33.905437 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Apr 30 00:55:33.905494 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Apr 30 00:55:33.905562 kernel: pci_bus 0000:01: resource 0 [io 0x1000-0x1fff]
Apr 30 00:55:33.905628 kernel: pci_bus 0000:01: resource 1 [mem 0x10000000-0x101fffff]
Apr 30 00:55:33.905686 kernel: pci_bus 0000:01: resource 2 [mem 0x8000000000-0x80001fffff 64bit pref]
Apr 30 00:55:33.905751 kernel: pci_bus 0000:02: resource 0 [io 0x2000-0x2fff]
Apr 30 00:55:33.905810 kernel: pci_bus 0000:02: resource 1 [mem 0x10200000-0x103fffff]
Apr 30 00:55:33.905868 kernel: pci_bus 0000:02: resource 2 [mem 0x8000200000-0x80003fffff 64bit pref]
Apr 30 00:55:33.905933 kernel: pci_bus 0000:03: resource 0 [io 0x3000-0x3fff]
Apr 30 00:55:33.905995 kernel: pci_bus 0000:03: resource 1 [mem 0x10400000-0x105fffff]
Apr 30 00:55:33.906055 kernel: pci_bus 0000:03: resource 2 [mem 0x8000400000-0x80005fffff 64bit pref]
Apr 30 00:55:33.906131 kernel: pci_bus 0000:04: resource 0 [io 0x4000-0x4fff]
Apr 30 00:55:33.906191 kernel: pci_bus 0000:04: resource 1 [mem 0x10600000-0x107fffff]
Apr 30 00:55:33.906319 kernel: pci_bus 0000:04: resource 2 [mem 0x8000600000-0x80007fffff 64bit pref]
Apr 30 00:55:33.906393 kernel: pci_bus 0000:05: resource 0 [io 0x5000-0x5fff]
Apr 30 00:55:33.906459 kernel: pci_bus 0000:05: resource 1 [mem 0x10800000-0x109fffff]
Apr 30 00:55:33.906556 kernel: pci_bus 0000:05: resource 2 [mem 0x8000800000-0x80009fffff 64bit pref]
Apr 30 00:55:33.906629 kernel: pci_bus 0000:06: resource 0 [io 0x6000-0x6fff]
Apr 30 00:55:33.906691 kernel: pci_bus 0000:06: resource 1 [mem 0x10a00000-0x10bfffff]
Apr 30 00:55:33.906756 kernel: pci_bus 0000:06: resource 2 [mem 0x8000a00000-0x8000bfffff 64bit pref]
Apr 30 00:55:33.906825 kernel: pci_bus 0000:07: resource 0 [io 0x7000-0x7fff]
Apr 30 00:55:33.906885 kernel: pci_bus 0000:07: resource 1 [mem 0x10c00000-0x10dfffff]
Apr 30 00:55:33.906944 kernel: pci_bus 0000:07: resource 2 [mem 0x8000c00000-0x8000dfffff 64bit pref]
Apr 30 00:55:33.907016 kernel: pci_bus 0000:08: resource 0 [io 0x8000-0x8fff]
Apr 30 00:55:33.907078 kernel: pci_bus 0000:08: resource 1 [mem 0x10e00000-0x10ffffff]
Apr 30 00:55:33.907140 kernel: pci_bus 0000:08: resource 2 [mem 0x8000e00000-0x8000ffffff 64bit pref]
Apr 30 00:55:33.907221 kernel: pci_bus 0000:09: resource 0 [io 0x9000-0x9fff]
Apr 30 00:55:33.907368 kernel: pci_bus 0000:09: resource 1 [mem 0x11000000-0x111fffff]
Apr 30 00:55:33.907432 kernel: pci_bus 0000:09: resource 2 [mem 0x8001000000-0x80011fffff 64bit pref]
Apr 30 00:55:33.907443 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Apr 30 00:55:33.907450 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Apr 30 00:55:33.907458 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Apr 30 00:55:33.907466 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Apr 30 00:55:33.907474 kernel: iommu: Default domain type: Translated
Apr 30 00:55:33.907481 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Apr 30 00:55:33.907493 kernel: efivars: Registered efivars operations
Apr 30 00:55:33.907501 kernel: vgaarb: loaded
Apr 30 00:55:33.907509 kernel: clocksource: Switched to clocksource arch_sys_counter
Apr 30 00:55:33.907516 kernel: VFS: Disk quotas dquot_6.6.0
Apr 30 00:55:33.907524 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Apr 30 00:55:33.907532 kernel: pnp: PnP ACPI init
Apr 30 00:55:33.907607 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Apr 30 00:55:33.907619 kernel: pnp: PnP ACPI: found 1 devices
Apr 30 00:55:33.907642 kernel: NET: Registered PF_INET protocol family
Apr 30 00:55:33.907650 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Apr 30 00:55:33.907658 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Apr 30 00:55:33.907666 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Apr 30 00:55:33.907674 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Apr 30 00:55:33.907682 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Apr 30 00:55:33.907690 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Apr 30 00:55:33.907698 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Apr 30 00:55:33.907706 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Apr 30 00:55:33.907715 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Apr 30 00:55:33.907790 kernel: pci 0000:02:00.0: enabling device (0000 -> 0002)
Apr 30 00:55:33.907801 kernel: PCI: CLS 0 bytes, default 64
Apr 30 00:55:33.907809 kernel: kvm [1]: HYP mode not available
Apr 30 00:55:33.907817 kernel: Initialise system trusted keyrings
Apr 30 00:55:33.907824 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Apr 30 00:55:33.907832 kernel: Key type asymmetric registered
Apr 30 00:55:33.907839 kernel: Asymmetric key parser 'x509' registered
Apr 30 00:55:33.907847 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Apr 30 00:55:33.907857 kernel: io scheduler mq-deadline registered
Apr 30 00:55:33.907866 kernel: io scheduler kyber registered
Apr 30 00:55:33.907874 kernel: io scheduler bfq registered
Apr 30 00:55:33.907882 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37
Apr 30 00:55:33.907949 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 50
Apr 30 00:55:33.908015 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 50
Apr 30 00:55:33.908079 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Apr 30 00:55:33.908147 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 51
Apr 30 00:55:33.908226 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 51
Apr 30 00:55:33.908364 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis-
LLActRep+ Apr 30 00:55:33.908437 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 52 Apr 30 00:55:33.908502 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 52 Apr 30 00:55:33.908566 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Apr 30 00:55:33.908636 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 53 Apr 30 00:55:33.908714 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 53 Apr 30 00:55:33.908793 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Apr 30 00:55:33.908861 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 54 Apr 30 00:55:33.908925 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 54 Apr 30 00:55:33.908990 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Apr 30 00:55:33.909060 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 55 Apr 30 00:55:33.909126 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 55 Apr 30 00:55:33.909191 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Apr 30 00:55:33.909408 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 56 Apr 30 00:55:33.909485 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 56 Apr 30 00:55:33.909549 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Apr 30 00:55:33.909621 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 57 Apr 30 00:55:33.909686 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 57 Apr 30 00:55:33.909750 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Apr 30 
00:55:33.909761 kernel: ACPI: \_SB_.PCI0.GSI3: Enabled at IRQ 38 Apr 30 00:55:33.909825 kernel: pcieport 0000:00:03.0: PME: Signaling with IRQ 58 Apr 30 00:55:33.909890 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 58 Apr 30 00:55:33.909954 kernel: pcieport 0000:00:03.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Apr 30 00:55:33.909967 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Apr 30 00:55:33.909975 kernel: ACPI: button: Power Button [PWRB] Apr 30 00:55:33.909983 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Apr 30 00:55:33.910053 kernel: virtio-pci 0000:04:00.0: enabling device (0000 -> 0002) Apr 30 00:55:33.910132 kernel: virtio-pci 0000:07:00.0: enabling device (0000 -> 0002) Apr 30 00:55:33.910145 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Apr 30 00:55:33.910154 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35 Apr 30 00:55:33.910239 kernel: serial 0000:00:04.0: enabling device (0000 -> 0001) Apr 30 00:55:33.910254 kernel: 0000:00:04.0: ttyS0 at I/O 0xa000 (irq = 45, base_baud = 115200) is a 16550A Apr 30 00:55:33.910289 kernel: thunder_xcv, ver 1.0 Apr 30 00:55:33.910297 kernel: thunder_bgx, ver 1.0 Apr 30 00:55:33.910305 kernel: nicpf, ver 1.0 Apr 30 00:55:33.910312 kernel: nicvf, ver 1.0 Apr 30 00:55:33.910397 kernel: rtc-efi rtc-efi.0: registered as rtc0 Apr 30 00:55:33.910461 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-04-30T00:55:33 UTC (1745974533) Apr 30 00:55:33.910472 kernel: hid: raw HID events driver (C) Jiri Kosina Apr 30 00:55:33.910482 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available Apr 30 00:55:33.910490 kernel: watchdog: Delayed init of the lockup detector failed: -19 Apr 30 00:55:33.910498 kernel: watchdog: Hard watchdog permanently disabled Apr 30 00:55:33.910505 kernel: NET: Registered PF_INET6 protocol family Apr 30 00:55:33.910513 kernel: Segment 
Routing with IPv6 Apr 30 00:55:33.910521 kernel: In-situ OAM (IOAM) with IPv6 Apr 30 00:55:33.910528 kernel: NET: Registered PF_PACKET protocol family Apr 30 00:55:33.910537 kernel: Key type dns_resolver registered Apr 30 00:55:33.910544 kernel: registered taskstats version 1 Apr 30 00:55:33.910554 kernel: Loading compiled-in X.509 certificates Apr 30 00:55:33.910561 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.88-flatcar: e2b28159d3a83b6f5d5db45519e470b1b834e378' Apr 30 00:55:33.910569 kernel: Key type .fscrypt registered Apr 30 00:55:33.910577 kernel: Key type fscrypt-provisioning registered Apr 30 00:55:33.910584 kernel: ima: No TPM chip found, activating TPM-bypass! Apr 30 00:55:33.910592 kernel: ima: Allocated hash algorithm: sha1 Apr 30 00:55:33.910600 kernel: ima: No architecture policies found Apr 30 00:55:33.910608 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Apr 30 00:55:33.910615 kernel: clk: Disabling unused clocks Apr 30 00:55:33.910625 kernel: Freeing unused kernel memory: 39424K Apr 30 00:55:33.910632 kernel: Run /init as init process Apr 30 00:55:33.910641 kernel: with arguments: Apr 30 00:55:33.910649 kernel: /init Apr 30 00:55:33.910657 kernel: with environment: Apr 30 00:55:33.910665 kernel: HOME=/ Apr 30 00:55:33.910672 kernel: TERM=linux Apr 30 00:55:33.910679 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Apr 30 00:55:33.910689 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Apr 30 00:55:33.910701 systemd[1]: Detected virtualization kvm. Apr 30 00:55:33.910709 systemd[1]: Detected architecture arm64. Apr 30 00:55:33.910717 systemd[1]: Running in initrd. 
Apr 30 00:55:33.910725 systemd[1]: No hostname configured, using default hostname.
Apr 30 00:55:33.910733 systemd[1]: Hostname set to .
Apr 30 00:55:33.910741 systemd[1]: Initializing machine ID from VM UUID.
Apr 30 00:55:33.910749 systemd[1]: Queued start job for default target initrd.target.
Apr 30 00:55:33.910758 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 30 00:55:33.910766 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 30 00:55:33.910775 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Apr 30 00:55:33.910783 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 30 00:55:33.910791 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Apr 30 00:55:33.910800 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Apr 30 00:55:33.910809 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Apr 30 00:55:33.910819 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Apr 30 00:55:33.910827 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 30 00:55:33.910836 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Apr 30 00:55:33.910844 systemd[1]: Reached target paths.target - Path Units.
Apr 30 00:55:33.910852 systemd[1]: Reached target slices.target - Slice Units.
Apr 30 00:55:33.910860 systemd[1]: Reached target swap.target - Swaps.
Apr 30 00:55:33.910868 systemd[1]: Reached target timers.target - Timer Units.
Apr 30 00:55:33.910876 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Apr 30 00:55:33.910886 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 30 00:55:33.910894 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Apr 30 00:55:33.910902 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Apr 30 00:55:33.910911 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 30 00:55:33.910919 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 30 00:55:33.910927 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 30 00:55:33.910935 systemd[1]: Reached target sockets.target - Socket Units.
Apr 30 00:55:33.910943 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Apr 30 00:55:33.910951 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 30 00:55:33.910961 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Apr 30 00:55:33.910969 systemd[1]: Starting systemd-fsck-usr.service...
Apr 30 00:55:33.910977 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 30 00:55:33.910986 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 30 00:55:33.910994 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 30 00:55:33.911002 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Apr 30 00:55:33.911010 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 30 00:55:33.911018 systemd[1]: Finished systemd-fsck-usr.service.
Apr 30 00:55:33.911047 systemd-journald[236]: Collecting audit messages is disabled.
Apr 30 00:55:33.912312 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Apr 30 00:55:33.912330 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Apr 30 00:55:33.912339 kernel: Bridge firewalling registered
Apr 30 00:55:33.912348 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Apr 30 00:55:33.912357 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 30 00:55:33.912369 systemd-journald[236]: Journal started
Apr 30 00:55:33.912396 systemd-journald[236]: Runtime Journal (/run/log/journal/85969b6e68c84859ab7b60557df998d9) is 8.0M, max 76.6M, 68.6M free.
Apr 30 00:55:33.884817 systemd-modules-load[237]: Inserted module 'overlay'
Apr 30 00:55:33.903583 systemd-modules-load[237]: Inserted module 'br_netfilter'
Apr 30 00:55:33.916989 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 30 00:55:33.920294 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 30 00:55:33.920327 systemd[1]: Started systemd-journald.service - Journal Service.
Apr 30 00:55:33.920940 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 30 00:55:33.927390 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Apr 30 00:55:33.935253 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Apr 30 00:55:33.940317 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 30 00:55:33.947690 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 30 00:55:33.951101 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 30 00:55:33.960509 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Apr 30 00:55:33.961764 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 30 00:55:33.965642 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Apr 30 00:55:33.984054 dracut-cmdline[271]: dracut-dracut-053
Apr 30 00:55:33.994407 dracut-cmdline[271]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=2f2ec97241771b99b21726307071be4f8c5924f9157dc58cd38c4fcfbe71412a
Apr 30 00:55:34.006877 systemd-resolved[273]: Positive Trust Anchors:
Apr 30 00:55:34.006890 systemd-resolved[273]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Apr 30 00:55:34.006922 systemd-resolved[273]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Apr 30 00:55:34.012547 systemd-resolved[273]: Defaulting to hostname 'linux'.
Apr 30 00:55:34.014502 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Apr 30 00:55:34.015161 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Apr 30 00:55:34.088313 kernel: SCSI subsystem initialized
Apr 30 00:55:34.093293 kernel: Loading iSCSI transport class v2.0-870.
Apr 30 00:55:34.101297 kernel: iscsi: registered transport (tcp)
Apr 30 00:55:34.114300 kernel: iscsi: registered transport (qla4xxx)
Apr 30 00:55:34.114472 kernel: QLogic iSCSI HBA Driver
Apr 30 00:55:34.167031 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Apr 30 00:55:34.176517 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Apr 30 00:55:34.199427 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Apr 30 00:55:34.199491 kernel: device-mapper: uevent: version 1.0.3
Apr 30 00:55:34.200326 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Apr 30 00:55:34.252340 kernel: raid6: neonx8 gen() 15555 MB/s
Apr 30 00:55:34.267313 kernel: raid6: neonx4 gen() 15477 MB/s
Apr 30 00:55:34.284315 kernel: raid6: neonx2 gen() 13066 MB/s
Apr 30 00:55:34.301312 kernel: raid6: neonx1 gen() 10403 MB/s
Apr 30 00:55:34.318310 kernel: raid6: int64x8 gen() 6899 MB/s
Apr 30 00:55:34.335326 kernel: raid6: int64x4 gen() 7306 MB/s
Apr 30 00:55:34.352324 kernel: raid6: int64x2 gen() 6055 MB/s
Apr 30 00:55:34.369332 kernel: raid6: int64x1 gen() 4946 MB/s
Apr 30 00:55:34.369408 kernel: raid6: using algorithm neonx8 gen() 15555 MB/s
Apr 30 00:55:34.386350 kernel: raid6: .... xor() 11690 MB/s, rmw enabled
Apr 30 00:55:34.386436 kernel: raid6: using neon recovery algorithm
Apr 30 00:55:34.391294 kernel: xor: measuring software checksum speed
Apr 30 00:55:34.391342 kernel: 8regs : 19354 MB/sec
Apr 30 00:55:34.391374 kernel: 32regs : 16776 MB/sec
Apr 30 00:55:34.392314 kernel: arm64_neon : 26441 MB/sec
Apr 30 00:55:34.392365 kernel: xor: using function: arm64_neon (26441 MB/sec)
Apr 30 00:55:34.444307 kernel: Btrfs loaded, zoned=no, fsverity=no
Apr 30 00:55:34.462404 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Apr 30 00:55:34.475597 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 30 00:55:34.492438 systemd-udevd[455]: Using default interface naming scheme 'v255'.
Apr 30 00:55:34.495925 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 30 00:55:34.503825 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Apr 30 00:55:34.519190 dracut-pre-trigger[465]: rd.md=0: removing MD RAID activation
Apr 30 00:55:34.554539 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 30 00:55:34.563503 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Apr 30 00:55:34.618322 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 30 00:55:34.626633 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Apr 30 00:55:34.645315 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Apr 30 00:55:34.648872 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 30 00:55:34.650429 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 30 00:55:34.651842 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Apr 30 00:55:34.659443 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Apr 30 00:55:34.686298 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Apr 30 00:55:34.707316 kernel: scsi host0: Virtio SCSI HBA
Apr 30 00:55:34.715327 kernel: scsi 0:0:0:0: CD-ROM QEMU QEMU CD-ROM 2.5+ PQ: 0 ANSI: 5
Apr 30 00:55:34.719277 kernel: scsi 0:0:0:1: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5
Apr 30 00:55:34.741734 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Apr 30 00:55:34.741852 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 30 00:55:34.743940 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 30 00:55:34.744523 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 30 00:55:34.744656 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 30 00:55:34.747407 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Apr 30 00:55:34.754480 kernel: sr 0:0:0:0: Power-on or device reset occurred
Apr 30 00:55:34.770744 kernel: ACPI: bus type USB registered
Apr 30 00:55:34.770766 kernel: usbcore: registered new interface driver usbfs
Apr 30 00:55:34.770777 kernel: usbcore: registered new interface driver hub
Apr 30 00:55:34.770787 kernel: usbcore: registered new device driver usb
Apr 30 00:55:34.770803 kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 16x/50x cd/rw xa/form2 cdda tray
Apr 30 00:55:34.770934 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Apr 30 00:55:34.770945 kernel: sr 0:0:0:0: Attached scsi CD-ROM sr0
Apr 30 00:55:34.754550 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 30 00:55:34.781361 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 30 00:55:34.784666 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller
Apr 30 00:55:34.800643 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 1
Apr 30 00:55:34.800767 kernel: xhci_hcd 0000:02:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010
Apr 30 00:55:34.800852 kernel: sd 0:0:0:1: Power-on or device reset occurred
Apr 30 00:55:34.800967 kernel: sd 0:0:0:1: [sda] 80003072 512-byte logical blocks: (41.0 GB/38.1 GiB)
Apr 30 00:55:34.801057 kernel: sd 0:0:0:1: [sda] Write Protect is off
Apr 30 00:55:34.801141 kernel: sd 0:0:0:1: [sda] Mode Sense: 63 00 00 08
Apr 30 00:55:34.801734 kernel: sd 0:0:0:1: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Apr 30 00:55:34.801867 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Apr 30 00:55:34.801885 kernel: GPT:17805311 != 80003071
Apr 30 00:55:34.801894 kernel: GPT:Alternate GPT header not at the end of the disk.
Apr 30 00:55:34.801904 kernel: GPT:17805311 != 80003071
Apr 30 00:55:34.801913 kernel: GPT: Use GNU Parted to correct GPT errors.
Apr 30 00:55:34.801922 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Apr 30 00:55:34.801931 kernel: sd 0:0:0:1: [sda] Attached SCSI disk
Apr 30 00:55:34.802018 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller
Apr 30 00:55:34.802108 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 2
Apr 30 00:55:34.802191 kernel: xhci_hcd 0000:02:00.0: Host supports USB 3.0 SuperSpeed
Apr 30 00:55:34.805173 kernel: hub 1-0:1.0: USB hub found
Apr 30 00:55:34.806429 kernel: hub 1-0:1.0: 4 ports detected
Apr 30 00:55:34.806551 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM.
Apr 30 00:55:34.806687 kernel: hub 2-0:1.0: USB hub found
Apr 30 00:55:34.806786 kernel: hub 2-0:1.0: 4 ports detected
Apr 30 00:55:34.792807 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 30 00:55:34.827430 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 30 00:55:34.851286 kernel: BTRFS: device fsid 7216ceb7-401c-42de-84de-44adb68241e4 devid 1 transid 39 /dev/sda3 scanned by (udev-worker) (522)
Apr 30 00:55:34.853319 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by (udev-worker) (514)
Apr 30 00:55:34.858636 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM.
Apr 30 00:55:34.869583 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT.
Apr 30 00:55:34.875472 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A.
Apr 30 00:55:34.876124 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A.
Apr 30 00:55:34.884706 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
Apr 30 00:55:34.892522 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Apr 30 00:55:34.900098 disk-uuid[575]: Primary Header is updated.
Apr 30 00:55:34.900098 disk-uuid[575]: Secondary Entries is updated.
Apr 30 00:55:34.900098 disk-uuid[575]: Secondary Header is updated.
Apr 30 00:55:34.908325 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Apr 30 00:55:34.915033 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Apr 30 00:55:35.040637 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd
Apr 30 00:55:35.284303 kernel: usb 1-2: new high-speed USB device number 3 using xhci_hcd
Apr 30 00:55:35.418387 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input1
Apr 30 00:55:35.418438 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:02:00.0-1/input0
Apr 30 00:55:35.420297 kernel: input: QEMU QEMU USB Keyboard as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-2/1-2:1.0/0003:0627:0001.0002/input/input2
Apr 30 00:55:35.473537 kernel: hid-generic 0003:0627:0001.0002: input,hidraw1: USB HID v1.11 Keyboard [QEMU QEMU USB Keyboard] on usb-0000:02:00.0-2/input0
Apr 30 00:55:35.473911 kernel: usbcore: registered new interface driver usbhid
Apr 30 00:55:35.473997 kernel: usbhid: USB HID core driver
Apr 30 00:55:35.918295 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Apr 30 00:55:35.919131 disk-uuid[577]: The operation has completed successfully.
Apr 30 00:55:35.969379 systemd[1]: disk-uuid.service: Deactivated successfully.
Apr 30 00:55:35.969485 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Apr 30 00:55:35.979486 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Apr 30 00:55:35.986668 sh[592]: Success
Apr 30 00:55:36.001422 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Apr 30 00:55:36.056398 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Apr 30 00:55:36.058304 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Apr 30 00:55:36.060250 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Apr 30 00:55:36.087357 kernel: BTRFS info (device dm-0): first mount of filesystem 7216ceb7-401c-42de-84de-44adb68241e4
Apr 30 00:55:36.087422 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Apr 30 00:55:36.087443 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Apr 30 00:55:36.087462 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Apr 30 00:55:36.088289 kernel: BTRFS info (device dm-0): using free space tree
Apr 30 00:55:36.094287 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Apr 30 00:55:36.095603 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Apr 30 00:55:36.096965 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Apr 30 00:55:36.107548 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Apr 30 00:55:36.111077 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Apr 30 00:55:36.122541 kernel: BTRFS info (device sda6): first mount of filesystem ece78588-c2c6-41f3-bdc2-614da63113c1
Apr 30 00:55:36.122589 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Apr 30 00:55:36.122603 kernel: BTRFS info (device sda6): using free space tree
Apr 30 00:55:36.126303 kernel: BTRFS info (device sda6): enabling ssd optimizations
Apr 30 00:55:36.126351 kernel: BTRFS info (device sda6): auto enabling async discard
Apr 30 00:55:36.138052 systemd[1]: mnt-oem.mount: Deactivated successfully.
Apr 30 00:55:36.139299 kernel: BTRFS info (device sda6): last unmount of filesystem ece78588-c2c6-41f3-bdc2-614da63113c1
Apr 30 00:55:36.146845 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Apr 30 00:55:36.155577 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Apr 30 00:55:36.253294 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Apr 30 00:55:36.258079 ignition[674]: Ignition 2.19.0
Apr 30 00:55:36.258093 ignition[674]: Stage: fetch-offline
Apr 30 00:55:36.264533 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Apr 30 00:55:36.258147 ignition[674]: no configs at "/usr/lib/ignition/base.d"
Apr 30 00:55:36.265922 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Apr 30 00:55:36.258158 ignition[674]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Apr 30 00:55:36.258379 ignition[674]: parsed url from cmdline: ""
Apr 30 00:55:36.258383 ignition[674]: no config URL provided
Apr 30 00:55:36.258388 ignition[674]: reading system config file "/usr/lib/ignition/user.ign"
Apr 30 00:55:36.258396 ignition[674]: no config at "/usr/lib/ignition/user.ign"
Apr 30 00:55:36.258402 ignition[674]: failed to fetch config: resource requires networking
Apr 30 00:55:36.258591 ignition[674]: Ignition finished successfully
Apr 30 00:55:36.288742 systemd-networkd[778]: lo: Link UP
Apr 30 00:55:36.288757 systemd-networkd[778]: lo: Gained carrier
Apr 30 00:55:36.291155 systemd-networkd[778]: Enumeration completed
Apr 30 00:55:36.291342 systemd[1]: Started systemd-networkd.service - Network Configuration.
Apr 30 00:55:36.292055 systemd[1]: Reached target network.target - Network.
Apr 30 00:55:36.292248 systemd-networkd[778]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 30 00:55:36.292253 systemd-networkd[778]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 30 00:55:36.293360 systemd-networkd[778]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 30 00:55:36.293364 systemd-networkd[778]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 30 00:55:36.294786 systemd-networkd[778]: eth0: Link UP
Apr 30 00:55:36.294790 systemd-networkd[778]: eth0: Gained carrier
Apr 30 00:55:36.294798 systemd-networkd[778]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 30 00:55:36.300542 systemd-networkd[778]: eth1: Link UP
Apr 30 00:55:36.300545 systemd-networkd[778]: eth1: Gained carrier
Apr 30 00:55:36.300553 systemd-networkd[778]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 30 00:55:36.303746 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Apr 30 00:55:36.319083 ignition[782]: Ignition 2.19.0
Apr 30 00:55:36.319304 ignition[782]: Stage: fetch
Apr 30 00:55:36.320646 ignition[782]: no configs at "/usr/lib/ignition/base.d"
Apr 30 00:55:36.320659 ignition[782]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Apr 30 00:55:36.320751 ignition[782]: parsed url from cmdline: ""
Apr 30 00:55:36.320754 ignition[782]: no config URL provided
Apr 30 00:55:36.320758 ignition[782]: reading system config file "/usr/lib/ignition/user.ign"
Apr 30 00:55:36.320765 ignition[782]: no config at "/usr/lib/ignition/user.ign"
Apr 30 00:55:36.320785 ignition[782]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #1
Apr 30 00:55:36.321541 ignition[782]: GET error: Get "http://169.254.169.254/hetzner/v1/userdata": dial tcp 169.254.169.254:80: connect: network is unreachable
Apr 30 00:55:36.331360 systemd-networkd[778]: eth1: DHCPv4 address 10.0.0.3/32, gateway 10.0.0.1 acquired from 10.0.0.1
Apr 30 00:55:36.354391 systemd-networkd[778]: eth0: DHCPv4 address 88.198.162.73/32, gateway 172.31.1.1 acquired from 172.31.1.1
Apr 30 00:55:36.521759 ignition[782]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #2
Apr 30 00:55:36.527202 ignition[782]: GET result: OK
Apr 30 00:55:36.527310 ignition[782]: parsing config with SHA512: cb0d0435408e9d9c4add373567a74a85dee0fac20b5af7c753501dfdc741e06b515701e081f2c7b13fffd76ab4854ac18fbe3a5451f91d6e05dcade1c1e56168
Apr 30 00:55:36.533138 unknown[782]: fetched base config from "system"
Apr 30 00:55:36.533147 unknown[782]: fetched base config from "system"
Apr 30 00:55:36.533631 ignition[782]: fetch: fetch complete
Apr 30 00:55:36.533152 unknown[782]: fetched user config from "hetzner"
Apr 30 00:55:36.533636 ignition[782]: fetch: fetch passed
Apr 30 00:55:36.536230 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Apr 30 00:55:36.533691 ignition[782]: Ignition finished successfully
Apr 30 00:55:36.542470 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Apr 30 00:55:36.555381 ignition[789]: Ignition 2.19.0
Apr 30 00:55:36.555391 ignition[789]: Stage: kargs
Apr 30 00:55:36.555560 ignition[789]: no configs at "/usr/lib/ignition/base.d"
Apr 30 00:55:36.555570 ignition[789]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Apr 30 00:55:36.556528 ignition[789]: kargs: kargs passed
Apr 30 00:55:36.559311 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Apr 30 00:55:36.556581 ignition[789]: Ignition finished successfully
Apr 30 00:55:36.568546 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Apr 30 00:55:36.585602 ignition[795]: Ignition 2.19.0
Apr 30 00:55:36.585613 ignition[795]: Stage: disks
Apr 30 00:55:36.585799 ignition[795]: no configs at "/usr/lib/ignition/base.d"
Apr 30 00:55:36.585811 ignition[795]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Apr 30 00:55:36.586777 ignition[795]: disks: disks passed
Apr 30 00:55:36.586829 ignition[795]: Ignition finished successfully
Apr 30 00:55:36.588920 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Apr 30 00:55:36.589796 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Apr 30 00:55:36.590721 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Apr 30 00:55:36.591646 systemd[1]: Reached target local-fs.target - Local File Systems.
Apr 30 00:55:36.592770 systemd[1]: Reached target sysinit.target - System Initialization.
Apr 30 00:55:36.593687 systemd[1]: Reached target basic.target - Basic System.
Apr 30 00:55:36.598489 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Apr 30 00:55:36.618163 systemd-fsck[803]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks
Apr 30 00:55:36.622329 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Apr 30 00:55:36.629520 systemd[1]: Mounting sysroot.mount - /sysroot...
Apr 30 00:55:36.672311 kernel: EXT4-fs (sda9): mounted filesystem c13301f3-70ec-4948-963a-f1db0e953273 r/w with ordered data mode. Quota mode: none.
Apr 30 00:55:36.672968 systemd[1]: Mounted sysroot.mount - /sysroot.
Apr 30 00:55:36.674303 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Apr 30 00:55:36.681422 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Apr 30 00:55:36.684703 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Apr 30 00:55:36.686510 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Apr 30 00:55:36.687152 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Apr 30 00:55:36.687181 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 30 00:55:36.698706 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by mount (811)
Apr 30 00:55:36.698759 kernel: BTRFS info (device sda6): first mount of filesystem ece78588-c2c6-41f3-bdc2-614da63113c1
Apr 30 00:55:36.698772 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Apr 30 00:55:36.698782 kernel: BTRFS info (device sda6): using free space tree
Apr 30 00:55:36.703760 kernel: BTRFS info (device sda6): enabling ssd optimizations
Apr 30 00:55:36.703804 kernel: BTRFS info (device sda6): auto enabling async discard
Apr 30 00:55:36.704561 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Apr 30 00:55:36.705968 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Apr 30 00:55:36.709563 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Apr 30 00:55:36.753565 coreos-metadata[813]: Apr 30 00:55:36.753 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/hostname: Attempt #1
Apr 30 00:55:36.755692 coreos-metadata[813]: Apr 30 00:55:36.755 INFO Fetch successful
Apr 30 00:55:36.757607 coreos-metadata[813]: Apr 30 00:55:36.756 INFO wrote hostname ci-4081-3-3-a-adb74c37b4 to /sysroot/etc/hostname
Apr 30 00:55:36.758956 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Apr 30 00:55:36.771345 initrd-setup-root[839]: cut: /sysroot/etc/passwd: No such file or directory
Apr 30 00:55:36.777883 initrd-setup-root[846]: cut: /sysroot/etc/group: No such file or directory
Apr 30 00:55:36.783550 initrd-setup-root[853]: cut: /sysroot/etc/shadow: No such file or directory
Apr 30 00:55:36.789299 initrd-setup-root[860]: cut: /sysroot/etc/gshadow: No such file or directory
Apr 30 00:55:36.891349 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Apr 30 00:55:36.896487 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Apr 30 00:55:36.902209 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Apr 30 00:55:36.911282 kernel: BTRFS info (device sda6): last unmount of filesystem ece78588-c2c6-41f3-bdc2-614da63113c1
Apr 30 00:55:36.926871 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Apr 30 00:55:36.932929 ignition[927]: INFO : Ignition 2.19.0
Apr 30 00:55:36.932929 ignition[927]: INFO : Stage: mount
Apr 30 00:55:36.934195 ignition[927]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 30 00:55:36.934195 ignition[927]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Apr 30 00:55:36.934195 ignition[927]: INFO : mount: mount passed
Apr 30 00:55:36.934195 ignition[927]: INFO : Ignition finished successfully
Apr 30 00:55:36.935695 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Apr 30 00:55:36.943414 systemd[1]: Starting ignition-files.service - Ignition (files)...
Apr 30 00:55:37.086735 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Apr 30 00:55:37.100632 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Apr 30 00:55:37.112313 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (939)
Apr 30 00:55:37.114428 kernel: BTRFS info (device sda6): first mount of filesystem ece78588-c2c6-41f3-bdc2-614da63113c1
Apr 30 00:55:37.114671 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Apr 30 00:55:37.114694 kernel: BTRFS info (device sda6): using free space tree
Apr 30 00:55:37.117296 kernel: BTRFS info (device sda6): enabling ssd optimizations
Apr 30 00:55:37.117345 kernel: BTRFS info (device sda6): auto enabling async discard
Apr 30 00:55:37.120458 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Apr 30 00:55:37.152173 ignition[956]: INFO : Ignition 2.19.0
Apr 30 00:55:37.152173 ignition[956]: INFO : Stage: files
Apr 30 00:55:37.153252 ignition[956]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 30 00:55:37.153252 ignition[956]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Apr 30 00:55:37.154894 ignition[956]: DEBUG : files: compiled without relabeling support, skipping
Apr 30 00:55:37.154894 ignition[956]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Apr 30 00:55:37.154894 ignition[956]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Apr 30 00:55:37.158733 ignition[956]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Apr 30 00:55:37.158733 ignition[956]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Apr 30 00:55:37.162134 ignition[956]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Apr 30 00:55:37.158962 unknown[956]: wrote ssh authorized keys file for user: core
Apr 30 00:55:37.165059 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
Apr 30 00:55:37.165059 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz: attempt #1
Apr 30 00:55:37.916822 systemd-networkd[778]: eth1: Gained IPv6LL
Apr 30 00:55:38.300811 systemd-networkd[778]: eth0: Gained IPv6LL
Apr 30 00:55:39.092392 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Apr 30 00:55:45.529884 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
Apr 30 00:55:45.532045 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Apr 30 00:55:45.532045 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1
Apr 30 00:55:46.108102 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Apr 30 00:55:46.184674 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Apr 30 00:55:46.184674 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Apr 30 00:55:46.186962 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Apr 30 00:55:46.186962 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Apr 30 00:55:46.186962 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Apr 30 00:55:46.186962 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 30 00:55:46.186962 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 30 00:55:46.186962 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 30 00:55:46.186962 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 30 00:55:46.186962 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Apr 30 00:55:46.186962 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Apr 30 00:55:46.186962 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw"
Apr 30 00:55:46.186962 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw"
Apr 30 00:55:46.186962 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw"
Apr 30 00:55:46.186962 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.32.0-arm64.raw: attempt #1
Apr 30 00:55:46.707041 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Apr 30 00:55:46.904033 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw"
Apr 30 00:55:46.904033 ignition[956]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Apr 30 00:55:46.907416 ignition[956]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 30 00:55:46.908747 ignition[956]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 30 00:55:46.908747 ignition[956]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Apr 30 00:55:46.908747 ignition[956]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
Apr 30 00:55:46.908747 ignition[956]: INFO : files: op(e): op(f): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Apr 30 00:55:46.908747 ignition[956]: INFO : files: op(e): op(f): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Apr 30 00:55:46.908747 ignition[956]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
Apr 30 00:55:46.908747 ignition[956]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service"
Apr 30 00:55:46.908747 ignition[956]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service"
Apr 30 00:55:46.908747 ignition[956]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json"
Apr 30 00:55:46.908747 ignition[956]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json"
Apr 30 00:55:46.908747 ignition[956]: INFO : files: files passed
Apr 30 00:55:46.908747 ignition[956]: INFO : Ignition finished successfully
Apr 30 00:55:46.909848 systemd[1]: Finished ignition-files.service - Ignition (files).
Apr 30 00:55:46.919594 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Apr 30 00:55:46.923472 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Apr 30 00:55:46.928002 systemd[1]: ignition-quench.service: Deactivated successfully.
Apr 30 00:55:46.928645 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Apr 30 00:55:46.936853 initrd-setup-root-after-ignition[985]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 30 00:55:46.936853 initrd-setup-root-after-ignition[985]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Apr 30 00:55:46.939049 initrd-setup-root-after-ignition[989]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 30 00:55:46.940848 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 30 00:55:46.941775 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Apr 30 00:55:46.953994 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Apr 30 00:55:46.985337 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Apr 30 00:55:46.985578 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Apr 30 00:55:46.988630 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Apr 30 00:55:46.989671 systemd[1]: Reached target initrd.target - Initrd Default Target.
Apr 30 00:55:46.990776 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Apr 30 00:55:46.992144 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Apr 30 00:55:47.009485 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 30 00:55:47.014478 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Apr 30 00:55:47.026249 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Apr 30 00:55:47.026940 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 30 00:55:47.028228 systemd[1]: Stopped target timers.target - Timer Units.
Apr 30 00:55:47.029390 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Apr 30 00:55:47.029505 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 30 00:55:47.030782 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Apr 30 00:55:47.031381 systemd[1]: Stopped target basic.target - Basic System.
Apr 30 00:55:47.032335 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Apr 30 00:55:47.033285 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 30 00:55:47.034234 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Apr 30 00:55:47.035212 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Apr 30 00:55:47.036224 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 30 00:55:47.037332 systemd[1]: Stopped target sysinit.target - System Initialization.
Apr 30 00:55:47.038268 systemd[1]: Stopped target local-fs.target - Local File Systems.
Apr 30 00:55:47.039325 systemd[1]: Stopped target swap.target - Swaps.
Apr 30 00:55:47.040164 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Apr 30 00:55:47.040298 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Apr 30 00:55:47.041546 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Apr 30 00:55:47.042556 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 30 00:55:47.043555 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Apr 30 00:55:47.043627 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 30 00:55:47.044664 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Apr 30 00:55:47.044781 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Apr 30 00:55:47.046197 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Apr 30 00:55:47.046334 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 30 00:55:47.047623 systemd[1]: ignition-files.service: Deactivated successfully.
Apr 30 00:55:47.047712 systemd[1]: Stopped ignition-files.service - Ignition (files).
Apr 30 00:55:47.048561 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Apr 30 00:55:47.048649 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Apr 30 00:55:47.062612 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Apr 30 00:55:47.065727 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Apr 30 00:55:47.070153 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Apr 30 00:55:47.070643 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 30 00:55:47.072607 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Apr 30 00:55:47.072705 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 30 00:55:47.081532 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Apr 30 00:55:47.081625 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Apr 30 00:55:47.087822 ignition[1009]: INFO : Ignition 2.19.0
Apr 30 00:55:47.087822 ignition[1009]: INFO : Stage: umount
Apr 30 00:55:47.089533 ignition[1009]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 30 00:55:47.089533 ignition[1009]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Apr 30 00:55:47.089533 ignition[1009]: INFO : umount: umount passed
Apr 30 00:55:47.089533 ignition[1009]: INFO : Ignition finished successfully
Apr 30 00:55:47.090813 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Apr 30 00:55:47.091904 systemd[1]: ignition-mount.service: Deactivated successfully.
Apr 30 00:55:47.092000 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Apr 30 00:55:47.094962 systemd[1]: ignition-disks.service: Deactivated successfully.
Apr 30 00:55:47.095005 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Apr 30 00:55:47.095795 systemd[1]: ignition-kargs.service: Deactivated successfully.
Apr 30 00:55:47.095835 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Apr 30 00:55:47.096675 systemd[1]: ignition-fetch.service: Deactivated successfully.
Apr 30 00:55:47.096714 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Apr 30 00:55:47.097549 systemd[1]: Stopped target network.target - Network.
Apr 30 00:55:47.098315 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Apr 30 00:55:47.098363 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Apr 30 00:55:47.099215 systemd[1]: Stopped target paths.target - Path Units.
Apr 30 00:55:47.099975 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Apr 30 00:55:47.104378 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 30 00:55:47.105539 systemd[1]: Stopped target slices.target - Slice Units.
Apr 30 00:55:47.106951 systemd[1]: Stopped target sockets.target - Socket Units.
Apr 30 00:55:47.108382 systemd[1]: iscsid.socket: Deactivated successfully.
Apr 30 00:55:47.108469 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Apr 30 00:55:47.110103 systemd[1]: iscsiuio.socket: Deactivated successfully.
Apr 30 00:55:47.110197 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 30 00:55:47.111536 systemd[1]: ignition-setup.service: Deactivated successfully.
Apr 30 00:55:47.111586 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Apr 30 00:55:47.112335 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Apr 30 00:55:47.112373 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Apr 30 00:55:47.113401 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Apr 30 00:55:47.114094 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Apr 30 00:55:47.117373 systemd[1]: sysroot-boot.service: Deactivated successfully.
Apr 30 00:55:47.117474 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Apr 30 00:55:47.118714 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Apr 30 00:55:47.118820 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Apr 30 00:55:47.120317 systemd-networkd[778]: eth0: DHCPv6 lease lost
Apr 30 00:55:47.124363 systemd-networkd[778]: eth1: DHCPv6 lease lost
Apr 30 00:55:47.126618 systemd[1]: systemd-networkd.service: Deactivated successfully.
Apr 30 00:55:47.126820 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Apr 30 00:55:47.128068 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Apr 30 00:55:47.128185 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Apr 30 00:55:47.133381 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Apr 30 00:55:47.133862 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Apr 30 00:55:47.133916 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Apr 30 00:55:47.134705 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 30 00:55:47.136990 systemd[1]: systemd-resolved.service: Deactivated successfully.
Apr 30 00:55:47.137085 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Apr 30 00:55:47.145237 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Apr 30 00:55:47.145386 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Apr 30 00:55:47.146551 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Apr 30 00:55:47.146600 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Apr 30 00:55:47.147218 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Apr 30 00:55:47.147252 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 30 00:55:47.152006 systemd[1]: systemd-udevd.service: Deactivated successfully.
Apr 30 00:55:47.152246 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 30 00:55:47.155918 systemd[1]: network-cleanup.service: Deactivated successfully.
Apr 30 00:55:47.156156 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Apr 30 00:55:47.158716 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Apr 30 00:55:47.158787 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Apr 30 00:55:47.159703 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Apr 30 00:55:47.159735 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 30 00:55:47.160655 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Apr 30 00:55:47.160697 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Apr 30 00:55:47.161975 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Apr 30 00:55:47.162011 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Apr 30 00:55:47.163357 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Apr 30 00:55:47.163403 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 30 00:55:47.169427 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Apr 30 00:55:47.170709 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Apr 30 00:55:47.171298 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 30 00:55:47.174437 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Apr 30 00:55:47.174520 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 30 00:55:47.176024 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Apr 30 00:55:47.176096 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 30 00:55:47.177773 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 30 00:55:47.177819 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 30 00:55:47.179438 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Apr 30 00:55:47.179522 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Apr 30 00:55:47.180956 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Apr 30 00:55:47.188450 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Apr 30 00:55:47.195972 systemd[1]: Switching root.
Apr 30 00:55:47.229895 systemd-journald[236]: Journal stopped
Apr 30 00:55:48.110703 systemd-journald[236]: Received SIGTERM from PID 1 (systemd).
Apr 30 00:55:48.110783 kernel: SELinux: policy capability network_peer_controls=1
Apr 30 00:55:48.110797 kernel: SELinux: policy capability open_perms=1
Apr 30 00:55:48.110806 kernel: SELinux: policy capability extended_socket_class=1
Apr 30 00:55:48.110820 kernel: SELinux: policy capability always_check_network=0
Apr 30 00:55:48.110834 kernel: SELinux: policy capability cgroup_seclabel=1
Apr 30 00:55:48.110844 kernel: SELinux: policy capability nnp_nosuid_transition=1
Apr 30 00:55:48.110853 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Apr 30 00:55:48.110862 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Apr 30 00:55:48.110876 kernel: audit: type=1403 audit(1745974547.399:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Apr 30 00:55:48.110886 systemd[1]: Successfully loaded SELinux policy in 33.679ms.
Apr 30 00:55:48.110908 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 11.554ms.
Apr 30 00:55:48.110919 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Apr 30 00:55:48.110931 systemd[1]: Detected virtualization kvm.
Apr 30 00:55:48.110941 systemd[1]: Detected architecture arm64.
Apr 30 00:55:48.110951 systemd[1]: Detected first boot.
Apr 30 00:55:48.110961 systemd[1]: Hostname set to .
Apr 30 00:55:48.110971 systemd[1]: Initializing machine ID from VM UUID.
Apr 30 00:55:48.110981 zram_generator::config[1052]: No configuration found.
Apr 30 00:55:48.110996 systemd[1]: Populated /etc with preset unit settings.
Apr 30 00:55:48.111006 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Apr 30 00:55:48.111018 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Apr 30 00:55:48.111029 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Apr 30 00:55:48.111040 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Apr 30 00:55:48.111051 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Apr 30 00:55:48.111061 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Apr 30 00:55:48.111071 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Apr 30 00:55:48.111081 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Apr 30 00:55:48.111092 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Apr 30 00:55:48.111104 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Apr 30 00:55:48.111155 systemd[1]: Created slice user.slice - User and Session Slice.
Apr 30 00:55:48.111169 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 30 00:55:48.111180 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 30 00:55:48.111191 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Apr 30 00:55:48.111201 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Apr 30 00:55:48.111211 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Apr 30 00:55:48.111222 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 30 00:55:48.111232 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Apr 30 00:55:48.111245 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 30 00:55:48.111256 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Apr 30 00:55:48.111364 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Apr 30 00:55:48.111377 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Apr 30 00:55:48.111387 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Apr 30 00:55:48.111398 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 30 00:55:48.111412 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Apr 30 00:55:48.111423 systemd[1]: Reached target slices.target - Slice Units.
Apr 30 00:55:48.111433 systemd[1]: Reached target swap.target - Swaps.
Apr 30 00:55:48.111444 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Apr 30 00:55:48.111454 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Apr 30 00:55:48.111464 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 30 00:55:48.111474 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 30 00:55:48.111484 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 30 00:55:48.111494 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Apr 30 00:55:48.111509 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Apr 30 00:55:48.111521 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Apr 30 00:55:48.111531 systemd[1]: Mounting media.mount - External Media Directory...
Apr 30 00:55:48.111542 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Apr 30 00:55:48.111555 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Apr 30 00:55:48.111567 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Apr 30 00:55:48.111580 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Apr 30 00:55:48.111590 systemd[1]: Reached target machines.target - Containers.
Apr 30 00:55:48.111601 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Apr 30 00:55:48.111611 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 30 00:55:48.111623 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 30 00:55:48.111633 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Apr 30 00:55:48.111644 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 30 00:55:48.111654 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Apr 30 00:55:48.111664 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 30 00:55:48.111676 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Apr 30 00:55:48.111686 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 30 00:55:48.111697 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Apr 30 00:55:48.111708 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Apr 30 00:55:48.111719 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Apr 30 00:55:48.111729 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Apr 30 00:55:48.111739 systemd[1]: Stopped systemd-fsck-usr.service.
Apr 30 00:55:48.111749 kernel: fuse: init (API version 7.39)
Apr 30 00:55:48.111761 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 30 00:55:48.111771 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 30 00:55:48.111781 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Apr 30 00:55:48.111792 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Apr 30 00:55:48.111806 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Apr 30 00:55:48.111817 systemd[1]: verity-setup.service: Deactivated successfully.
Apr 30 00:55:48.111827 systemd[1]: Stopped verity-setup.service.
Apr 30 00:55:48.111837 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Apr 30 00:55:48.111848 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Apr 30 00:55:48.111860 systemd[1]: Mounted media.mount - External Media Directory.
Apr 30 00:55:48.111871 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Apr 30 00:55:48.111881 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Apr 30 00:55:48.111891 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Apr 30 00:55:48.111902 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 30 00:55:48.111914 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Apr 30 00:55:48.111924 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Apr 30 00:55:48.111934 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 30 00:55:48.111974 systemd-journald[1119]: Collecting audit messages is disabled.
Apr 30 00:55:48.112003 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 30 00:55:48.112015 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 30 00:55:48.112026 systemd-journald[1119]: Journal started
Apr 30 00:55:48.112049 systemd-journald[1119]: Runtime Journal (/run/log/journal/85969b6e68c84859ab7b60557df998d9) is 8.0M, max 76.6M, 68.6M free.
Apr 30 00:55:47.884811 systemd[1]: Queued start job for default target multi-user.target.
Apr 30 00:55:47.906712 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Apr 30 00:55:47.907403 systemd[1]: systemd-journald.service: Deactivated successfully.
Apr 30 00:55:48.115282 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 30 00:55:48.115361 systemd[1]: Started systemd-journald.service - Journal Service.
Apr 30 00:55:48.117334 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Apr 30 00:55:48.118330 kernel: loop: module loaded
Apr 30 00:55:48.118412 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Apr 30 00:55:48.120698 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 30 00:55:48.121057 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 30 00:55:48.123326 kernel: ACPI: bus type drm_connector registered
Apr 30 00:55:48.125743 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Apr 30 00:55:48.127255 systemd[1]: modprobe@drm.service: Deactivated successfully.
Apr 30 00:55:48.129878 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Apr 30 00:55:48.138515 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Apr 30 00:55:48.147493 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Apr 30 00:55:48.148658 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Apr 30 00:55:48.155624 systemd[1]: Reached target network-pre.target - Preparation for Network.
Apr 30 00:55:48.163476 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Apr 30 00:55:48.167775 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Apr 30 00:55:48.168493 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Apr 30 00:55:48.168610 systemd[1]: Reached target local-fs.target - Local File Systems.
Apr 30 00:55:48.170327 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Apr 30 00:55:48.177963 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Apr 30 00:55:48.181664 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Apr 30 00:55:48.182494 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 30 00:55:48.187470 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Apr 30 00:55:48.198564 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Apr 30 00:55:48.200238 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 30 00:55:48.203584 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Apr 30 00:55:48.206531 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Apr 30 00:55:48.212467 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 30 00:55:48.216535 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Apr 30 00:55:48.222568 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Apr 30 00:55:48.226923 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Apr 30 00:55:48.229526 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Apr 30 00:55:48.230496 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Apr 30 00:55:48.232883 systemd-journald[1119]: Time spent on flushing to /var/log/journal/85969b6e68c84859ab7b60557df998d9 is 71.178ms for 1132 entries.
Apr 30 00:55:48.232883 systemd-journald[1119]: System Journal (/var/log/journal/85969b6e68c84859ab7b60557df998d9) is 8.0M, max 584.8M, 576.8M free.
Apr 30 00:55:48.324380 systemd-journald[1119]: Received client request to flush runtime journal.
Apr 30 00:55:48.324423 kernel: loop0: detected capacity change from 0 to 8
Apr 30 00:55:48.324437 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Apr 30 00:55:48.324448 kernel: loop1: detected capacity change from 0 to 114432
Apr 30 00:55:48.265884 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Apr 30 00:55:48.266695 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Apr 30 00:55:48.273569 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Apr 30 00:55:48.298353 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 30 00:55:48.311918 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Apr 30 00:55:48.324088 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 30 00:55:48.327595 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Apr 30 00:55:48.341910 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Apr 30 00:55:48.346375 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Apr 30 00:55:48.347505 systemd-tmpfiles[1167]: ACLs are not supported, ignoring.
Apr 30 00:55:48.347519 systemd-tmpfiles[1167]: ACLs are not supported, ignoring.
Apr 30 00:55:48.354601 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 30 00:55:48.366528 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Apr 30 00:55:48.372481 udevadm[1178]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Apr 30 00:55:48.381410 kernel: loop2: detected capacity change from 0 to 114328
Apr 30 00:55:48.410331 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Apr 30 00:55:48.419780 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Apr 30 00:55:48.420520 kernel: loop3: detected capacity change from 0 to 201592
Apr 30 00:55:48.454837 systemd-tmpfiles[1190]: ACLs are not supported, ignoring.
Apr 30 00:55:48.454856 systemd-tmpfiles[1190]: ACLs are not supported, ignoring.
Apr 30 00:55:48.462040 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 30 00:55:48.474287 kernel: loop4: detected capacity change from 0 to 8
Apr 30 00:55:48.477350 kernel: loop5: detected capacity change from 0 to 114432
Apr 30 00:55:48.492291 kernel: loop6: detected capacity change from 0 to 114328
Apr 30 00:55:48.507480 kernel: loop7: detected capacity change from 0 to 201592
Apr 30 00:55:48.534755 (sd-merge)[1194]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-hetzner'.
Apr 30 00:55:48.535237 (sd-merge)[1194]: Merged extensions into '/usr'.
Apr 30 00:55:48.544863 systemd[1]: Reloading requested from client PID 1166 ('systemd-sysext') (unit systemd-sysext.service)...
Apr 30 00:55:48.544889 systemd[1]: Reloading...
Apr 30 00:55:48.667442 zram_generator::config[1220]: No configuration found.
Apr 30 00:55:48.752971 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 30 00:55:48.810919 systemd[1]: Reloading finished in 265 ms.
Apr 30 00:55:48.814619 ldconfig[1161]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Apr 30 00:55:48.837357 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Apr 30 00:55:48.838661 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Apr 30 00:55:48.849529 systemd[1]: Starting ensure-sysext.service...
Apr 30 00:55:48.852633 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Apr 30 00:55:48.867467 systemd[1]: Reloading requested from client PID 1258 ('systemctl') (unit ensure-sysext.service)...
Apr 30 00:55:48.867496 systemd[1]: Reloading...
Apr 30 00:55:48.888677 systemd-tmpfiles[1259]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Apr 30 00:55:48.888929 systemd-tmpfiles[1259]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Apr 30 00:55:48.893795 systemd-tmpfiles[1259]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Apr 30 00:55:48.894238 systemd-tmpfiles[1259]: ACLs are not supported, ignoring.
Apr 30 00:55:48.894428 systemd-tmpfiles[1259]: ACLs are not supported, ignoring.
Apr 30 00:55:48.899792 systemd-tmpfiles[1259]: Detected autofs mount point /boot during canonicalization of boot.
Apr 30 00:55:48.900451 systemd-tmpfiles[1259]: Skipping /boot
Apr 30 00:55:48.913952 systemd-tmpfiles[1259]: Detected autofs mount point /boot during canonicalization of boot.
Apr 30 00:55:48.913963 systemd-tmpfiles[1259]: Skipping /boot
Apr 30 00:55:48.970286 zram_generator::config[1286]: No configuration found.
Apr 30 00:55:49.070514 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 30 00:55:49.116503 systemd[1]: Reloading finished in 248 ms.
Apr 30 00:55:49.135888 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Apr 30 00:55:49.140688 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 30 00:55:49.154510 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Apr 30 00:55:49.162505 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Apr 30 00:55:49.169608 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Apr 30 00:55:49.173603 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Apr 30 00:55:49.178997 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 30 00:55:49.192658 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Apr 30 00:55:49.207614 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Apr 30 00:55:49.217395 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 30 00:55:49.224160 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 30 00:55:49.229898 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 30 00:55:49.234677 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 30 00:55:49.235537 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 30 00:55:49.237524 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Apr 30 00:55:49.239933 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 30 00:55:49.240077 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 30 00:55:49.247954 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Apr 30 00:55:49.260056 systemd-udevd[1336]: Using default interface naming scheme 'v255'.
Apr 30 00:55:49.263998 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Apr 30 00:55:49.268010 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 30 00:55:49.275845 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 30 00:55:49.277549 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 30 00:55:49.280758 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 30 00:55:49.300090 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Apr 30 00:55:49.300741 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 30 00:55:49.301252 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 30 00:55:49.304341 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Apr 30 00:55:49.307387 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 30 00:55:49.307539 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 30 00:55:49.308675 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 30 00:55:49.308814 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 30 00:55:49.329833 systemd[1]: Finished ensure-sysext.service.
Apr 30 00:55:49.338454 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Apr 30 00:55:49.339807 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 30 00:55:49.348410 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Apr 30 00:55:49.348981 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Apr 30 00:55:49.364804 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Apr 30 00:55:49.370872 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 30 00:55:49.371024 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 30 00:55:49.374276 systemd[1]: modprobe@drm.service: Deactivated successfully.
Apr 30 00:55:49.375331 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Apr 30 00:55:49.376473 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Apr 30 00:55:49.376725 augenrules[1383]: No rules
Apr 30 00:55:49.380980 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Apr 30 00:55:49.413347 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Apr 30 00:55:49.430002 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped.
Apr 30 00:55:49.529678 systemd-resolved[1335]: Positive Trust Anchors:
Apr 30 00:55:49.531842 systemd-resolved[1335]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Apr 30 00:55:49.532000 systemd-resolved[1335]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Apr 30 00:55:49.539332 systemd-resolved[1335]: Using system hostname 'ci-4081-3-3-a-adb74c37b4'.
Apr 30 00:55:49.542883 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Apr 30 00:55:49.544428 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Apr 30 00:55:49.545057 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Apr 30 00:55:49.546180 systemd[1]: Reached target time-set.target - System Time Set.
Apr 30 00:55:49.548156 systemd-networkd[1381]: lo: Link UP
Apr 30 00:55:49.548167 systemd-networkd[1381]: lo: Gained carrier
Apr 30 00:55:49.550233 systemd-networkd[1381]: Enumeration completed
Apr 30 00:55:49.550346 systemd[1]: Started systemd-networkd.service - Network Configuration.
Apr 30 00:55:49.551416 systemd[1]: Reached target network.target - Network.
Apr 30 00:55:49.552372 systemd-timesyncd[1382]: No network connectivity, watching for changes.
Apr 30 00:55:49.554370 systemd-networkd[1381]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 30 00:55:49.554377 systemd-networkd[1381]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 30 00:55:49.555963 systemd-networkd[1381]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 30 00:55:49.555976 systemd-networkd[1381]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 30 00:55:49.556725 systemd-networkd[1381]: eth0: Link UP
Apr 30 00:55:49.556735 systemd-networkd[1381]: eth0: Gained carrier
Apr 30 00:55:49.556752 systemd-networkd[1381]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 30 00:55:49.557624 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Apr 30 00:55:49.562555 systemd-networkd[1381]: eth1: Link UP
Apr 30 00:55:49.562563 systemd-networkd[1381]: eth1: Gained carrier
Apr 30 00:55:49.562580 systemd-networkd[1381]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 30 00:55:49.576828 kernel: mousedev: PS/2 mouse device common for all mice
Apr 30 00:55:49.590562 systemd-networkd[1381]: eth1: DHCPv4 address 10.0.0.3/32, gateway 10.0.0.1 acquired from 10.0.0.1
Apr 30 00:55:49.592160 systemd-timesyncd[1382]: Network configuration changed, trying to establish connection.
Apr 30 00:55:49.602287 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 39 scanned by (udev-worker) (1357)
Apr 30 00:55:49.617331 systemd-networkd[1381]: eth0: DHCPv4 address 88.198.162.73/32, gateway 172.31.1.1 acquired from 172.31.1.1
Apr 30 00:55:49.620484 systemd-timesyncd[1382]: Network configuration changed, trying to establish connection.
Apr 30 00:55:49.634630 systemd[1]: Condition check resulted in dev-virtio\x2dports-org.qemu.guest_agent.0.device - /dev/virtio-ports/org.qemu.guest_agent.0 being skipped.
Apr 30 00:55:49.634756 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 30 00:55:49.641529 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 30 00:55:49.644544 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 30 00:55:49.648158 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 30 00:55:49.648782 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 30 00:55:49.648823 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Apr 30 00:55:49.650165 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 30 00:55:49.652486 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 30 00:55:49.671002 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
Apr 30 00:55:49.684485 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Apr 30 00:55:49.686305 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 30 00:55:49.688440 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 30 00:55:49.689756 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 30 00:55:49.689926 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 30 00:55:49.694646 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 30 00:55:49.694722 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Apr 30 00:55:49.709300 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Apr 30 00:55:49.719504 kernel: [drm] pci: virtio-gpu-pci detected at 0000:00:01.0
Apr 30 00:55:49.719588 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Apr 30 00:55:49.719610 kernel: [drm] features: -context_init
Apr 30 00:55:49.724277 kernel: [drm] number of scanouts: 1
Apr 30 00:55:49.724361 kernel: [drm] number of cap sets: 0
Apr 30 00:55:49.732034 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:01.0 on minor 0
Apr 30 00:55:49.734576 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 30 00:55:49.749741 kernel: Console: switching to colour frame buffer device 160x50
Apr 30 00:55:49.757582 kernel: virtio-pci 0000:00:01.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Apr 30 00:55:49.771242 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 30 00:55:49.773319 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 30 00:55:49.779481 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 30 00:55:49.831644 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 30 00:55:49.927379 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Apr 30 00:55:49.936508 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Apr 30 00:55:49.950777 lvm[1443]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Apr 30 00:55:49.981164 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Apr 30 00:55:49.983447 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Apr 30 00:55:49.984907 systemd[1]: Reached target sysinit.target - System Initialization.
Apr 30 00:55:49.986737 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Apr 30 00:55:49.987553 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Apr 30 00:55:49.988377 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Apr 30 00:55:49.989019 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Apr 30 00:55:49.989692 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Apr 30 00:55:49.990320 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Apr 30 00:55:49.990357 systemd[1]: Reached target paths.target - Path Units.
Apr 30 00:55:49.990791 systemd[1]: Reached target timers.target - Timer Units.
Apr 30 00:55:49.993002 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Apr 30 00:55:49.995174 systemd[1]: Starting docker.socket - Docker Socket for the API...
Apr 30 00:55:50.000112 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Apr 30 00:55:50.002344 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Apr 30 00:55:50.003680 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Apr 30 00:55:50.004336 systemd[1]: Reached target sockets.target - Socket Units.
Apr 30 00:55:50.004802 systemd[1]: Reached target basic.target - Basic System.
Apr 30 00:55:50.005317 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Apr 30 00:55:50.005343 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Apr 30 00:55:50.009252 systemd[1]: Starting containerd.service - containerd container runtime...
Apr 30 00:55:50.017344 lvm[1447]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Apr 30 00:55:50.014456 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Apr 30 00:55:50.017856 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Apr 30 00:55:50.021367 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Apr 30 00:55:50.023637 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Apr 30 00:55:50.024818 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Apr 30 00:55:50.027463 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Apr 30 00:55:50.040432 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Apr 30 00:55:50.045735 systemd[1]: Started qemu-guest-agent.service - QEMU Guest Agent.
Apr 30 00:55:50.056255 jq[1452]: false
Apr 30 00:55:50.056514 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Apr 30 00:55:50.062641 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Apr 30 00:55:50.067947 systemd[1]: Starting systemd-logind.service - User Login Management...
Apr 30 00:55:50.074394 coreos-metadata[1449]: Apr 30 00:55:50.072 INFO Fetching http://169.254.169.254/hetzner/v1/metadata: Attempt #1
Apr 30 00:55:50.070600 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Apr 30 00:55:50.071059 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Apr 30 00:55:50.075496 systemd[1]: Starting update-engine.service - Update Engine...
Apr 30 00:55:50.079827 dbus-daemon[1450]: [system] SELinux support is enabled
Apr 30 00:55:50.087501 coreos-metadata[1449]: Apr 30 00:55:50.075 INFO Fetch successful
Apr 30 00:55:50.087501 coreos-metadata[1449]: Apr 30 00:55:50.075 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/private-networks: Attempt #1
Apr 30 00:55:50.087501 coreos-metadata[1449]: Apr 30 00:55:50.078 INFO Fetch successful
Apr 30 00:55:50.086506 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Apr 30 00:55:50.087659 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Apr 30 00:55:50.092695 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Apr 30 00:55:50.101253 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Apr 30 00:55:50.102093 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Apr 30 00:55:50.116992 jq[1462]: true
Apr 30 00:55:50.118418 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Apr 30 00:55:50.118466 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Apr 30 00:55:50.119453 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Apr 30 00:55:50.122387 extend-filesystems[1453]: Found loop4
Apr 30 00:55:50.122387 extend-filesystems[1453]: Found loop5
Apr 30 00:55:50.122387 extend-filesystems[1453]: Found loop6
Apr 30 00:55:50.122387 extend-filesystems[1453]: Found loop7
Apr 30 00:55:50.122387 extend-filesystems[1453]: Found sda
Apr 30 00:55:50.122387 extend-filesystems[1453]: Found sda1
Apr 30 00:55:50.122387 extend-filesystems[1453]: Found sda2
Apr 30 00:55:50.122387 extend-filesystems[1453]: Found sda3
Apr 30 00:55:50.122387 extend-filesystems[1453]: Found usr
Apr 30 00:55:50.122387 extend-filesystems[1453]: Found sda4
Apr 30 00:55:50.122387 extend-filesystems[1453]: Found sda6
Apr 30 00:55:50.122387 extend-filesystems[1453]: Found sda7
Apr 30 00:55:50.122387 extend-filesystems[1453]: Found sda9
Apr 30 00:55:50.122387 extend-filesystems[1453]: Checking size of /dev/sda9
Apr 30 00:55:50.119485 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Apr 30 00:55:50.122865 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Apr 30 00:55:50.124638 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Apr 30 00:55:50.160395 jq[1478]: true
Apr 30 00:55:50.168918 tar[1467]: linux-arm64/LICENSE
Apr 30 00:55:50.169972 tar[1467]: linux-arm64/helm
Apr 30 00:55:50.177197 systemd[1]: motdgen.service: Deactivated successfully.
Apr 30 00:55:50.178384 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Apr 30 00:55:50.188220 extend-filesystems[1453]: Resized partition /dev/sda9
Apr 30 00:55:50.203970 extend-filesystems[1499]: resize2fs 1.47.1 (20-May-2024)
Apr 30 00:55:50.204196 (ntainerd)[1485]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Apr 30 00:55:50.211275 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 9393147 blocks
Apr 30 00:55:50.225930 systemd-logind[1460]: New seat seat0.
Apr 30 00:55:50.235242 update_engine[1461]: I20250430 00:55:50.233237 1461 main.cc:92] Flatcar Update Engine starting Apr 30 00:55:50.236802 systemd-logind[1460]: Watching system buttons on /dev/input/event0 (Power Button) Apr 30 00:55:50.236830 systemd-logind[1460]: Watching system buttons on /dev/input/event2 (QEMU QEMU USB Keyboard) Apr 30 00:55:50.237027 systemd[1]: Started systemd-logind.service - User Login Management. Apr 30 00:55:50.244413 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Apr 30 00:55:50.251635 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Apr 30 00:55:50.256958 systemd[1]: Started update-engine.service - Update Engine. Apr 30 00:55:50.263343 update_engine[1461]: I20250430 00:55:50.261077 1461 update_check_scheduler.cc:74] Next update check in 5m37s Apr 30 00:55:50.273607 systemd[1]: Started locksmithd.service - Cluster reboot manager. Apr 30 00:55:50.327787 bash[1519]: Updated "/home/core/.ssh/authorized_keys" Apr 30 00:55:50.333676 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Apr 30 00:55:50.345002 systemd[1]: Starting sshkeys.service... Apr 30 00:55:50.390377 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 39 scanned by (udev-worker) (1376) Apr 30 00:55:50.412719 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Apr 30 00:55:50.418280 kernel: EXT4-fs (sda9): resized filesystem to 9393147 Apr 30 00:55:50.438237 extend-filesystems[1499]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Apr 30 00:55:50.438237 extend-filesystems[1499]: old_desc_blocks = 1, new_desc_blocks = 5 Apr 30 00:55:50.438237 extend-filesystems[1499]: The filesystem on /dev/sda9 is now 9393147 (4k) blocks long. 
Apr 30 00:55:50.447366 extend-filesystems[1453]: Resized filesystem in /dev/sda9
Apr 30 00:55:50.447366 extend-filesystems[1453]: Found sr0
Apr 30 00:55:50.442882 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Apr 30 00:55:50.444410 systemd[1]: extend-filesystems.service: Deactivated successfully.
Apr 30 00:55:50.446297 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Apr 30 00:55:50.490521 coreos-metadata[1525]: Apr 30 00:55:50.490 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/public-keys: Attempt #1
Apr 30 00:55:50.492958 coreos-metadata[1525]: Apr 30 00:55:50.492 INFO Fetch successful
Apr 30 00:55:50.497499 unknown[1525]: wrote ssh authorized keys file for user: core
Apr 30 00:55:50.532858 update-ssh-keys[1537]: Updated "/home/core/.ssh/authorized_keys"
Apr 30 00:55:50.533577 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Apr 30 00:55:50.536983 systemd[1]: Finished sshkeys.service.
Apr 30 00:55:50.580761 containerd[1485]: time="2025-04-30T00:55:50.580661400Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Apr 30 00:55:50.592924 locksmithd[1515]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Apr 30 00:55:50.616273 containerd[1485]: time="2025-04-30T00:55:50.616141520Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Apr 30 00:55:50.618605 containerd[1485]: time="2025-04-30T00:55:50.618551120Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.88-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Apr 30 00:55:50.618605 containerd[1485]: time="2025-04-30T00:55:50.618600760Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Apr 30 00:55:50.618708 containerd[1485]: time="2025-04-30T00:55:50.618618720Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Apr 30 00:55:50.620265 containerd[1485]: time="2025-04-30T00:55:50.618785840Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Apr 30 00:55:50.620265 containerd[1485]: time="2025-04-30T00:55:50.618816040Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Apr 30 00:55:50.620265 containerd[1485]: time="2025-04-30T00:55:50.618879040Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Apr 30 00:55:50.620265 containerd[1485]: time="2025-04-30T00:55:50.618890040Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Apr 30 00:55:50.620265 containerd[1485]: time="2025-04-30T00:55:50.619187640Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Apr 30 00:55:50.620265 containerd[1485]: time="2025-04-30T00:55:50.619210400Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Apr 30 00:55:50.620265 containerd[1485]: time="2025-04-30T00:55:50.619225680Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Apr 30 00:55:50.620265 containerd[1485]: time="2025-04-30T00:55:50.619235000Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Apr 30 00:55:50.620265 containerd[1485]: time="2025-04-30T00:55:50.619356040Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Apr 30 00:55:50.620265 containerd[1485]: time="2025-04-30T00:55:50.619619640Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Apr 30 00:55:50.620265 containerd[1485]: time="2025-04-30T00:55:50.619821600Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Apr 30 00:55:50.620475 containerd[1485]: time="2025-04-30T00:55:50.619839440Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Apr 30 00:55:50.620475 containerd[1485]: time="2025-04-30T00:55:50.619932760Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Apr 30 00:55:50.620475 containerd[1485]: time="2025-04-30T00:55:50.619971600Z" level=info msg="metadata content store policy set" policy=shared
Apr 30 00:55:50.625403 containerd[1485]: time="2025-04-30T00:55:50.625042880Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Apr 30 00:55:50.625529 containerd[1485]: time="2025-04-30T00:55:50.625437200Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Apr 30 00:55:50.625554 containerd[1485]: time="2025-04-30T00:55:50.625543400Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Apr 30 00:55:50.625573 containerd[1485]: time="2025-04-30T00:55:50.625560840Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Apr 30 00:55:50.626272 containerd[1485]: time="2025-04-30T00:55:50.625642360Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Apr 30 00:55:50.626485 containerd[1485]: time="2025-04-30T00:55:50.626461880Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Apr 30 00:55:50.626940 containerd[1485]: time="2025-04-30T00:55:50.626910640Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Apr 30 00:55:50.627325 containerd[1485]: time="2025-04-30T00:55:50.627301640Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Apr 30 00:55:50.627353 containerd[1485]: time="2025-04-30T00:55:50.627331280Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Apr 30 00:55:50.627426 containerd[1485]: time="2025-04-30T00:55:50.627408120Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Apr 30 00:55:50.627451 containerd[1485]: time="2025-04-30T00:55:50.627436560Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Apr 30 00:55:50.627480 containerd[1485]: time="2025-04-30T00:55:50.627451400Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Apr 30 00:55:50.627480 containerd[1485]: time="2025-04-30T00:55:50.627464760Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Apr 30 00:55:50.627637 containerd[1485]: time="2025-04-30T00:55:50.627615480Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Apr 30 00:55:50.627663 containerd[1485]: time="2025-04-30T00:55:50.627643080Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Apr 30 00:55:50.627682 containerd[1485]: time="2025-04-30T00:55:50.627668080Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Apr 30 00:55:50.627805 containerd[1485]: time="2025-04-30T00:55:50.627788760Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Apr 30 00:55:50.627838 containerd[1485]: time="2025-04-30T00:55:50.627809680Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Apr 30 00:55:50.627870 containerd[1485]: time="2025-04-30T00:55:50.627845400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Apr 30 00:55:50.627897 containerd[1485]: time="2025-04-30T00:55:50.627875600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Apr 30 00:55:50.627897 containerd[1485]: time="2025-04-30T00:55:50.627890560Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Apr 30 00:55:50.627938 containerd[1485]: time="2025-04-30T00:55:50.627903880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Apr 30 00:55:50.627938 containerd[1485]: time="2025-04-30T00:55:50.627916680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Apr 30 00:55:50.628197 containerd[1485]: time="2025-04-30T00:55:50.628175200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Apr 30 00:55:50.628225 containerd[1485]: time="2025-04-30T00:55:50.628207200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Apr 30 00:55:50.628249 containerd[1485]: time="2025-04-30T00:55:50.628224880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Apr 30 00:55:50.628280 containerd[1485]: time="2025-04-30T00:55:50.628250280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Apr 30 00:55:50.628308 containerd[1485]: time="2025-04-30T00:55:50.628283440Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Apr 30 00:55:50.628308 containerd[1485]: time="2025-04-30T00:55:50.628302760Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Apr 30 00:55:50.628347 containerd[1485]: time="2025-04-30T00:55:50.628317240Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Apr 30 00:55:50.628398 containerd[1485]: time="2025-04-30T00:55:50.628332560Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Apr 30 00:55:50.628422 containerd[1485]: time="2025-04-30T00:55:50.628406200Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Apr 30 00:55:50.628441 containerd[1485]: time="2025-04-30T00:55:50.628434200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Apr 30 00:55:50.628642 containerd[1485]: time="2025-04-30T00:55:50.628447200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Apr 30 00:55:50.628672 containerd[1485]: time="2025-04-30T00:55:50.628644120Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Apr 30 00:55:50.629121 containerd[1485]: time="2025-04-30T00:55:50.629089280Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Apr 30 00:55:50.629384 containerd[1485]: time="2025-04-30T00:55:50.629130160Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Apr 30 00:55:50.629419 containerd[1485]: time="2025-04-30T00:55:50.629385280Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Apr 30 00:55:50.629419 containerd[1485]: time="2025-04-30T00:55:50.629402040Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Apr 30 00:55:50.629419 containerd[1485]: time="2025-04-30T00:55:50.629412000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Apr 30 00:55:50.629470 containerd[1485]: time="2025-04-30T00:55:50.629425640Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Apr 30 00:55:50.629770 containerd[1485]: time="2025-04-30T00:55:50.629752120Z" level=info msg="NRI interface is disabled by configuration."
Apr 30 00:55:50.629794 containerd[1485]: time="2025-04-30T00:55:50.629772720Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Apr 30 00:55:50.630581 containerd[1485]: time="2025-04-30T00:55:50.630485040Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Apr 30 00:55:50.630690 containerd[1485]: time="2025-04-30T00:55:50.630648240Z" level=info msg="Connect containerd service"
Apr 30 00:55:50.630721 containerd[1485]: time="2025-04-30T00:55:50.630694200Z" level=info msg="using legacy CRI server"
Apr 30 00:55:50.630721 containerd[1485]: time="2025-04-30T00:55:50.630702080Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Apr 30 00:55:50.631079 containerd[1485]: time="2025-04-30T00:55:50.631057040Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Apr 30 00:55:50.636390 containerd[1485]: time="2025-04-30T00:55:50.636348680Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Apr 30 00:55:50.638933 containerd[1485]: time="2025-04-30T00:55:50.638883240Z" level=info msg="Start subscribing containerd event"
Apr 30 00:55:50.638985 containerd[1485]: time="2025-04-30T00:55:50.638955800Z" level=info msg="Start recovering state"
Apr 30 00:55:50.639046 containerd[1485]: time="2025-04-30T00:55:50.639028920Z" level=info msg="Start event monitor"
Apr 30 00:55:50.639071 containerd[1485]: time="2025-04-30T00:55:50.639064200Z" level=info msg="Start snapshots syncer"
Apr 30 00:55:50.639094 containerd[1485]: time="2025-04-30T00:55:50.639074480Z" level=info msg="Start cni network conf syncer for default"
Apr 30 00:55:50.639094 containerd[1485]: time="2025-04-30T00:55:50.639082560Z" level=info msg="Start streaming server"
Apr 30 00:55:50.641228 containerd[1485]: time="2025-04-30T00:55:50.639733520Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Apr 30 00:55:50.641228 containerd[1485]: time="2025-04-30T00:55:50.639796840Z" level=info msg=serving... address=/run/containerd/containerd.sock
Apr 30 00:55:50.641228 containerd[1485]: time="2025-04-30T00:55:50.639847800Z" level=info msg="containerd successfully booted in 0.060736s"
Apr 30 00:55:50.639948 systemd[1]: Started containerd.service - containerd container runtime.
Apr 30 00:55:50.854991 sshd_keygen[1493]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Apr 30 00:55:50.879318 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Apr 30 00:55:50.888885 systemd[1]: Starting issuegen.service - Generate /run/issue...
Apr 30 00:55:50.897838 systemd[1]: issuegen.service: Deactivated successfully.
Apr 30 00:55:50.898042 systemd[1]: Finished issuegen.service - Generate /run/issue.
Apr 30 00:55:50.905519 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Apr 30 00:55:50.917543 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Apr 30 00:55:50.918277 tar[1467]: linux-arm64/README.md
Apr 30 00:55:50.929699 systemd[1]: Started getty@tty1.service - Getty on tty1.
Apr 30 00:55:50.937747 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0.
Apr 30 00:55:50.939360 systemd[1]: Reached target getty.target - Login Prompts.
Apr 30 00:55:50.941392 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Apr 30 00:55:51.356471 systemd-networkd[1381]: eth0: Gained IPv6LL
Apr 30 00:55:51.357018 systemd-networkd[1381]: eth1: Gained IPv6LL
Apr 30 00:55:51.357138 systemd-timesyncd[1382]: Network configuration changed, trying to establish connection.
Apr 30 00:55:51.357773 systemd-timesyncd[1382]: Network configuration changed, trying to establish connection.
Apr 30 00:55:51.363339 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Apr 30 00:55:51.364610 systemd[1]: Reached target network-online.target - Network is Online.
Apr 30 00:55:51.369833 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 30 00:55:51.376872 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Apr 30 00:55:51.401858 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Apr 30 00:55:52.122765 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 30 00:55:52.126011 systemd[1]: Reached target multi-user.target - Multi-User System.
Apr 30 00:55:52.128316 (kubelet)[1581]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 30 00:55:52.131352 systemd[1]: Startup finished in 766ms (kernel) + 13.706s (initrd) + 4.765s (userspace) = 19.238s.
Apr 30 00:55:52.623054 kubelet[1581]: E0430 00:55:52.622964    1581 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 30 00:55:52.626843 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 30 00:55:52.627060 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 30 00:56:02.766572 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Apr 30 00:56:02.779600 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 30 00:56:02.887003 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 30 00:56:02.891627 (kubelet)[1600]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 30 00:56:02.934640 kubelet[1600]: E0430 00:56:02.934567    1600 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 30 00:56:02.939848 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 30 00:56:02.940350 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 30 00:56:13.016677 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Apr 30 00:56:13.025603 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 30 00:56:13.142118 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 30 00:56:13.157843 (kubelet)[1615]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 30 00:56:13.208551 kubelet[1615]: E0430 00:56:13.208498    1615 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 30 00:56:13.211395 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 30 00:56:13.211685 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 30 00:56:21.539136 systemd-timesyncd[1382]: Contacted time server 131.234.220.231:123 (2.flatcar.pool.ntp.org).
Apr 30 00:56:21.539236 systemd-timesyncd[1382]: Initial clock synchronization to Wed 2025-04-30 00:56:21.771494 UTC.
Apr 30 00:56:23.267565 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Apr 30 00:56:23.274631 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 30 00:56:23.387306 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 30 00:56:23.400897 (kubelet)[1630]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 30 00:56:23.453689 kubelet[1630]: E0430 00:56:23.453626    1630 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 30 00:56:23.457134 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 30 00:56:23.457587 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 30 00:56:33.516999 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
Apr 30 00:56:33.526614 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 30 00:56:33.647255 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 30 00:56:33.669667 (kubelet)[1645]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 30 00:56:33.722228 kubelet[1645]: E0430 00:56:33.722140    1645 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 30 00:56:33.725058 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 30 00:56:33.725204 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 30 00:56:35.936603 update_engine[1461]: I20250430 00:56:35.936395  1461 update_attempter.cc:509] Updating boot flags...
Apr 30 00:56:35.988252 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 39 scanned by (udev-worker) (1661)
Apr 30 00:56:36.043306 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 39 scanned by (udev-worker) (1663)
Apr 30 00:56:43.766734 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5.
Apr 30 00:56:43.773722 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 30 00:56:43.908556 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 30 00:56:43.913416 (kubelet)[1678]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 30 00:56:43.958532 kubelet[1678]: E0430 00:56:43.958451    1678 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 30 00:56:43.961633 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 30 00:56:43.961904 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 30 00:56:54.016713 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6.
Apr 30 00:56:54.026606 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 30 00:56:54.153502 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 30 00:56:54.170099 (kubelet)[1693]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 30 00:56:54.214850 kubelet[1693]: E0430 00:56:54.214784    1693 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 30 00:56:54.219050 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 30 00:56:54.219235 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 30 00:57:04.266582 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7.
Apr 30 00:57:04.283644 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 30 00:57:04.418806 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 30 00:57:04.429825 (kubelet)[1708]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 30 00:57:04.476151 kubelet[1708]: E0430 00:57:04.476009    1708 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 30 00:57:04.478353 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 30 00:57:04.478551 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 30 00:57:14.516909 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 8.
Apr 30 00:57:14.534761 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 30 00:57:14.651820 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 30 00:57:14.658411 (kubelet)[1723]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 30 00:57:14.700090 kubelet[1723]: E0430 00:57:14.700029    1723 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 30 00:57:14.702653 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 30 00:57:14.702807 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 30 00:57:24.766762 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 9.
Apr 30 00:57:24.777672 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 30 00:57:24.908306 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 30 00:57:24.919980 (kubelet)[1738]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 30 00:57:24.961782 kubelet[1738]: E0430 00:57:24.961732    1738 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 30 00:57:24.964994 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 30 00:57:24.965248 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 30 00:57:27.992051 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Apr 30 00:57:27.999579 systemd[1]: Started sshd@0-88.198.162.73:22-139.178.68.195:49742.service - OpenSSH per-connection server daemon (139.178.68.195:49742).
Apr 30 00:57:28.981361 sshd[1746]: Accepted publickey for core from 139.178.68.195 port 49742 ssh2: RSA SHA256:ACLXUt+7uFWNZVvklpgswHu5AM5+eT4ezI3y1kPpVUY
Apr 30 00:57:28.982690 sshd[1746]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 00:57:28.994523 systemd-logind[1460]: New session 1 of user core.
Apr 30 00:57:28.996067 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Apr 30 00:57:29.003689 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Apr 30 00:57:29.016686 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Apr 30 00:57:29.034896 systemd[1]: Starting user@500.service - User Manager for UID 500...
Apr 30 00:57:29.039448 (systemd)[1750]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Apr 30 00:57:29.145556 systemd[1750]: Queued start job for default target default.target.
Apr 30 00:57:29.161813 systemd[1750]: Created slice app.slice - User Application Slice.
Apr 30 00:57:29.162279 systemd[1750]: Reached target paths.target - Paths.
Apr 30 00:57:29.162305 systemd[1750]: Reached target timers.target - Timers.
Apr 30 00:57:29.164319 systemd[1750]: Starting dbus.socket - D-Bus User Message Bus Socket...
Apr 30 00:57:29.179181 systemd[1750]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Apr 30 00:57:29.179395 systemd[1750]: Reached target sockets.target - Sockets.
Apr 30 00:57:29.179424 systemd[1750]: Reached target basic.target - Basic System.
Apr 30 00:57:29.179489 systemd[1750]: Reached target default.target - Main User Target.
Apr 30 00:57:29.179535 systemd[1750]: Startup finished in 133ms.
Apr 30 00:57:29.179792 systemd[1]: Started user@500.service - User Manager for UID 500.
Apr 30 00:57:29.187572 systemd[1]: Started session-1.scope - Session 1 of User core.
Apr 30 00:57:29.881698 systemd[1]: Started sshd@1-88.198.162.73:22-139.178.68.195:49756.service - OpenSSH per-connection server daemon (139.178.68.195:49756).
Apr 30 00:57:30.851585 sshd[1761]: Accepted publickey for core from 139.178.68.195 port 49756 ssh2: RSA SHA256:ACLXUt+7uFWNZVvklpgswHu5AM5+eT4ezI3y1kPpVUY
Apr 30 00:57:30.854534 sshd[1761]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 00:57:30.861687 systemd-logind[1460]: New session 2 of user core.
Apr 30 00:57:30.867580 systemd[1]: Started session-2.scope - Session 2 of User core.
Apr 30 00:57:31.528490 sshd[1761]: pam_unix(sshd:session): session closed for user core
Apr 30 00:57:31.533908 systemd-logind[1460]: Session 2 logged out. Waiting for processes to exit.
Apr 30 00:57:31.534773 systemd[1]: sshd@1-88.198.162.73:22-139.178.68.195:49756.service: Deactivated successfully.
Apr 30 00:57:31.538868 systemd[1]: session-2.scope: Deactivated successfully.
Apr 30 00:57:31.542659 systemd-logind[1460]: Removed session 2.
Apr 30 00:57:31.703721 systemd[1]: Started sshd@2-88.198.162.73:22-139.178.68.195:49770.service - OpenSSH per-connection server daemon (139.178.68.195:49770).
Apr 30 00:57:32.685866 sshd[1768]: Accepted publickey for core from 139.178.68.195 port 49770 ssh2: RSA SHA256:ACLXUt+7uFWNZVvklpgswHu5AM5+eT4ezI3y1kPpVUY
Apr 30 00:57:32.688210 sshd[1768]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 00:57:32.693801 systemd-logind[1460]: New session 3 of user core.
Apr 30 00:57:32.699609 systemd[1]: Started session-3.scope - Session 3 of User core.
Apr 30 00:57:33.364967 sshd[1768]: pam_unix(sshd:session): session closed for user core
Apr 30 00:57:33.370205 systemd[1]: sshd@2-88.198.162.73:22-139.178.68.195:49770.service: Deactivated successfully.
Apr 30 00:57:33.372002 systemd[1]: session-3.scope: Deactivated successfully.
Apr 30 00:57:33.373411 systemd-logind[1460]: Session 3 logged out. Waiting for processes to exit.
Apr 30 00:57:33.374781 systemd-logind[1460]: Removed session 3.
Apr 30 00:57:33.545759 systemd[1]: Started sshd@3-88.198.162.73:22-139.178.68.195:49774.service - OpenSSH per-connection server daemon (139.178.68.195:49774).
Apr 30 00:57:34.527764 sshd[1775]: Accepted publickey for core from 139.178.68.195 port 49774 ssh2: RSA SHA256:ACLXUt+7uFWNZVvklpgswHu5AM5+eT4ezI3y1kPpVUY
Apr 30 00:57:34.529812 sshd[1775]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 00:57:34.534320 systemd-logind[1460]: New session 4 of user core.
Apr 30 00:57:34.545649 systemd[1]: Started session-4.scope - Session 4 of User core.
Apr 30 00:57:35.016921 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 10.
Apr 30 00:57:35.026648 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 00:57:35.138308 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 00:57:35.143941 (kubelet)[1786]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 30 00:57:35.183498 kubelet[1786]: E0430 00:57:35.183430 1786 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 30 00:57:35.186713 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 30 00:57:35.187050 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 30 00:57:35.212745 sshd[1775]: pam_unix(sshd:session): session closed for user core Apr 30 00:57:35.217204 systemd[1]: sshd@3-88.198.162.73:22-139.178.68.195:49774.service: Deactivated successfully. Apr 30 00:57:35.219086 systemd[1]: session-4.scope: Deactivated successfully. Apr 30 00:57:35.222464 systemd-logind[1460]: Session 4 logged out. Waiting for processes to exit. Apr 30 00:57:35.225083 systemd-logind[1460]: Removed session 4. Apr 30 00:57:35.384941 systemd[1]: Started sshd@4-88.198.162.73:22-139.178.68.195:35288.service - OpenSSH per-connection server daemon (139.178.68.195:35288). Apr 30 00:57:36.377323 sshd[1797]: Accepted publickey for core from 139.178.68.195 port 35288 ssh2: RSA SHA256:ACLXUt+7uFWNZVvklpgswHu5AM5+eT4ezI3y1kPpVUY Apr 30 00:57:36.379187 sshd[1797]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 00:57:36.385844 systemd-logind[1460]: New session 5 of user core. Apr 30 00:57:36.394569 systemd[1]: Started session-5.scope - Session 5 of User core. 
Apr 30 00:57:36.909582 sudo[1800]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Apr 30 00:57:36.909913 sudo[1800]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 30 00:57:36.923509 sudo[1800]: pam_unix(sudo:session): session closed for user root Apr 30 00:57:37.083555 sshd[1797]: pam_unix(sshd:session): session closed for user core Apr 30 00:57:37.089918 systemd-logind[1460]: Session 5 logged out. Waiting for processes to exit. Apr 30 00:57:37.090724 systemd[1]: sshd@4-88.198.162.73:22-139.178.68.195:35288.service: Deactivated successfully. Apr 30 00:57:37.092566 systemd[1]: session-5.scope: Deactivated successfully. Apr 30 00:57:37.093787 systemd-logind[1460]: Removed session 5. Apr 30 00:57:37.266871 systemd[1]: Started sshd@5-88.198.162.73:22-139.178.68.195:35290.service - OpenSSH per-connection server daemon (139.178.68.195:35290). Apr 30 00:57:38.263120 sshd[1805]: Accepted publickey for core from 139.178.68.195 port 35290 ssh2: RSA SHA256:ACLXUt+7uFWNZVvklpgswHu5AM5+eT4ezI3y1kPpVUY Apr 30 00:57:38.265758 sshd[1805]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 00:57:38.270361 systemd-logind[1460]: New session 6 of user core. Apr 30 00:57:38.278497 systemd[1]: Started session-6.scope - Session 6 of User core. 
Apr 30 00:57:38.793015 sudo[1809]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Apr 30 00:57:38.793350 sudo[1809]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 30 00:57:38.797579 sudo[1809]: pam_unix(sudo:session): session closed for user root Apr 30 00:57:38.802907 sudo[1808]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Apr 30 00:57:38.803366 sudo[1808]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 30 00:57:38.818907 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Apr 30 00:57:38.822471 auditctl[1812]: No rules Apr 30 00:57:38.822819 systemd[1]: audit-rules.service: Deactivated successfully. Apr 30 00:57:38.822985 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Apr 30 00:57:38.829978 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Apr 30 00:57:38.856299 augenrules[1830]: No rules Apr 30 00:57:38.857534 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Apr 30 00:57:38.858694 sudo[1808]: pam_unix(sudo:session): session closed for user root Apr 30 00:57:39.021097 sshd[1805]: pam_unix(sshd:session): session closed for user core Apr 30 00:57:39.026482 systemd-logind[1460]: Session 6 logged out. Waiting for processes to exit. Apr 30 00:57:39.027614 systemd[1]: sshd@5-88.198.162.73:22-139.178.68.195:35290.service: Deactivated successfully. Apr 30 00:57:39.030027 systemd[1]: session-6.scope: Deactivated successfully. Apr 30 00:57:39.031223 systemd-logind[1460]: Removed session 6. Apr 30 00:57:39.194667 systemd[1]: Started sshd@6-88.198.162.73:22-139.178.68.195:35292.service - OpenSSH per-connection server daemon (139.178.68.195:35292). 
Apr 30 00:57:40.179800 sshd[1838]: Accepted publickey for core from 139.178.68.195 port 35292 ssh2: RSA SHA256:ACLXUt+7uFWNZVvklpgswHu5AM5+eT4ezI3y1kPpVUY Apr 30 00:57:40.182142 sshd[1838]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 00:57:40.188506 systemd-logind[1460]: New session 7 of user core. Apr 30 00:57:40.202614 systemd[1]: Started session-7.scope - Session 7 of User core. Apr 30 00:57:40.703014 sudo[1841]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Apr 30 00:57:40.703328 sudo[1841]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 30 00:57:41.002760 systemd[1]: Starting docker.service - Docker Application Container Engine... Apr 30 00:57:41.003075 (dockerd)[1856]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Apr 30 00:57:41.249616 dockerd[1856]: time="2025-04-30T00:57:41.249540762Z" level=info msg="Starting up" Apr 30 00:57:41.335191 dockerd[1856]: time="2025-04-30T00:57:41.335146750Z" level=info msg="Loading containers: start." Apr 30 00:57:41.429450 kernel: Initializing XFRM netlink socket Apr 30 00:57:41.508375 systemd-networkd[1381]: docker0: Link UP Apr 30 00:57:41.528215 dockerd[1856]: time="2025-04-30T00:57:41.528145337Z" level=info msg="Loading containers: done." 
Apr 30 00:57:41.547606 dockerd[1856]: time="2025-04-30T00:57:41.547329930Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Apr 30 00:57:41.547606 dockerd[1856]: time="2025-04-30T00:57:41.547501264Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Apr 30 00:57:41.547780 dockerd[1856]: time="2025-04-30T00:57:41.547638795Z" level=info msg="Daemon has completed initialization" Apr 30 00:57:41.583194 dockerd[1856]: time="2025-04-30T00:57:41.583019893Z" level=info msg="API listen on /run/docker.sock" Apr 30 00:57:41.583658 systemd[1]: Started docker.service - Docker Application Container Engine. Apr 30 00:57:42.643481 containerd[1485]: time="2025-04-30T00:57:42.643164319Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.4\"" Apr 30 00:57:43.330248 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3812437386.mount: Deactivated successfully. Apr 30 00:57:45.266401 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 11. Apr 30 00:57:45.271859 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 00:57:45.414750 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Apr 30 00:57:45.420773 (kubelet)[2056]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 30 00:57:45.473852 kubelet[2056]: E0430 00:57:45.473389 2056 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 30 00:57:45.476806 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 30 00:57:45.476946 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 30 00:57:45.719193 containerd[1485]: time="2025-04-30T00:57:45.718935599Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 00:57:45.720871 containerd[1485]: time="2025-04-30T00:57:45.720788503Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.4: active requests=0, bytes read=26233210"
Apr 30 00:57:45.721916 containerd[1485]: time="2025-04-30T00:57:45.721835785Z" level=info msg="ImageCreate event name:\"sha256:ab579d62aa850c7d0eca948aad11fcf813743e3b6c9742241c32cb4f1638968b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 00:57:45.726140 containerd[1485]: time="2025-04-30T00:57:45.726072714Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:631c6cc78b2862be4fed7df3384a643ef7297eebadae22e8ef9cbe2e19b6386f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 00:57:45.727693 containerd[1485]: time="2025-04-30T00:57:45.727373096Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.4\" with image id \"sha256:ab579d62aa850c7d0eca948aad11fcf813743e3b6c9742241c32cb4f1638968b\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.4\", repo digest \"registry.k8s.io/kube-apiserver@sha256:631c6cc78b2862be4fed7df3384a643ef7297eebadae22e8ef9cbe2e19b6386f\", size \"26229918\" in 3.084165774s"
Apr 30 00:57:45.727693 containerd[1485]: time="2025-04-30T00:57:45.727422299Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.4\" returns image reference \"sha256:ab579d62aa850c7d0eca948aad11fcf813743e3b6c9742241c32cb4f1638968b\""
Apr 30 00:57:45.728519 containerd[1485]: time="2025-04-30T00:57:45.728229722Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.4\""
Apr 30 00:57:47.810323 containerd[1485]: time="2025-04-30T00:57:47.810109445Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 00:57:47.812074 containerd[1485]: time="2025-04-30T00:57:47.811444266Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.4: active requests=0, bytes read=22529591"
Apr 30 00:57:47.813298 containerd[1485]: time="2025-04-30T00:57:47.813247083Z" level=info msg="ImageCreate event name:\"sha256:79534fade29d07745acc698bbf598b0604a9ea1fd7917822c816a74fc0b55965\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 00:57:47.817337 containerd[1485]: time="2025-04-30T00:57:47.817294149Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:25e29187ea66f0ff9b9a00114849c3a30b649005c900a8b2a69e3f3fa56448fb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 00:57:47.819792 containerd[1485]: time="2025-04-30T00:57:47.819723012Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.4\" with image id \"sha256:79534fade29d07745acc698bbf598b0604a9ea1fd7917822c816a74fc0b55965\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.4\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:25e29187ea66f0ff9b9a00114849c3a30b649005c900a8b2a69e3f3fa56448fb\", size \"23971132\" in 2.091388122s"
Apr 30 00:57:47.819903 containerd[1485]: time="2025-04-30T00:57:47.819793778Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.4\" returns image reference \"sha256:79534fade29d07745acc698bbf598b0604a9ea1fd7917822c816a74fc0b55965\""
Apr 30 00:57:47.820554 containerd[1485]: time="2025-04-30T00:57:47.820494551Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.4\""
Apr 30 00:57:49.570390 containerd[1485]: time="2025-04-30T00:57:49.570330662Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 00:57:49.573106 containerd[1485]: time="2025-04-30T00:57:49.573067543Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.4: active requests=0, bytes read=17482193"
Apr 30 00:57:49.574477 containerd[1485]: time="2025-04-30T00:57:49.574415723Z" level=info msg="ImageCreate event name:\"sha256:730fbc2590716b8202fcdd928a813b847575ebf03911a059979257cd6cbb8245\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 00:57:49.578851 containerd[1485]: time="2025-04-30T00:57:49.578791965Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:09c55f8dac59a4b8e5e354140f5a4bdd6fa9bd95c42d6bcba6782ed37c31b5a2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 00:57:49.580093 containerd[1485]: time="2025-04-30T00:57:49.580056059Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.4\" with image id \"sha256:730fbc2590716b8202fcdd928a813b847575ebf03911a059979257cd6cbb8245\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.4\", repo digest \"registry.k8s.io/kube-scheduler@sha256:09c55f8dac59a4b8e5e354140f5a4bdd6fa9bd95c42d6bcba6782ed37c31b5a2\", size \"18923752\" in 1.759502983s"
Apr 30 00:57:49.580218 containerd[1485]: time="2025-04-30T00:57:49.580201789Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.4\" returns image reference \"sha256:730fbc2590716b8202fcdd928a813b847575ebf03911a059979257cd6cbb8245\""
Apr 30 00:57:49.580752 containerd[1485]: time="2025-04-30T00:57:49.580719987Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.4\""
Apr 30 00:57:50.571066 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1490622118.mount: Deactivated successfully.
Apr 30 00:57:50.916247 containerd[1485]: time="2025-04-30T00:57:50.916086291Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 00:57:50.917450 containerd[1485]: time="2025-04-30T00:57:50.917382185Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.4: active requests=0, bytes read=27370377"
Apr 30 00:57:50.918842 containerd[1485]: time="2025-04-30T00:57:50.918767366Z" level=info msg="ImageCreate event name:\"sha256:62c496efa595c8eb7d098e43430b2b94ad66812214759a7ea9daaaa1ed901fc7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 00:57:50.922368 containerd[1485]: time="2025-04-30T00:57:50.922238099Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:152638222ecf265eb8e5352e3c50e8fc520994e8ffcff1ee1490c975f7fc2b36\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 00:57:50.923379 containerd[1485]: time="2025-04-30T00:57:50.922877626Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.4\" with image id \"sha256:62c496efa595c8eb7d098e43430b2b94ad66812214759a7ea9daaaa1ed901fc7\", repo tag \"registry.k8s.io/kube-proxy:v1.32.4\", repo digest \"registry.k8s.io/kube-proxy@sha256:152638222ecf265eb8e5352e3c50e8fc520994e8ffcff1ee1490c975f7fc2b36\", size \"27369370\" in 1.342012908s"
Apr 30 00:57:50.923379 containerd[1485]: time="2025-04-30T00:57:50.922916428Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.4\" returns image reference \"sha256:62c496efa595c8eb7d098e43430b2b94ad66812214759a7ea9daaaa1ed901fc7\""
Apr 30 00:57:50.923581 containerd[1485]: time="2025-04-30T00:57:50.923466628Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\""
Apr 30 00:57:51.575391 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3146591937.mount: Deactivated successfully.
Apr 30 00:57:53.342390 containerd[1485]: time="2025-04-30T00:57:53.342303139Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 00:57:53.343733 containerd[1485]: time="2025-04-30T00:57:53.343654555Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951714"
Apr 30 00:57:53.344817 containerd[1485]: time="2025-04-30T00:57:53.344662786Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 00:57:53.350001 containerd[1485]: time="2025-04-30T00:57:53.349938798Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 00:57:53.353471 containerd[1485]: time="2025-04-30T00:57:53.353216189Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 2.429709758s"
Apr 30 00:57:53.353471 containerd[1485]: time="2025-04-30T00:57:53.353307556Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\""
Apr 30 00:57:53.354306 containerd[1485]: time="2025-04-30T00:57:53.354210140Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Apr 30 00:57:53.850481 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3989908261.mount: Deactivated successfully.
Apr 30 00:57:53.857048 containerd[1485]: time="2025-04-30T00:57:53.856892544Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 00:57:53.858654 containerd[1485]: time="2025-04-30T00:57:53.858574903Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268723"
Apr 30 00:57:53.859651 containerd[1485]: time="2025-04-30T00:57:53.859564293Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 00:57:53.862027 containerd[1485]: time="2025-04-30T00:57:53.861959662Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 00:57:53.863959 containerd[1485]: time="2025-04-30T00:57:53.863153706Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 508.838839ms"
Apr 30 00:57:53.863959 containerd[1485]: time="2025-04-30T00:57:53.863203389Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\""
Apr 30 00:57:53.863959 containerd[1485]: time="2025-04-30T00:57:53.863695984Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\""
Apr 30 00:57:54.558459 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3848136053.mount: Deactivated successfully.
Apr 30 00:57:55.516874 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 12.
Apr 30 00:57:55.527665 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 30 00:57:55.661485 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 30 00:57:55.672863 (kubelet)[2190]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 30 00:57:55.716787 kubelet[2190]: E0430 00:57:55.716729 2190 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 30 00:57:55.720140 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 30 00:57:55.720462 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 30 00:57:58.332792 containerd[1485]: time="2025-04-30T00:57:58.332691105Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 00:57:58.335304 containerd[1485]: time="2025-04-30T00:57:58.334846411Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=67812537"
Apr 30 00:57:58.336488 containerd[1485]: time="2025-04-30T00:57:58.336441809Z" level=info msg="ImageCreate event name:\"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 00:57:58.340911 containerd[1485]: time="2025-04-30T00:57:58.340832703Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 00:57:58.342640 containerd[1485]: time="2025-04-30T00:57:58.342528986Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"67941650\" in 4.478801279s"
Apr 30 00:57:58.342640 containerd[1485]: time="2025-04-30T00:57:58.342571908Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\""
Apr 30 00:58:03.612603 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 30 00:58:03.622819 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 30 00:58:03.667880 systemd[1]: Reloading requested from client PID 2232 ('systemctl') (unit session-7.scope)...
Apr 30 00:58:03.668032 systemd[1]: Reloading...
Apr 30 00:58:03.780634 zram_generator::config[2270]: No configuration found.
Apr 30 00:58:03.888451 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 30 00:58:03.961204 systemd[1]: Reloading finished in 292 ms.
Apr 30 00:58:04.020333 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Apr 30 00:58:04.020503 systemd[1]: kubelet.service: Failed with result 'signal'.
Apr 30 00:58:04.020977 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 30 00:58:04.029694 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 30 00:58:04.166092 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 30 00:58:04.175764 (kubelet)[2321]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Apr 30 00:58:04.224027 kubelet[2321]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Apr 30 00:58:04.224027 kubelet[2321]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Apr 30 00:58:04.224027 kubelet[2321]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Apr 30 00:58:04.224416 kubelet[2321]: I0430 00:58:04.224081 2321 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Apr 30 00:58:05.024314 kubelet[2321]: I0430 00:58:05.023628 2321 server.go:520] "Kubelet version" kubeletVersion="v1.32.0"
Apr 30 00:58:05.024314 kubelet[2321]: I0430 00:58:05.023680 2321 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Apr 30 00:58:05.024314 kubelet[2321]: I0430 00:58:05.024140 2321 server.go:954] "Client rotation is on, will bootstrap in background"
Apr 30 00:58:05.049740 kubelet[2321]: E0430 00:58:05.049691 2321 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://88.198.162.73:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 88.198.162.73:6443: connect: connection refused" logger="UnhandledError"
Apr 30 00:58:05.051751 kubelet[2321]: I0430 00:58:05.051216 2321 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Apr 30 00:58:05.066398 kubelet[2321]: E0430 00:58:05.065583 2321 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Apr 30 00:58:05.066398 kubelet[2321]: I0430 00:58:05.065638 2321 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Apr 30 00:58:05.069153 kubelet[2321]: I0430 00:58:05.069121 2321 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Apr 30 00:58:05.069648 kubelet[2321]: I0430 00:58:05.069608 2321 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Apr 30 00:58:05.070281 kubelet[2321]: I0430 00:58:05.069759 2321 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-3-a-adb74c37b4","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Apr 30 00:58:05.070281 kubelet[2321]: I0430 00:58:05.070052 2321 topology_manager.go:138] "Creating topology manager with none policy"
Apr 30 00:58:05.070281 kubelet[2321]: I0430 00:58:05.070065 2321 container_manager_linux.go:304] "Creating device plugin manager"
Apr 30 00:58:05.070523 kubelet[2321]: I0430 00:58:05.070505 2321 state_mem.go:36] "Initialized new in-memory state store"
Apr 30 00:58:05.075202 kubelet[2321]: I0430 00:58:05.075160 2321 kubelet.go:446] "Attempting to sync node with API server"
Apr 30 00:58:05.075501 kubelet[2321]: I0430 00:58:05.075478 2321 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
Apr 30 00:58:05.075623 kubelet[2321]: I0430 00:58:05.075607 2321 kubelet.go:352] "Adding apiserver pod source"
Apr 30 00:58:05.076151 kubelet[2321]: I0430 00:58:05.075709 2321 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Apr 30 00:58:05.080863 kubelet[2321]: W0430 00:58:05.080806 2321 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://88.198.162.73:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-3-a-adb74c37b4&limit=500&resourceVersion=0": dial tcp 88.198.162.73:6443: connect: connection refused
Apr 30 00:58:05.080968 kubelet[2321]: E0430 00:58:05.080867 2321 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://88.198.162.73:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-3-a-adb74c37b4&limit=500&resourceVersion=0\": dial tcp 88.198.162.73:6443: connect: connection refused" logger="UnhandledError"
Apr 30 00:58:05.081606 kubelet[2321]: W0430 00:58:05.081256 2321 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://88.198.162.73:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 88.198.162.73:6443: connect: connection refused
Apr 30 00:58:05.081702 kubelet[2321]: E0430 00:58:05.081621 2321 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://88.198.162.73:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 88.198.162.73:6443: connect: connection refused" logger="UnhandledError"
Apr 30 00:58:05.081736 kubelet[2321]: I0430 00:58:05.081720 2321 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Apr 30 00:58:05.082360 kubelet[2321]: I0430 00:58:05.082337 2321 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Apr 30 00:58:05.082497 kubelet[2321]: W0430 00:58:05.082478 2321 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Apr 30 00:58:05.083966 kubelet[2321]: I0430 00:58:05.083916 2321 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Apr 30 00:58:05.083966 kubelet[2321]: I0430 00:58:05.083955 2321 server.go:1287] "Started kubelet"
Apr 30 00:58:05.088556 kubelet[2321]: E0430 00:58:05.088160 2321 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://88.198.162.73:6443/api/v1/namespaces/default/events\": dial tcp 88.198.162.73:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081-3-3-a-adb74c37b4.183af2b9bb409e6d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-3-3-a-adb74c37b4,UID:ci-4081-3-3-a-adb74c37b4,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081-3-3-a-adb74c37b4,},FirstTimestamp:2025-04-30 00:58:05.083934317 +0000 UTC m=+0.902926003,LastTimestamp:2025-04-30 00:58:05.083934317 +0000 UTC m=+0.902926003,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-3-a-adb74c37b4,}" Apr 30 00:58:05.089234 kubelet[2321]: I0430 00:58:05.089114 2321 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 30 00:58:05.093109 kubelet[2321]: I0430 00:58:05.090874 2321 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Apr 30 00:58:05.093109 kubelet[2321]: I0430 00:58:05.091846 2321 server.go:490] "Adding debug handlers to kubelet server" Apr 30 00:58:05.093726 kubelet[2321]: I0430 00:58:05.093663 2321 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 30 00:58:05.094009 kubelet[2321]: I0430 00:58:05.093994 2321 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 30 00:58:05.094344 kubelet[2321]: I0430 00:58:05.094326 2321 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Apr 30 00:58:05.094829 kubelet[2321]: I0430 00:58:05.094798 2321 volume_manager.go:297] "Starting Kubelet Volume Manager" Apr 30 00:58:05.095110 kubelet[2321]: E0430 00:58:05.095083 2321 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4081-3-3-a-adb74c37b4\" not found" Apr 30 00:58:05.096236 kubelet[2321]: I0430 00:58:05.096208 2321 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Apr 30 00:58:05.096408 kubelet[2321]: I0430 00:58:05.096384 2321 reconciler.go:26] "Reconciler: start to sync state" Apr 30 00:58:05.096522 kubelet[2321]: E0430 00:58:05.096495 2321 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://88.198.162.73:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-3-a-adb74c37b4?timeout=10s\": dial tcp 88.198.162.73:6443: connect: connection refused" interval="200ms" Apr 30 00:58:05.098013 
kubelet[2321]: W0430 00:58:05.097970 2321 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://88.198.162.73:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 88.198.162.73:6443: connect: connection refused Apr 30 00:58:05.098099 kubelet[2321]: E0430 00:58:05.098024 2321 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://88.198.162.73:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 88.198.162.73:6443: connect: connection refused" logger="UnhandledError" Apr 30 00:58:05.098644 kubelet[2321]: E0430 00:58:05.098624 2321 kubelet.go:1561] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 30 00:58:05.099515 kubelet[2321]: I0430 00:58:05.099492 2321 factory.go:221] Registration of the containerd container factory successfully Apr 30 00:58:05.099627 kubelet[2321]: I0430 00:58:05.099615 2321 factory.go:221] Registration of the systemd container factory successfully Apr 30 00:58:05.099779 kubelet[2321]: I0430 00:58:05.099761 2321 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 30 00:58:05.106428 kubelet[2321]: I0430 00:58:05.106365 2321 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Apr 30 00:58:05.109613 kubelet[2321]: I0430 00:58:05.109487 2321 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Apr 30 00:58:05.112688 kubelet[2321]: I0430 00:58:05.112651 2321 status_manager.go:227] "Starting to sync pod status with apiserver" Apr 30 00:58:05.112688 kubelet[2321]: I0430 00:58:05.112688 2321 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Apr 30 00:58:05.112795 kubelet[2321]: I0430 00:58:05.112700 2321 kubelet.go:2388] "Starting kubelet main sync loop" Apr 30 00:58:05.112795 kubelet[2321]: E0430 00:58:05.112766 2321 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 30 00:58:05.118803 kubelet[2321]: W0430 00:58:05.118745 2321 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://88.198.162.73:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 88.198.162.73:6443: connect: connection refused Apr 30 00:58:05.118927 kubelet[2321]: E0430 00:58:05.118815 2321 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://88.198.162.73:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 88.198.162.73:6443: connect: connection refused" logger="UnhandledError" Apr 30 00:58:05.124760 kubelet[2321]: I0430 00:58:05.124728 2321 cpu_manager.go:221] "Starting CPU manager" policy="none" Apr 30 00:58:05.124760 kubelet[2321]: I0430 00:58:05.124749 2321 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Apr 30 00:58:05.124760 kubelet[2321]: I0430 00:58:05.124768 2321 state_mem.go:36] "Initialized new in-memory state store" Apr 30 00:58:05.127530 kubelet[2321]: I0430 00:58:05.127504 2321 policy_none.go:49] "None policy: Start" Apr 30 00:58:05.127658 kubelet[2321]: I0430 00:58:05.127647 2321 memory_manager.go:186] "Starting memorymanager" policy="None" Apr 30 
00:58:05.127718 kubelet[2321]: I0430 00:58:05.127710 2321 state_mem.go:35] "Initializing new in-memory state store" Apr 30 00:58:05.135607 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Apr 30 00:58:05.150523 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Apr 30 00:58:05.154423 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Apr 30 00:58:05.163391 kubelet[2321]: I0430 00:58:05.163228 2321 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Apr 30 00:58:05.163875 kubelet[2321]: I0430 00:58:05.163664 2321 eviction_manager.go:189] "Eviction manager: starting control loop" Apr 30 00:58:05.163875 kubelet[2321]: I0430 00:58:05.163705 2321 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 30 00:58:05.164934 kubelet[2321]: I0430 00:58:05.164904 2321 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 30 00:58:05.167783 kubelet[2321]: E0430 00:58:05.167659 2321 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Apr 30 00:58:05.167783 kubelet[2321]: E0430 00:58:05.167753 2321 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081-3-3-a-adb74c37b4\" not found" Apr 30 00:58:05.225345 systemd[1]: Created slice kubepods-burstable-podcf33793bd747861f40890132d904c5b4.slice - libcontainer container kubepods-burstable-podcf33793bd747861f40890132d904c5b4.slice. 
Apr 30 00:58:05.234931 kubelet[2321]: E0430 00:58:05.234885 2321 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-3-a-adb74c37b4\" not found" node="ci-4081-3-3-a-adb74c37b4" Apr 30 00:58:05.240902 systemd[1]: Created slice kubepods-burstable-pod2f2a55000c233fd03f3cafd02de6c1e1.slice - libcontainer container kubepods-burstable-pod2f2a55000c233fd03f3cafd02de6c1e1.slice. Apr 30 00:58:05.243617 kubelet[2321]: E0430 00:58:05.243468 2321 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-3-a-adb74c37b4\" not found" node="ci-4081-3-3-a-adb74c37b4" Apr 30 00:58:05.246754 systemd[1]: Created slice kubepods-burstable-poda30b0e1884430d7bb71da573da9166d9.slice - libcontainer container kubepods-burstable-poda30b0e1884430d7bb71da573da9166d9.slice. Apr 30 00:58:05.248548 kubelet[2321]: E0430 00:58:05.248478 2321 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-3-a-adb74c37b4\" not found" node="ci-4081-3-3-a-adb74c37b4" Apr 30 00:58:05.266650 kubelet[2321]: I0430 00:58:05.266603 2321 kubelet_node_status.go:76] "Attempting to register node" node="ci-4081-3-3-a-adb74c37b4" Apr 30 00:58:05.267335 kubelet[2321]: E0430 00:58:05.267235 2321 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://88.198.162.73:6443/api/v1/nodes\": dial tcp 88.198.162.73:6443: connect: connection refused" node="ci-4081-3-3-a-adb74c37b4" Apr 30 00:58:05.297368 kubelet[2321]: I0430 00:58:05.297178 2321 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2f2a55000c233fd03f3cafd02de6c1e1-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-3-a-adb74c37b4\" (UID: \"2f2a55000c233fd03f3cafd02de6c1e1\") " pod="kube-system/kube-controller-manager-ci-4081-3-3-a-adb74c37b4" Apr 
30 00:58:05.297368 kubelet[2321]: I0430 00:58:05.297250 2321 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/cf33793bd747861f40890132d904c5b4-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-3-a-adb74c37b4\" (UID: \"cf33793bd747861f40890132d904c5b4\") " pod="kube-system/kube-apiserver-ci-4081-3-3-a-adb74c37b4" Apr 30 00:58:05.297368 kubelet[2321]: I0430 00:58:05.297348 2321 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2f2a55000c233fd03f3cafd02de6c1e1-ca-certs\") pod \"kube-controller-manager-ci-4081-3-3-a-adb74c37b4\" (UID: \"2f2a55000c233fd03f3cafd02de6c1e1\") " pod="kube-system/kube-controller-manager-ci-4081-3-3-a-adb74c37b4" Apr 30 00:58:05.297631 kubelet[2321]: I0430 00:58:05.297409 2321 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/2f2a55000c233fd03f3cafd02de6c1e1-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-3-a-adb74c37b4\" (UID: \"2f2a55000c233fd03f3cafd02de6c1e1\") " pod="kube-system/kube-controller-manager-ci-4081-3-3-a-adb74c37b4" Apr 30 00:58:05.297631 kubelet[2321]: I0430 00:58:05.297528 2321 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2f2a55000c233fd03f3cafd02de6c1e1-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-3-a-adb74c37b4\" (UID: \"2f2a55000c233fd03f3cafd02de6c1e1\") " pod="kube-system/kube-controller-manager-ci-4081-3-3-a-adb74c37b4" Apr 30 00:58:05.297631 kubelet[2321]: I0430 00:58:05.297593 2321 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/2f2a55000c233fd03f3cafd02de6c1e1-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-3-a-adb74c37b4\" (UID: \"2f2a55000c233fd03f3cafd02de6c1e1\") " pod="kube-system/kube-controller-manager-ci-4081-3-3-a-adb74c37b4" Apr 30 00:58:05.297767 kubelet[2321]: I0430 00:58:05.297635 2321 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a30b0e1884430d7bb71da573da9166d9-kubeconfig\") pod \"kube-scheduler-ci-4081-3-3-a-adb74c37b4\" (UID: \"a30b0e1884430d7bb71da573da9166d9\") " pod="kube-system/kube-scheduler-ci-4081-3-3-a-adb74c37b4" Apr 30 00:58:05.297767 kubelet[2321]: I0430 00:58:05.297675 2321 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/cf33793bd747861f40890132d904c5b4-ca-certs\") pod \"kube-apiserver-ci-4081-3-3-a-adb74c37b4\" (UID: \"cf33793bd747861f40890132d904c5b4\") " pod="kube-system/kube-apiserver-ci-4081-3-3-a-adb74c37b4" Apr 30 00:58:05.297767 kubelet[2321]: I0430 00:58:05.297714 2321 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/cf33793bd747861f40890132d904c5b4-k8s-certs\") pod \"kube-apiserver-ci-4081-3-3-a-adb74c37b4\" (UID: \"cf33793bd747861f40890132d904c5b4\") " pod="kube-system/kube-apiserver-ci-4081-3-3-a-adb74c37b4" Apr 30 00:58:05.298016 kubelet[2321]: E0430 00:58:05.297933 2321 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://88.198.162.73:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-3-a-adb74c37b4?timeout=10s\": dial tcp 88.198.162.73:6443: connect: connection refused" interval="400ms" Apr 30 00:58:05.470155 kubelet[2321]: I0430 00:58:05.470098 2321 kubelet_node_status.go:76] "Attempting to register node" node="ci-4081-3-3-a-adb74c37b4" Apr 30 
00:58:05.470683 kubelet[2321]: E0430 00:58:05.470615 2321 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://88.198.162.73:6443/api/v1/nodes\": dial tcp 88.198.162.73:6443: connect: connection refused" node="ci-4081-3-3-a-adb74c37b4" Apr 30 00:58:05.538339 containerd[1485]: time="2025-04-30T00:58:05.538111556Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-3-a-adb74c37b4,Uid:cf33793bd747861f40890132d904c5b4,Namespace:kube-system,Attempt:0,}" Apr 30 00:58:05.545630 containerd[1485]: time="2025-04-30T00:58:05.545483061Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-3-a-adb74c37b4,Uid:2f2a55000c233fd03f3cafd02de6c1e1,Namespace:kube-system,Attempt:0,}" Apr 30 00:58:05.550691 containerd[1485]: time="2025-04-30T00:58:05.550491524Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-3-a-adb74c37b4,Uid:a30b0e1884430d7bb71da573da9166d9,Namespace:kube-system,Attempt:0,}" Apr 30 00:58:05.699150 kubelet[2321]: E0430 00:58:05.699087 2321 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://88.198.162.73:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-3-a-adb74c37b4?timeout=10s\": dial tcp 88.198.162.73:6443: connect: connection refused" interval="800ms" Apr 30 00:58:05.873905 kubelet[2321]: I0430 00:58:05.873691 2321 kubelet_node_status.go:76] "Attempting to register node" node="ci-4081-3-3-a-adb74c37b4" Apr 30 00:58:05.874242 kubelet[2321]: E0430 00:58:05.874124 2321 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://88.198.162.73:6443/api/v1/nodes\": dial tcp 88.198.162.73:6443: connect: connection refused" node="ci-4081-3-3-a-adb74c37b4" Apr 30 00:58:05.912071 kubelet[2321]: W0430 00:58:05.911940 2321 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get 
"https://88.198.162.73:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 88.198.162.73:6443: connect: connection refused Apr 30 00:58:05.912294 kubelet[2321]: E0430 00:58:05.912089 2321 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://88.198.162.73:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 88.198.162.73:6443: connect: connection refused" logger="UnhandledError" Apr 30 00:58:05.987220 kubelet[2321]: W0430 00:58:05.987137 2321 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://88.198.162.73:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-3-a-adb74c37b4&limit=500&resourceVersion=0": dial tcp 88.198.162.73:6443: connect: connection refused Apr 30 00:58:05.987220 kubelet[2321]: E0430 00:58:05.987212 2321 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://88.198.162.73:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-3-a-adb74c37b4&limit=500&resourceVersion=0\": dial tcp 88.198.162.73:6443: connect: connection refused" logger="UnhandledError" Apr 30 00:58:06.115026 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3510993860.mount: Deactivated successfully. 
Apr 30 00:58:06.122653 containerd[1485]: time="2025-04-30T00:58:06.122517174Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 30 00:58:06.126369 containerd[1485]: time="2025-04-30T00:58:06.126238478Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269193" Apr 30 00:58:06.127424 containerd[1485]: time="2025-04-30T00:58:06.127332671Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 30 00:58:06.128860 containerd[1485]: time="2025-04-30T00:58:06.128750438Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 30 00:58:06.129804 containerd[1485]: time="2025-04-30T00:58:06.129705323Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 30 00:58:06.130739 containerd[1485]: time="2025-04-30T00:58:06.130599452Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 30 00:58:06.130739 containerd[1485]: time="2025-04-30T00:58:06.130699724Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 30 00:58:06.135041 containerd[1485]: time="2025-04-30T00:58:06.134982263Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 30 00:58:06.137113 
containerd[1485]: time="2025-04-30T00:58:06.136664130Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 591.083237ms" Apr 30 00:58:06.137771 containerd[1485]: time="2025-04-30T00:58:06.137733325Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 599.507419ms" Apr 30 00:58:06.138911 containerd[1485]: time="2025-04-30T00:58:06.138875994Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 588.304197ms" Apr 30 00:58:06.268628 kubelet[2321]: W0430 00:58:06.268537 2321 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://88.198.162.73:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 88.198.162.73:6443: connect: connection refused Apr 30 00:58:06.268628 kubelet[2321]: E0430 00:58:06.268592 2321 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://88.198.162.73:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 88.198.162.73:6443: connect: connection refused" logger="UnhandledError" Apr 30 00:58:06.282024 containerd[1485]: 
time="2025-04-30T00:58:06.281539703Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 00:58:06.282024 containerd[1485]: time="2025-04-30T00:58:06.281691131Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 00:58:06.282024 containerd[1485]: time="2025-04-30T00:58:06.281747646Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:58:06.283245 containerd[1485]: time="2025-04-30T00:58:06.282695971Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:58:06.287066 containerd[1485]: time="2025-04-30T00:58:06.286790286Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 00:58:06.287066 containerd[1485]: time="2025-04-30T00:58:06.286845881Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 00:58:06.287066 containerd[1485]: time="2025-04-30T00:58:06.286862240Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:58:06.287066 containerd[1485]: time="2025-04-30T00:58:06.286931194Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:58:06.287066 containerd[1485]: time="2025-04-30T00:58:06.286794485Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 00:58:06.287066 containerd[1485]: time="2025-04-30T00:58:06.286844761Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 00:58:06.287066 containerd[1485]: time="2025-04-30T00:58:06.286860160Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:58:06.287066 containerd[1485]: time="2025-04-30T00:58:06.286931394Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:58:06.315129 kubelet[2321]: W0430 00:58:06.314250 2321 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://88.198.162.73:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 88.198.162.73:6443: connect: connection refused Apr 30 00:58:06.315129 kubelet[2321]: E0430 00:58:06.314357 2321 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://88.198.162.73:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 88.198.162.73:6443: connect: connection refused" logger="UnhandledError" Apr 30 00:58:06.315491 systemd[1]: Started cri-containerd-168c98dd6a139fce82a0db37352a15be0236dd35433a60cc0f5f9142bf252100.scope - libcontainer container 168c98dd6a139fce82a0db37352a15be0236dd35433a60cc0f5f9142bf252100. Apr 30 00:58:06.317933 systemd[1]: Started cri-containerd-28440223f7580d3c51e94576c994ce8ffbd08e58af467e36a51e9f5209370f51.scope - libcontainer container 28440223f7580d3c51e94576c994ce8ffbd08e58af467e36a51e9f5209370f51. Apr 30 00:58:06.323473 systemd[1]: Started cri-containerd-692622938338c6a7d20af7aff246f26eb858a4456533a17d5fa280875d8d61b1.scope - libcontainer container 692622938338c6a7d20af7aff246f26eb858a4456533a17d5fa280875d8d61b1. 
Apr 30 00:58:06.379032 containerd[1485]: time="2025-04-30T00:58:06.377864292Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-3-a-adb74c37b4,Uid:cf33793bd747861f40890132d904c5b4,Namespace:kube-system,Attempt:0,} returns sandbox id \"168c98dd6a139fce82a0db37352a15be0236dd35433a60cc0f5f9142bf252100\""
Apr 30 00:58:06.379032 containerd[1485]: time="2025-04-30T00:58:06.378149309Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-3-a-adb74c37b4,Uid:2f2a55000c233fd03f3cafd02de6c1e1,Namespace:kube-system,Attempt:0,} returns sandbox id \"28440223f7580d3c51e94576c994ce8ffbd08e58af467e36a51e9f5209370f51\""
Apr 30 00:58:06.384132 containerd[1485]: time="2025-04-30T00:58:06.384086598Z" level=info msg="CreateContainer within sandbox \"28440223f7580d3c51e94576c994ce8ffbd08e58af467e36a51e9f5209370f51\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Apr 30 00:58:06.385542 containerd[1485]: time="2025-04-30T00:58:06.385478527Z" level=info msg="CreateContainer within sandbox \"168c98dd6a139fce82a0db37352a15be0236dd35433a60cc0f5f9142bf252100\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Apr 30 00:58:06.393081 containerd[1485]: time="2025-04-30T00:58:06.392986851Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-3-a-adb74c37b4,Uid:a30b0e1884430d7bb71da573da9166d9,Namespace:kube-system,Attempt:0,} returns sandbox id \"692622938338c6a7d20af7aff246f26eb858a4456533a17d5fa280875d8d61b1\""
Apr 30 00:58:06.397579 containerd[1485]: time="2025-04-30T00:58:06.397544209Z" level=info msg="CreateContainer within sandbox \"692622938338c6a7d20af7aff246f26eb858a4456533a17d5fa280875d8d61b1\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Apr 30 00:58:06.404928 containerd[1485]: time="2025-04-30T00:58:06.404763315Z" level=info msg="CreateContainer within sandbox \"168c98dd6a139fce82a0db37352a15be0236dd35433a60cc0f5f9142bf252100\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"7ea96e0d226c398ffb4adb00bd368d70fe10db42a862c82a2a5b03d37384ca87\""
Apr 30 00:58:06.405735 containerd[1485]: time="2025-04-30T00:58:06.405680282Z" level=info msg="StartContainer for \"7ea96e0d226c398ffb4adb00bd368d70fe10db42a862c82a2a5b03d37384ca87\""
Apr 30 00:58:06.412438 containerd[1485]: time="2025-04-30T00:58:06.412289557Z" level=info msg="CreateContainer within sandbox \"28440223f7580d3c51e94576c994ce8ffbd08e58af467e36a51e9f5209370f51\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"49f3b30506af8c57299ee59e5b9c73ac37e47e15eddd147b033fabbfb1e0646c\""
Apr 30 00:58:06.412755 containerd[1485]: time="2025-04-30T00:58:06.412728963Z" level=info msg="StartContainer for \"49f3b30506af8c57299ee59e5b9c73ac37e47e15eddd147b033fabbfb1e0646c\""
Apr 30 00:58:06.419609 containerd[1485]: time="2025-04-30T00:58:06.419561660Z" level=info msg="CreateContainer within sandbox \"692622938338c6a7d20af7aff246f26eb858a4456533a17d5fa280875d8d61b1\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"bfdde8e6ca87d0a9eebe4b7028ca15f084fb960d1da40730950919ee09c4f9db\""
Apr 30 00:58:06.420182 containerd[1485]: time="2025-04-30T00:58:06.420036742Z" level=info msg="StartContainer for \"bfdde8e6ca87d0a9eebe4b7028ca15f084fb960d1da40730950919ee09c4f9db\""
Apr 30 00:58:06.447453 systemd[1]: Started cri-containerd-7ea96e0d226c398ffb4adb00bd368d70fe10db42a862c82a2a5b03d37384ca87.scope - libcontainer container 7ea96e0d226c398ffb4adb00bd368d70fe10db42a862c82a2a5b03d37384ca87.
Apr 30 00:58:06.455542 systemd[1]: Started cri-containerd-49f3b30506af8c57299ee59e5b9c73ac37e47e15eddd147b033fabbfb1e0646c.scope - libcontainer container 49f3b30506af8c57299ee59e5b9c73ac37e47e15eddd147b033fabbfb1e0646c.
Apr 30 00:58:06.463618 systemd[1]: Started cri-containerd-bfdde8e6ca87d0a9eebe4b7028ca15f084fb960d1da40730950919ee09c4f9db.scope - libcontainer container bfdde8e6ca87d0a9eebe4b7028ca15f084fb960d1da40730950919ee09c4f9db.
Apr 30 00:58:06.499817 kubelet[2321]: E0430 00:58:06.499770 2321 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://88.198.162.73:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-3-a-adb74c37b4?timeout=10s\": dial tcp 88.198.162.73:6443: connect: connection refused" interval="1.6s"
Apr 30 00:58:06.515292 containerd[1485]: time="2025-04-30T00:58:06.514024917Z" level=info msg="StartContainer for \"49f3b30506af8c57299ee59e5b9c73ac37e47e15eddd147b033fabbfb1e0646c\" returns successfully"
Apr 30 00:58:06.531292 containerd[1485]: time="2025-04-30T00:58:06.530393617Z" level=info msg="StartContainer for \"7ea96e0d226c398ffb4adb00bd368d70fe10db42a862c82a2a5b03d37384ca87\" returns successfully"
Apr 30 00:58:06.534560 containerd[1485]: time="2025-04-30T00:58:06.534024848Z" level=info msg="StartContainer for \"bfdde8e6ca87d0a9eebe4b7028ca15f084fb960d1da40730950919ee09c4f9db\" returns successfully"
Apr 30 00:58:06.676672 kubelet[2321]: I0430 00:58:06.676568 2321 kubelet_node_status.go:76] "Attempting to register node" node="ci-4081-3-3-a-adb74c37b4"
Apr 30 00:58:07.144720 kubelet[2321]: E0430 00:58:07.144690 2321 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-3-a-adb74c37b4\" not found" node="ci-4081-3-3-a-adb74c37b4"
Apr 30 00:58:07.147561 kubelet[2321]: E0430 00:58:07.147529 2321 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-3-a-adb74c37b4\" not found" node="ci-4081-3-3-a-adb74c37b4"
Apr 30 00:58:07.152338 kubelet[2321]: E0430 00:58:07.152120 2321 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-3-a-adb74c37b4\" not found" node="ci-4081-3-3-a-adb74c37b4"
Apr 30 00:58:08.151886 kubelet[2321]: E0430 00:58:08.151840 2321 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-3-a-adb74c37b4\" not found" node="ci-4081-3-3-a-adb74c37b4"
Apr 30 00:58:08.153403 kubelet[2321]: E0430 00:58:08.152297 2321 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-3-a-adb74c37b4\" not found" node="ci-4081-3-3-a-adb74c37b4"
Apr 30 00:58:08.615918 kubelet[2321]: I0430 00:58:08.615875 2321 kubelet_node_status.go:79] "Successfully registered node" node="ci-4081-3-3-a-adb74c37b4"
Apr 30 00:58:08.615918 kubelet[2321]: E0430 00:58:08.615920 2321 kubelet_node_status.go:549] "Error updating node status, will retry" err="error getting node \"ci-4081-3-3-a-adb74c37b4\": node \"ci-4081-3-3-a-adb74c37b4\" not found"
Apr 30 00:58:08.622373 kubelet[2321]: E0430 00:58:08.622300 2321 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4081-3-3-a-adb74c37b4\" not found"
Apr 30 00:58:08.723043 kubelet[2321]: E0430 00:58:08.722997 2321 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4081-3-3-a-adb74c37b4\" not found"
Apr 30 00:58:08.795100 kubelet[2321]: E0430 00:58:08.795061 2321 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-3-a-adb74c37b4\" not found" node="ci-4081-3-3-a-adb74c37b4"
Apr 30 00:58:08.823464 kubelet[2321]: E0430 00:58:08.823398 2321 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4081-3-3-a-adb74c37b4\" not found"
Apr 30 00:58:08.924525 kubelet[2321]: E0430 00:58:08.924159 2321 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4081-3-3-a-adb74c37b4\" not found"
Apr 30 00:58:09.025365 kubelet[2321]: E0430 00:58:09.025308 2321 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4081-3-3-a-adb74c37b4\" not found"
Apr 30 00:58:09.125560 kubelet[2321]: E0430 00:58:09.125472 2321 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4081-3-3-a-adb74c37b4\" not found"
Apr 30 00:58:09.197169 kubelet[2321]: I0430 00:58:09.196200 2321 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-3-a-adb74c37b4"
Apr 30 00:58:09.210341 kubelet[2321]: E0430 00:58:09.209977 2321 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081-3-3-a-adb74c37b4\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4081-3-3-a-adb74c37b4"
Apr 30 00:58:09.210341 kubelet[2321]: I0430 00:58:09.210010 2321 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081-3-3-a-adb74c37b4"
Apr 30 00:58:09.213774 kubelet[2321]: E0430 00:58:09.213715 2321 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4081-3-3-a-adb74c37b4\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4081-3-3-a-adb74c37b4"
Apr 30 00:58:09.213995 kubelet[2321]: I0430 00:58:09.213858 2321 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081-3-3-a-adb74c37b4"
Apr 30 00:58:09.216382 kubelet[2321]: E0430 00:58:09.216219 2321 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081-3-3-a-adb74c37b4\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4081-3-3-a-adb74c37b4"
Apr 30 00:58:10.084685 kubelet[2321]: I0430 00:58:10.084631 2321 apiserver.go:52] "Watching apiserver"
Apr 30 00:58:10.096392 kubelet[2321]: I0430 00:58:10.096341 2321 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
Apr 30 00:58:10.634988 kubelet[2321]: I0430 00:58:10.634905 2321 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081-3-3-a-adb74c37b4"
Apr 30 00:58:11.129175 systemd[1]: Reloading requested from client PID 2594 ('systemctl') (unit session-7.scope)...
Apr 30 00:58:11.129192 systemd[1]: Reloading...
Apr 30 00:58:11.221719 zram_generator::config[2637]: No configuration found.
Apr 30 00:58:11.326772 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 30 00:58:11.419082 systemd[1]: Reloading finished in 289 ms.
Apr 30 00:58:11.459494 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 30 00:58:11.474664 systemd[1]: kubelet.service: Deactivated successfully.
Apr 30 00:58:11.475132 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 30 00:58:11.475234 systemd[1]: kubelet.service: Consumed 1.332s CPU time, 124.6M memory peak, 0B memory swap peak.
Apr 30 00:58:11.482687 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 30 00:58:11.614408 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 30 00:58:11.627833 (kubelet)[2679]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Apr 30 00:58:11.683327 kubelet[2679]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Apr 30 00:58:11.684307 kubelet[2679]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Apr 30 00:58:11.684307 kubelet[2679]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Apr 30 00:58:11.684307 kubelet[2679]: I0430 00:58:11.684198 2679 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Apr 30 00:58:11.692497 kubelet[2679]: I0430 00:58:11.691792 2679 server.go:520] "Kubelet version" kubeletVersion="v1.32.0"
Apr 30 00:58:11.693149 kubelet[2679]: I0430 00:58:11.692581 2679 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Apr 30 00:58:11.693149 kubelet[2679]: I0430 00:58:11.692875 2679 server.go:954] "Client rotation is on, will bootstrap in background"
Apr 30 00:58:11.695941 kubelet[2679]: I0430 00:58:11.695885 2679 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Apr 30 00:58:11.699776 kubelet[2679]: I0430 00:58:11.699749 2679 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Apr 30 00:58:11.703528 kubelet[2679]: E0430 00:58:11.703492 2679 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Apr 30 00:58:11.703796 kubelet[2679]: I0430 00:58:11.703779 2679 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Apr 30 00:58:11.706397 kubelet[2679]: I0430 00:58:11.706369 2679 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Apr 30 00:58:11.707187 kubelet[2679]: I0430 00:58:11.706780 2679 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Apr 30 00:58:11.707187 kubelet[2679]: I0430 00:58:11.706814 2679 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-3-a-adb74c37b4","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Apr 30 00:58:11.707187 kubelet[2679]: I0430 00:58:11.707085 2679 topology_manager.go:138] "Creating topology manager with none policy"
Apr 30 00:58:11.707187 kubelet[2679]: I0430 00:58:11.707094 2679 container_manager_linux.go:304] "Creating device plugin manager"
Apr 30 00:58:11.707448 kubelet[2679]: I0430 00:58:11.707147 2679 state_mem.go:36] "Initialized new in-memory state store"
Apr 30 00:58:11.707679 kubelet[2679]: I0430 00:58:11.707661 2679 kubelet.go:446] "Attempting to sync node with API server"
Apr 30 00:58:11.707769 kubelet[2679]: I0430 00:58:11.707759 2679 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
Apr 30 00:58:11.707836 kubelet[2679]: I0430 00:58:11.707827 2679 kubelet.go:352] "Adding apiserver pod source"
Apr 30 00:58:11.707892 kubelet[2679]: I0430 00:58:11.707884 2679 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Apr 30 00:58:11.711866 kubelet[2679]: I0430 00:58:11.711833 2679 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Apr 30 00:58:11.713660 kubelet[2679]: I0430 00:58:11.713628 2679 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Apr 30 00:58:11.715000 kubelet[2679]: I0430 00:58:11.714885 2679 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Apr 30 00:58:11.715000 kubelet[2679]: I0430 00:58:11.714979 2679 server.go:1287] "Started kubelet"
Apr 30 00:58:11.719046 kubelet[2679]: I0430 00:58:11.718997 2679 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
Apr 30 00:58:11.723165 kubelet[2679]: I0430 00:58:11.721174 2679 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Apr 30 00:58:11.723165 kubelet[2679]: I0430 00:58:11.721496 2679 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Apr 30 00:58:11.723165 kubelet[2679]: I0430 00:58:11.722789 2679 server.go:490] "Adding debug handlers to kubelet server"
Apr 30 00:58:11.727077 kubelet[2679]: I0430 00:58:11.726079 2679 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Apr 30 00:58:11.733274 kubelet[2679]: I0430 00:58:11.732433 2679 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Apr 30 00:58:11.735774 kubelet[2679]: I0430 00:58:11.735745 2679 volume_manager.go:297] "Starting Kubelet Volume Manager"
Apr 30 00:58:11.736287 kubelet[2679]: E0430 00:58:11.736003 2679 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4081-3-3-a-adb74c37b4\" not found"
Apr 30 00:58:11.744062 kubelet[2679]: I0430 00:58:11.743716 2679 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
Apr 30 00:58:11.744062 kubelet[2679]: I0430 00:58:11.743863 2679 reconciler.go:26] "Reconciler: start to sync state"
Apr 30 00:58:11.751109 kubelet[2679]: I0430 00:58:11.751064 2679 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Apr 30 00:58:11.752440 kubelet[2679]: I0430 00:58:11.752419 2679 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Apr 30 00:58:11.752538 kubelet[2679]: I0430 00:58:11.752529 2679 status_manager.go:227] "Starting to sync pod status with apiserver"
Apr 30 00:58:11.752616 kubelet[2679]: I0430 00:58:11.752606 2679 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Apr 30 00:58:11.752687 kubelet[2679]: I0430 00:58:11.752660 2679 kubelet.go:2388] "Starting kubelet main sync loop"
Apr 30 00:58:11.752785 kubelet[2679]: E0430 00:58:11.752769 2679 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Apr 30 00:58:11.768850 kubelet[2679]: I0430 00:58:11.768806 2679 factory.go:221] Registration of the systemd container factory successfully
Apr 30 00:58:11.769161 kubelet[2679]: I0430 00:58:11.769137 2679 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Apr 30 00:58:11.771200 kubelet[2679]: I0430 00:58:11.771174 2679 factory.go:221] Registration of the containerd container factory successfully
Apr 30 00:58:11.827269 kubelet[2679]: I0430 00:58:11.827229 2679 cpu_manager.go:221] "Starting CPU manager" policy="none"
Apr 30 00:58:11.827404 kubelet[2679]: I0430 00:58:11.827295 2679 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Apr 30 00:58:11.827404 kubelet[2679]: I0430 00:58:11.827333 2679 state_mem.go:36] "Initialized new in-memory state store"
Apr 30 00:58:11.827573 kubelet[2679]: I0430 00:58:11.827553 2679 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Apr 30 00:58:11.827614 kubelet[2679]: I0430 00:58:11.827575 2679 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Apr 30 00:58:11.827614 kubelet[2679]: I0430 00:58:11.827599 2679 policy_none.go:49] "None policy: Start"
Apr 30 00:58:11.827614 kubelet[2679]: I0430 00:58:11.827608 2679 memory_manager.go:186] "Starting memorymanager" policy="None"
Apr 30 00:58:11.827687 kubelet[2679]: I0430 00:58:11.827619 2679 state_mem.go:35] "Initializing new in-memory state store"
Apr 30 00:58:11.827734 kubelet[2679]: I0430 00:58:11.827721 2679 state_mem.go:75] "Updated machine memory state"
Apr 30 00:58:11.832802 kubelet[2679]: I0430 00:58:11.832092 2679 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Apr 30 00:58:11.832802 kubelet[2679]: I0430 00:58:11.832295 2679 eviction_manager.go:189] "Eviction manager: starting control loop"
Apr 30 00:58:11.832802 kubelet[2679]: I0430 00:58:11.832306 2679 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Apr 30 00:58:11.833193 kubelet[2679]: I0430 00:58:11.833063 2679 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Apr 30 00:58:11.835589 kubelet[2679]: E0430 00:58:11.835510 2679 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Apr 30 00:58:11.854110 kubelet[2679]: I0430 00:58:11.853422 2679 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081-3-3-a-adb74c37b4"
Apr 30 00:58:11.854110 kubelet[2679]: I0430 00:58:11.853872 2679 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081-3-3-a-adb74c37b4"
Apr 30 00:58:11.854383 kubelet[2679]: I0430 00:58:11.853425 2679 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-3-a-adb74c37b4"
Apr 30 00:58:11.870097 kubelet[2679]: E0430 00:58:11.870040 2679 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081-3-3-a-adb74c37b4\" already exists" pod="kube-system/kube-scheduler-ci-4081-3-3-a-adb74c37b4"
Apr 30 00:58:11.937880 kubelet[2679]: I0430 00:58:11.937674 2679 kubelet_node_status.go:76] "Attempting to register node" node="ci-4081-3-3-a-adb74c37b4"
Apr 30 00:58:11.944679 kubelet[2679]: I0430 00:58:11.944406 2679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/cf33793bd747861f40890132d904c5b4-ca-certs\") pod \"kube-apiserver-ci-4081-3-3-a-adb74c37b4\" (UID: \"cf33793bd747861f40890132d904c5b4\") " pod="kube-system/kube-apiserver-ci-4081-3-3-a-adb74c37b4"
Apr 30 00:58:11.944679 kubelet[2679]: I0430 00:58:11.944449 2679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/cf33793bd747861f40890132d904c5b4-k8s-certs\") pod \"kube-apiserver-ci-4081-3-3-a-adb74c37b4\" (UID: \"cf33793bd747861f40890132d904c5b4\") " pod="kube-system/kube-apiserver-ci-4081-3-3-a-adb74c37b4"
Apr 30 00:58:11.944679 kubelet[2679]: I0430 00:58:11.944473 2679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2f2a55000c233fd03f3cafd02de6c1e1-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-3-a-adb74c37b4\" (UID: \"2f2a55000c233fd03f3cafd02de6c1e1\") " pod="kube-system/kube-controller-manager-ci-4081-3-3-a-adb74c37b4"
Apr 30 00:58:11.944679 kubelet[2679]: I0430 00:58:11.944492 2679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a30b0e1884430d7bb71da573da9166d9-kubeconfig\") pod \"kube-scheduler-ci-4081-3-3-a-adb74c37b4\" (UID: \"a30b0e1884430d7bb71da573da9166d9\") " pod="kube-system/kube-scheduler-ci-4081-3-3-a-adb74c37b4"
Apr 30 00:58:11.944679 kubelet[2679]: I0430 00:58:11.944511 2679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/cf33793bd747861f40890132d904c5b4-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-3-a-adb74c37b4\" (UID: \"cf33793bd747861f40890132d904c5b4\") " pod="kube-system/kube-apiserver-ci-4081-3-3-a-adb74c37b4"
Apr 30 00:58:11.944978 kubelet[2679]: I0430 00:58:11.944531 2679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2f2a55000c233fd03f3cafd02de6c1e1-ca-certs\") pod \"kube-controller-manager-ci-4081-3-3-a-adb74c37b4\" (UID: \"2f2a55000c233fd03f3cafd02de6c1e1\") " pod="kube-system/kube-controller-manager-ci-4081-3-3-a-adb74c37b4"
Apr 30 00:58:11.944978 kubelet[2679]: I0430 00:58:11.944550 2679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/2f2a55000c233fd03f3cafd02de6c1e1-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-3-a-adb74c37b4\" (UID: \"2f2a55000c233fd03f3cafd02de6c1e1\") " pod="kube-system/kube-controller-manager-ci-4081-3-3-a-adb74c37b4"
Apr 30 00:58:11.944978 kubelet[2679]: I0430 00:58:11.944568 2679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2f2a55000c233fd03f3cafd02de6c1e1-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-3-a-adb74c37b4\" (UID: \"2f2a55000c233fd03f3cafd02de6c1e1\") " pod="kube-system/kube-controller-manager-ci-4081-3-3-a-adb74c37b4"
Apr 30 00:58:11.944978 kubelet[2679]: I0430 00:58:11.944588 2679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2f2a55000c233fd03f3cafd02de6c1e1-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-3-a-adb74c37b4\" (UID: \"2f2a55000c233fd03f3cafd02de6c1e1\") " pod="kube-system/kube-controller-manager-ci-4081-3-3-a-adb74c37b4"
Apr 30 00:58:11.952505 kubelet[2679]: I0430 00:58:11.952402 2679 kubelet_node_status.go:125] "Node was previously registered" node="ci-4081-3-3-a-adb74c37b4"
Apr 30 00:58:11.952505 kubelet[2679]: I0430 00:58:11.952488 2679 kubelet_node_status.go:79] "Successfully registered node" node="ci-4081-3-3-a-adb74c37b4"
Apr 30 00:58:12.129135 sudo[2712]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin
Apr 30 00:58:12.129463 sudo[2712]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0)
Apr 30 00:58:12.640992 sudo[2712]: pam_unix(sudo:session): session closed for user root
Apr 30 00:58:12.710297 kubelet[2679]: I0430 00:58:12.709618 2679 apiserver.go:52] "Watching apiserver"
Apr 30 00:58:12.744437 kubelet[2679]: I0430 00:58:12.744378 2679 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
Apr 30 00:58:12.805521 kubelet[2679]: I0430 00:58:12.805488 2679 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081-3-3-a-adb74c37b4"
Apr 30 00:58:12.814548 kubelet[2679]: E0430 00:58:12.814237 2679 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081-3-3-a-adb74c37b4\" already exists" pod="kube-system/kube-scheduler-ci-4081-3-3-a-adb74c37b4"
Apr 30 00:58:12.834604 kubelet[2679]: I0430 00:58:12.834388 2679 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081-3-3-a-adb74c37b4" podStartSLOduration=2.8343521000000003 podStartE2EDuration="2.8343521s" podCreationTimestamp="2025-04-30 00:58:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 00:58:12.833780734 +0000 UTC m=+1.200740909" watchObservedRunningTime="2025-04-30 00:58:12.8343521 +0000 UTC m=+1.201312315"
Apr 30 00:58:12.863645 kubelet[2679]: I0430 00:58:12.863584 2679 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081-3-3-a-adb74c37b4" podStartSLOduration=1.8635639510000002 podStartE2EDuration="1.863563951s" podCreationTimestamp="2025-04-30 00:58:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 00:58:12.84939662 +0000 UTC m=+1.216356795" watchObservedRunningTime="2025-04-30 00:58:12.863563951 +0000 UTC m=+1.230524126"
Apr 30 00:58:12.884906 kubelet[2679]: I0430 00:58:12.884715 2679 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081-3-3-a-adb74c37b4" podStartSLOduration=1.884687795 podStartE2EDuration="1.884687795s" podCreationTimestamp="2025-04-30 00:58:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 00:58:12.864837396 +0000 UTC m=+1.231797571" watchObservedRunningTime="2025-04-30 00:58:12.884687795 +0000 UTC m=+1.251647970"
Apr 30 00:58:14.471291 sudo[1841]: pam_unix(sudo:session): session closed for user root
Apr 30 00:58:14.631197 sshd[1838]: pam_unix(sshd:session): session closed for user core
Apr 30 00:58:14.636462 systemd[1]: session-7.scope: Deactivated successfully.
Apr 30 00:58:14.636863 systemd[1]: session-7.scope: Consumed 7.165s CPU time, 153.5M memory peak, 0B memory swap peak.
Apr 30 00:58:14.637657 systemd[1]: sshd@6-88.198.162.73:22-139.178.68.195:35292.service: Deactivated successfully.
Apr 30 00:58:14.641242 systemd-logind[1460]: Session 7 logged out. Waiting for processes to exit.
Apr 30 00:58:14.642561 systemd-logind[1460]: Removed session 7.
Apr 30 00:58:16.873177 kubelet[2679]: I0430 00:58:16.873134 2679 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Apr 30 00:58:16.874041 containerd[1485]: time="2025-04-30T00:58:16.873936070Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Apr 30 00:58:16.875200 kubelet[2679]: I0430 00:58:16.874170 2679 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Apr 30 00:58:17.866849 kubelet[2679]: W0430 00:58:17.866812 2679 reflector.go:569] object-"kube-system"/"hubble-server-certs": failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:ci-4081-3-3-a-adb74c37b4" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4081-3-3-a-adb74c37b4' and this object
Apr 30 00:58:17.866988 kubelet[2679]: E0430 00:58:17.866858 2679 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"hubble-server-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"hubble-server-certs\" is forbidden: User \"system:node:ci-4081-3-3-a-adb74c37b4\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4081-3-3-a-adb74c37b4' and this object" logger="UnhandledError"
Apr 30 00:58:17.868298 kubelet[2679]: W0430 00:58:17.868240 2679 reflector.go:569] object-"kube-system"/"cilium-config": failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:ci-4081-3-3-a-adb74c37b4" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4081-3-3-a-adb74c37b4' and this object
Apr 30 00:58:17.868430 kubelet[2679]: E0430 00:58:17.868309 2679 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"cilium-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"cilium-config\" is forbidden: User \"system:node:ci-4081-3-3-a-adb74c37b4\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4081-3-3-a-adb74c37b4' and this object" logger="UnhandledError"
Apr 30 00:58:17.868430 kubelet[2679]: W0430 00:58:17.868379 2679 reflector.go:569] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ci-4081-3-3-a-adb74c37b4" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4081-3-3-a-adb74c37b4' and this object
Apr 30 00:58:17.868430 kubelet[2679]: E0430 00:58:17.868391 2679 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:ci-4081-3-3-a-adb74c37b4\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4081-3-3-a-adb74c37b4' and this object" logger="UnhandledError"
Apr 30 00:58:17.868430 kubelet[2679]: W0430 00:58:17.868427 2679 reflector.go:569] object-"kube-system"/"cilium-clustermesh": failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:ci-4081-3-3-a-adb74c37b4" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4081-3-3-a-adb74c37b4' and this object
Apr 30 00:58:17.868585 kubelet[2679]: E0430 00:58:17.868438 2679 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"cilium-clustermesh\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"cilium-clustermesh\" is forbidden: User \"system:node:ci-4081-3-3-a-adb74c37b4\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4081-3-3-a-adb74c37b4' and this object" logger="UnhandledError"
Apr 30 00:58:17.868585 kubelet[2679]: I0430 00:58:17.868548 2679 status_manager.go:890] "Failed to get status for pod" podUID="f9682ec3-8b06-480d-8336-a286215ab182" pod="kube-system/cilium-97bxf" err="pods \"cilium-97bxf\" is forbidden: User \"system:node:ci-4081-3-3-a-adb74c37b4\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4081-3-3-a-adb74c37b4' and this object"
Apr 30 00:58:17.872256 systemd[1]: Created slice kubepods-burstable-podf9682ec3_8b06_480d_8336_a286215ab182.slice - libcontainer container kubepods-burstable-podf9682ec3_8b06_480d_8336_a286215ab182.slice.
Apr 30 00:58:17.880281 systemd[1]: Created slice kubepods-besteffort-pod0725573c_4d88_46bb_953e_3f701c972a3e.slice - libcontainer container kubepods-besteffort-pod0725573c_4d88_46bb_953e_3f701c972a3e.slice.
Apr 30 00:58:17.885364 kubelet[2679]: I0430 00:58:17.884919 2679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f9682ec3-8b06-480d-8336-a286215ab182-cilium-run\") pod \"cilium-97bxf\" (UID: \"f9682ec3-8b06-480d-8336-a286215ab182\") " pod="kube-system/cilium-97bxf"
Apr 30 00:58:17.885364 kubelet[2679]: I0430 00:58:17.884958 2679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f9682ec3-8b06-480d-8336-a286215ab182-xtables-lock\") pod \"cilium-97bxf\" (UID: \"f9682ec3-8b06-480d-8336-a286215ab182\") " pod="kube-system/cilium-97bxf"
Apr 30 00:58:17.885364 kubelet[2679]: I0430 00:58:17.884976 2679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f9682ec3-8b06-480d-8336-a286215ab182-host-proc-sys-kernel\") pod \"cilium-97bxf\" (UID: \"f9682ec3-8b06-480d-8336-a286215ab182\") " pod="kube-system/cilium-97bxf"
Apr 30 00:58:17.885364 kubelet[2679]: I0430 00:58:17.884993 2679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/0725573c-4d88-46bb-953e-3f701c972a3e-kube-proxy\") pod \"kube-proxy-h9qt5\" (UID: \"0725573c-4d88-46bb-953e-3f701c972a3e\") " pod="kube-system/kube-proxy-h9qt5"
Apr 30 00:58:17.885364 kubelet[2679]: I0430 00:58:17.885011 2679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-85mrd\" (UniqueName: \"kubernetes.io/projected/0725573c-4d88-46bb-953e-3f701c972a3e-kube-api-access-85mrd\") pod \"kube-proxy-h9qt5\" (UID: \"0725573c-4d88-46bb-953e-3f701c972a3e\") " pod="kube-system/kube-proxy-h9qt5"
Apr 30 00:58:17.885784 kubelet[2679]: I0430 00:58:17.885030 2679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f9682ec3-8b06-480d-8336-a286215ab182-clustermesh-secrets\") pod \"cilium-97bxf\" (UID: \"f9682ec3-8b06-480d-8336-a286215ab182\") " pod="kube-system/cilium-97bxf"
Apr 30 00:58:17.885784 kubelet[2679]: I0430 00:58:17.885047 2679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f9682ec3-8b06-480d-8336-a286215ab182-cilium-config-path\") pod \"cilium-97bxf\" (UID: \"f9682ec3-8b06-480d-8336-a286215ab182\") " pod="kube-system/cilium-97bxf"
Apr 30 00:58:17.885784 kubelet[2679]: I0430 00:58:17.885078 2679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0725573c-4d88-46bb-953e-3f701c972a3e-lib-modules\") pod \"kube-proxy-h9qt5\" (UID: \"0725573c-4d88-46bb-953e-3f701c972a3e\") " pod="kube-system/kube-proxy-h9qt5"
Apr 30 00:58:17.885784 kubelet[2679]: I0430 00:58:17.885105 2679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f9682ec3-8b06-480d-8336-a286215ab182-hubble-tls\") pod \"cilium-97bxf\" (UID: \"f9682ec3-8b06-480d-8336-a286215ab182\") " pod="kube-system/cilium-97bxf"
Apr 30 00:58:17.885784 kubelet[2679]: I0430 00:58:17.885124 2679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f9682ec3-8b06-480d-8336-a286215ab182-etc-cni-netd\") pod \"cilium-97bxf\" (UID: \"f9682ec3-8b06-480d-8336-a286215ab182\") " pod="kube-system/cilium-97bxf"
Apr 30 00:58:17.885945 kubelet[2679]: I0430 00:58:17.885148 2679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4p7cj\" (UniqueName: \"kubernetes.io/projected/f9682ec3-8b06-480d-8336-a286215ab182-kube-api-access-4p7cj\") pod \"cilium-97bxf\" (UID: \"f9682ec3-8b06-480d-8336-a286215ab182\") " pod="kube-system/cilium-97bxf"
Apr 30 00:58:17.885945 kubelet[2679]: I0430 00:58:17.885173 2679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f9682ec3-8b06-480d-8336-a286215ab182-hostproc\") pod \"cilium-97bxf\" (UID: \"f9682ec3-8b06-480d-8336-a286215ab182\") " pod="kube-system/cilium-97bxf"
Apr 30 00:58:17.885945 kubelet[2679]: I0430 00:58:17.885189 2679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f9682ec3-8b06-480d-8336-a286215ab182-cni-path\") pod \"cilium-97bxf\" (UID: \"f9682ec3-8b06-480d-8336-a286215ab182\") " pod="kube-system/cilium-97bxf"
Apr 30 00:58:17.885945 kubelet[2679]: I0430 00:58:17.885207 2679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f9682ec3-8b06-480d-8336-a286215ab182-host-proc-sys-net\") pod \"cilium-97bxf\" (UID: \"f9682ec3-8b06-480d-8336-a286215ab182\") " pod="kube-system/cilium-97bxf"
Apr 30 00:58:17.885945 kubelet[2679]: I0430 00:58:17.885223 2679 reconciler_common.go:251]
"operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0725573c-4d88-46bb-953e-3f701c972a3e-xtables-lock\") pod \"kube-proxy-h9qt5\" (UID: \"0725573c-4d88-46bb-953e-3f701c972a3e\") " pod="kube-system/kube-proxy-h9qt5" Apr 30 00:58:17.885945 kubelet[2679]: I0430 00:58:17.885238 2679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f9682ec3-8b06-480d-8336-a286215ab182-bpf-maps\") pod \"cilium-97bxf\" (UID: \"f9682ec3-8b06-480d-8336-a286215ab182\") " pod="kube-system/cilium-97bxf" Apr 30 00:58:17.886098 kubelet[2679]: I0430 00:58:17.885253 2679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f9682ec3-8b06-480d-8336-a286215ab182-cilium-cgroup\") pod \"cilium-97bxf\" (UID: \"f9682ec3-8b06-480d-8336-a286215ab182\") " pod="kube-system/cilium-97bxf" Apr 30 00:58:17.886098 kubelet[2679]: I0430 00:58:17.885291 2679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f9682ec3-8b06-480d-8336-a286215ab182-lib-modules\") pod \"cilium-97bxf\" (UID: \"f9682ec3-8b06-480d-8336-a286215ab182\") " pod="kube-system/cilium-97bxf" Apr 30 00:58:17.995335 systemd[1]: Created slice kubepods-besteffort-pod0515d739_6d92_4318_a36a_6a9e3cd51ecf.slice - libcontainer container kubepods-besteffort-pod0515d739_6d92_4318_a36a_6a9e3cd51ecf.slice. 
Apr 30 00:58:18.086814 kubelet[2679]: I0430 00:58:18.086593 2679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0515d739-6d92-4318-a36a-6a9e3cd51ecf-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-fgwhx\" (UID: \"0515d739-6d92-4318-a36a-6a9e3cd51ecf\") " pod="kube-system/cilium-operator-6c4d7847fc-fgwhx" Apr 30 00:58:18.086814 kubelet[2679]: I0430 00:58:18.086687 2679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fnbpd\" (UniqueName: \"kubernetes.io/projected/0515d739-6d92-4318-a36a-6a9e3cd51ecf-kube-api-access-fnbpd\") pod \"cilium-operator-6c4d7847fc-fgwhx\" (UID: \"0515d739-6d92-4318-a36a-6a9e3cd51ecf\") " pod="kube-system/cilium-operator-6c4d7847fc-fgwhx" Apr 30 00:58:18.779142 containerd[1485]: time="2025-04-30T00:58:18.779058688Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-97bxf,Uid:f9682ec3-8b06-480d-8336-a286215ab182,Namespace:kube-system,Attempt:0,}" Apr 30 00:58:18.792850 containerd[1485]: time="2025-04-30T00:58:18.792467416Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-h9qt5,Uid:0725573c-4d88-46bb-953e-3f701c972a3e,Namespace:kube-system,Attempt:0,}" Apr 30 00:58:18.804071 containerd[1485]: time="2025-04-30T00:58:18.803196293Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 00:58:18.804071 containerd[1485]: time="2025-04-30T00:58:18.803335287Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 00:58:18.804071 containerd[1485]: time="2025-04-30T00:58:18.803354247Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:58:18.804071 containerd[1485]: time="2025-04-30T00:58:18.803460882Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:58:18.824513 systemd[1]: Started cri-containerd-2a796d2b27f4bfa84059d0be769f3a9c3add4448dd997fce4558e1e7a3635e35.scope - libcontainer container 2a796d2b27f4bfa84059d0be769f3a9c3add4448dd997fce4558e1e7a3635e35. Apr 30 00:58:18.831499 containerd[1485]: time="2025-04-30T00:58:18.830276577Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 00:58:18.831499 containerd[1485]: time="2025-04-30T00:58:18.830412011Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 00:58:18.833311 containerd[1485]: time="2025-04-30T00:58:18.832245015Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:58:18.834905 containerd[1485]: time="2025-04-30T00:58:18.833590040Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:58:18.859446 systemd[1]: Started cri-containerd-812d78c118978827c02a2ff88444b7496daf1498db6dc7f1051a8075d216e82a.scope - libcontainer container 812d78c118978827c02a2ff88444b7496daf1498db6dc7f1051a8075d216e82a. 
Apr 30 00:58:18.870901 containerd[1485]: time="2025-04-30T00:58:18.870446400Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-97bxf,Uid:f9682ec3-8b06-480d-8336-a286215ab182,Namespace:kube-system,Attempt:0,} returns sandbox id \"2a796d2b27f4bfa84059d0be769f3a9c3add4448dd997fce4558e1e7a3635e35\"" Apr 30 00:58:18.878485 containerd[1485]: time="2025-04-30T00:58:18.878350155Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Apr 30 00:58:18.896652 containerd[1485]: time="2025-04-30T00:58:18.896614281Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-h9qt5,Uid:0725573c-4d88-46bb-953e-3f701c972a3e,Namespace:kube-system,Attempt:0,} returns sandbox id \"812d78c118978827c02a2ff88444b7496daf1498db6dc7f1051a8075d216e82a\"" Apr 30 00:58:18.898198 containerd[1485]: time="2025-04-30T00:58:18.898136619Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-fgwhx,Uid:0515d739-6d92-4318-a36a-6a9e3cd51ecf,Namespace:kube-system,Attempt:0,}" Apr 30 00:58:18.901748 containerd[1485]: time="2025-04-30T00:58:18.901556478Z" level=info msg="CreateContainer within sandbox \"812d78c118978827c02a2ff88444b7496daf1498db6dc7f1051a8075d216e82a\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Apr 30 00:58:18.922864 containerd[1485]: time="2025-04-30T00:58:18.922716645Z" level=info msg="CreateContainer within sandbox \"812d78c118978827c02a2ff88444b7496daf1498db6dc7f1051a8075d216e82a\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"f9cf1d6a153b0bc7d94b698a2d347cca2402d4195afa296d375b197dad1df541\"" Apr 30 00:58:18.923973 containerd[1485]: time="2025-04-30T00:58:18.923938715Z" level=info msg="StartContainer for \"f9cf1d6a153b0bc7d94b698a2d347cca2402d4195afa296d375b197dad1df541\"" Apr 30 00:58:18.931806 containerd[1485]: time="2025-04-30T00:58:18.931671796Z" level=info msg="loading plugin 
\"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 00:58:18.931806 containerd[1485]: time="2025-04-30T00:58:18.931746473Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 00:58:18.932768 containerd[1485]: time="2025-04-30T00:58:18.932478363Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:58:18.932768 containerd[1485]: time="2025-04-30T00:58:18.932669315Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:58:18.959627 systemd[1]: Started cri-containerd-f9cf1d6a153b0bc7d94b698a2d347cca2402d4195afa296d375b197dad1df541.scope - libcontainer container f9cf1d6a153b0bc7d94b698a2d347cca2402d4195afa296d375b197dad1df541. Apr 30 00:58:18.969741 systemd[1]: Started cri-containerd-d12a285659da6f0f58b379fc3c05780c7675092331135c38ce0eb0ddf09229ad.scope - libcontainer container d12a285659da6f0f58b379fc3c05780c7675092331135c38ce0eb0ddf09229ad. 
Apr 30 00:58:18.999888 containerd[1485]: time="2025-04-30T00:58:18.999777348Z" level=info msg="StartContainer for \"f9cf1d6a153b0bc7d94b698a2d347cca2402d4195afa296d375b197dad1df541\" returns successfully" Apr 30 00:58:19.021011 containerd[1485]: time="2025-04-30T00:58:19.020902647Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-fgwhx,Uid:0515d739-6d92-4318-a36a-6a9e3cd51ecf,Namespace:kube-system,Attempt:0,} returns sandbox id \"d12a285659da6f0f58b379fc3c05780c7675092331135c38ce0eb0ddf09229ad\"" Apr 30 00:58:19.847629 kubelet[2679]: I0430 00:58:19.847547 2679 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-h9qt5" podStartSLOduration=2.8475296119999998 podStartE2EDuration="2.847529612s" podCreationTimestamp="2025-04-30 00:58:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 00:58:19.847408537 +0000 UTC m=+8.214368752" watchObservedRunningTime="2025-04-30 00:58:19.847529612 +0000 UTC m=+8.214489827" Apr 30 00:58:24.955009 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3914198817.mount: Deactivated successfully. 
Apr 30 00:58:26.364293 containerd[1485]: time="2025-04-30T00:58:26.364051950Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:58:26.366055 containerd[1485]: time="2025-04-30T00:58:26.365960187Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710" Apr 30 00:58:26.367007 containerd[1485]: time="2025-04-30T00:58:26.366932765Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:58:26.368963 containerd[1485]: time="2025-04-30T00:58:26.368820202Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 7.490416689s" Apr 30 00:58:26.368963 containerd[1485]: time="2025-04-30T00:58:26.368866961Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Apr 30 00:58:26.372207 containerd[1485]: time="2025-04-30T00:58:26.371910931Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Apr 30 00:58:26.374136 containerd[1485]: time="2025-04-30T00:58:26.374026883Z" level=info msg="CreateContainer within sandbox \"2a796d2b27f4bfa84059d0be769f3a9c3add4448dd997fce4558e1e7a3635e35\" for container 
&ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Apr 30 00:58:26.390887 containerd[1485]: time="2025-04-30T00:58:26.390820581Z" level=info msg="CreateContainer within sandbox \"2a796d2b27f4bfa84059d0be769f3a9c3add4448dd997fce4558e1e7a3635e35\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"d3498da2e6b9496dc614651d7895f9184596f90a56753f4153547e91d77bd613\"" Apr 30 00:58:26.391884 containerd[1485]: time="2025-04-30T00:58:26.391740800Z" level=info msg="StartContainer for \"d3498da2e6b9496dc614651d7895f9184596f90a56753f4153547e91d77bd613\"" Apr 30 00:58:26.427640 systemd[1]: Started cri-containerd-d3498da2e6b9496dc614651d7895f9184596f90a56753f4153547e91d77bd613.scope - libcontainer container d3498da2e6b9496dc614651d7895f9184596f90a56753f4153547e91d77bd613. Apr 30 00:58:26.458755 containerd[1485]: time="2025-04-30T00:58:26.458712156Z" level=info msg="StartContainer for \"d3498da2e6b9496dc614651d7895f9184596f90a56753f4153547e91d77bd613\" returns successfully" Apr 30 00:58:26.471929 systemd[1]: cri-containerd-d3498da2e6b9496dc614651d7895f9184596f90a56753f4153547e91d77bd613.scope: Deactivated successfully. Apr 30 00:58:26.494062 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d3498da2e6b9496dc614651d7895f9184596f90a56753f4153547e91d77bd613-rootfs.mount: Deactivated successfully. 
Apr 30 00:58:26.642405 containerd[1485]: time="2025-04-30T00:58:26.641994225Z" level=info msg="shim disconnected" id=d3498da2e6b9496dc614651d7895f9184596f90a56753f4153547e91d77bd613 namespace=k8s.io Apr 30 00:58:26.642405 containerd[1485]: time="2025-04-30T00:58:26.642205340Z" level=warning msg="cleaning up after shim disconnected" id=d3498da2e6b9496dc614651d7895f9184596f90a56753f4153547e91d77bd613 namespace=k8s.io Apr 30 00:58:26.642405 containerd[1485]: time="2025-04-30T00:58:26.642219060Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 00:58:26.855462 containerd[1485]: time="2025-04-30T00:58:26.855382129Z" level=info msg="CreateContainer within sandbox \"2a796d2b27f4bfa84059d0be769f3a9c3add4448dd997fce4558e1e7a3635e35\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Apr 30 00:58:26.873115 containerd[1485]: time="2025-04-30T00:58:26.873039247Z" level=info msg="CreateContainer within sandbox \"2a796d2b27f4bfa84059d0be769f3a9c3add4448dd997fce4558e1e7a3635e35\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"311cd946d26aa779a27634f40adbc86da20d9a1c128e6325ea634a064abdbb4c\"" Apr 30 00:58:26.875084 containerd[1485]: time="2025-04-30T00:58:26.874063104Z" level=info msg="StartContainer for \"311cd946d26aa779a27634f40adbc86da20d9a1c128e6325ea634a064abdbb4c\"" Apr 30 00:58:26.902459 systemd[1]: Started cri-containerd-311cd946d26aa779a27634f40adbc86da20d9a1c128e6325ea634a064abdbb4c.scope - libcontainer container 311cd946d26aa779a27634f40adbc86da20d9a1c128e6325ea634a064abdbb4c. Apr 30 00:58:26.931839 containerd[1485]: time="2025-04-30T00:58:26.931788390Z" level=info msg="StartContainer for \"311cd946d26aa779a27634f40adbc86da20d9a1c128e6325ea634a064abdbb4c\" returns successfully" Apr 30 00:58:26.949649 systemd[1]: systemd-sysctl.service: Deactivated successfully. Apr 30 00:58:26.949878 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. 
Apr 30 00:58:26.949951 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Apr 30 00:58:26.960275 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Apr 30 00:58:26.960692 systemd[1]: cri-containerd-311cd946d26aa779a27634f40adbc86da20d9a1c128e6325ea634a064abdbb4c.scope: Deactivated successfully. Apr 30 00:58:26.985643 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Apr 30 00:58:26.996859 containerd[1485]: time="2025-04-30T00:58:26.996798991Z" level=info msg="shim disconnected" id=311cd946d26aa779a27634f40adbc86da20d9a1c128e6325ea634a064abdbb4c namespace=k8s.io Apr 30 00:58:26.996859 containerd[1485]: time="2025-04-30T00:58:26.996856629Z" level=warning msg="cleaning up after shim disconnected" id=311cd946d26aa779a27634f40adbc86da20d9a1c128e6325ea634a064abdbb4c namespace=k8s.io Apr 30 00:58:26.996859 containerd[1485]: time="2025-04-30T00:58:26.996865349Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 00:58:27.862411 containerd[1485]: time="2025-04-30T00:58:27.861703068Z" level=info msg="CreateContainer within sandbox \"2a796d2b27f4bfa84059d0be769f3a9c3add4448dd997fce4558e1e7a3635e35\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Apr 30 00:58:27.886565 containerd[1485]: time="2025-04-30T00:58:27.886434875Z" level=info msg="CreateContainer within sandbox \"2a796d2b27f4bfa84059d0be769f3a9c3add4448dd997fce4558e1e7a3635e35\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"bd386815d411a847cc3cd0764bb0f916e309e9550da38c8068b14b046a40fe54\"" Apr 30 00:58:27.888451 containerd[1485]: time="2025-04-30T00:58:27.888403514Z" level=info msg="StartContainer for \"bd386815d411a847cc3cd0764bb0f916e309e9550da38c8068b14b046a40fe54\"" Apr 30 00:58:27.920464 systemd[1]: Started cri-containerd-bd386815d411a847cc3cd0764bb0f916e309e9550da38c8068b14b046a40fe54.scope - libcontainer container bd386815d411a847cc3cd0764bb0f916e309e9550da38c8068b14b046a40fe54. 
Apr 30 00:58:27.947668 containerd[1485]: time="2025-04-30T00:58:27.947619564Z" level=info msg="StartContainer for \"bd386815d411a847cc3cd0764bb0f916e309e9550da38c8068b14b046a40fe54\" returns successfully" Apr 30 00:58:27.954848 systemd[1]: cri-containerd-bd386815d411a847cc3cd0764bb0f916e309e9550da38c8068b14b046a40fe54.scope: Deactivated successfully. Apr 30 00:58:27.988378 containerd[1485]: time="2025-04-30T00:58:27.988046045Z" level=info msg="shim disconnected" id=bd386815d411a847cc3cd0764bb0f916e309e9550da38c8068b14b046a40fe54 namespace=k8s.io Apr 30 00:58:27.988378 containerd[1485]: time="2025-04-30T00:58:27.988126244Z" level=warning msg="cleaning up after shim disconnected" id=bd386815d411a847cc3cd0764bb0f916e309e9550da38c8068b14b046a40fe54 namespace=k8s.io Apr 30 00:58:27.988378 containerd[1485]: time="2025-04-30T00:58:27.988140483Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 00:58:28.387642 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bd386815d411a847cc3cd0764bb0f916e309e9550da38c8068b14b046a40fe54-rootfs.mount: Deactivated successfully. Apr 30 00:58:28.635909 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount470858318.mount: Deactivated successfully. 
Apr 30 00:58:28.868701 containerd[1485]: time="2025-04-30T00:58:28.868656683Z" level=info msg="CreateContainer within sandbox \"2a796d2b27f4bfa84059d0be769f3a9c3add4448dd997fce4558e1e7a3635e35\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Apr 30 00:58:28.890353 containerd[1485]: time="2025-04-30T00:58:28.890140119Z" level=info msg="CreateContainer within sandbox \"2a796d2b27f4bfa84059d0be769f3a9c3add4448dd997fce4558e1e7a3635e35\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"0cab5484701f0e86a6f3bc817fafb10fe2270b588d76de157fbcc2d384951442\"" Apr 30 00:58:28.892751 containerd[1485]: time="2025-04-30T00:58:28.892671591Z" level=info msg="StartContainer for \"0cab5484701f0e86a6f3bc817fafb10fe2270b588d76de157fbcc2d384951442\"" Apr 30 00:58:28.934750 systemd[1]: Started cri-containerd-0cab5484701f0e86a6f3bc817fafb10fe2270b588d76de157fbcc2d384951442.scope - libcontainer container 0cab5484701f0e86a6f3bc817fafb10fe2270b588d76de157fbcc2d384951442. Apr 30 00:58:28.973850 systemd[1]: cri-containerd-0cab5484701f0e86a6f3bc817fafb10fe2270b588d76de157fbcc2d384951442.scope: Deactivated successfully. 
Apr 30 00:58:28.977919 containerd[1485]: time="2025-04-30T00:58:28.977359517Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf9682ec3_8b06_480d_8336_a286215ab182.slice/cri-containerd-0cab5484701f0e86a6f3bc817fafb10fe2270b588d76de157fbcc2d384951442.scope/memory.events\": no such file or directory" Apr 30 00:58:28.979973 containerd[1485]: time="2025-04-30T00:58:28.979920428Z" level=info msg="StartContainer for \"0cab5484701f0e86a6f3bc817fafb10fe2270b588d76de157fbcc2d384951442\" returns successfully" Apr 30 00:58:29.032991 containerd[1485]: time="2025-04-30T00:58:29.032870213Z" level=info msg="shim disconnected" id=0cab5484701f0e86a6f3bc817fafb10fe2270b588d76de157fbcc2d384951442 namespace=k8s.io Apr 30 00:58:29.032991 containerd[1485]: time="2025-04-30T00:58:29.032928052Z" level=warning msg="cleaning up after shim disconnected" id=0cab5484701f0e86a6f3bc817fafb10fe2270b588d76de157fbcc2d384951442 namespace=k8s.io Apr 30 00:58:29.032991 containerd[1485]: time="2025-04-30T00:58:29.032939411Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 00:58:29.127232 containerd[1485]: time="2025-04-30T00:58:29.127087815Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:58:29.128734 containerd[1485]: time="2025-04-30T00:58:29.128667669Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306" Apr 30 00:58:29.129866 containerd[1485]: time="2025-04-30T00:58:29.129296818Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 
00:58:29.132137 containerd[1485]: time="2025-04-30T00:58:29.132003572Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 2.760001923s" Apr 30 00:58:29.132137 containerd[1485]: time="2025-04-30T00:58:29.132051451Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Apr 30 00:58:29.135895 containerd[1485]: time="2025-04-30T00:58:29.135729589Z" level=info msg="CreateContainer within sandbox \"d12a285659da6f0f58b379fc3c05780c7675092331135c38ce0eb0ddf09229ad\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Apr 30 00:58:29.153242 containerd[1485]: time="2025-04-30T00:58:29.153157013Z" level=info msg="CreateContainer within sandbox \"d12a285659da6f0f58b379fc3c05780c7675092331135c38ce0eb0ddf09229ad\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"277ef2f89eb1adda336da0e9ebfdb2131cc88fcc9408aa004afd73fe861ddf86\"" Apr 30 00:58:29.157139 containerd[1485]: time="2025-04-30T00:58:29.155906407Z" level=info msg="StartContainer for \"277ef2f89eb1adda336da0e9ebfdb2131cc88fcc9408aa004afd73fe861ddf86\"" Apr 30 00:58:29.194625 systemd[1]: Started cri-containerd-277ef2f89eb1adda336da0e9ebfdb2131cc88fcc9408aa004afd73fe861ddf86.scope - libcontainer container 277ef2f89eb1adda336da0e9ebfdb2131cc88fcc9408aa004afd73fe861ddf86. 
Apr 30 00:58:29.226711 containerd[1485]: time="2025-04-30T00:58:29.226645088Z" level=info msg="StartContainer for \"277ef2f89eb1adda336da0e9ebfdb2131cc88fcc9408aa004afd73fe861ddf86\" returns successfully" Apr 30 00:58:29.874238 containerd[1485]: time="2025-04-30T00:58:29.874078072Z" level=info msg="CreateContainer within sandbox \"2a796d2b27f4bfa84059d0be769f3a9c3add4448dd997fce4558e1e7a3635e35\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Apr 30 00:58:29.893667 containerd[1485]: time="2025-04-30T00:58:29.893513382Z" level=info msg="CreateContainer within sandbox \"2a796d2b27f4bfa84059d0be769f3a9c3add4448dd997fce4558e1e7a3635e35\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"3996524ae489f385d28a2b58d10c10a65638715ddcd28d6912668522411dd566\"" Apr 30 00:58:29.894986 containerd[1485]: time="2025-04-30T00:58:29.894007214Z" level=info msg="StartContainer for \"3996524ae489f385d28a2b58d10c10a65638715ddcd28d6912668522411dd566\"" Apr 30 00:58:29.936433 systemd[1]: Started cri-containerd-3996524ae489f385d28a2b58d10c10a65638715ddcd28d6912668522411dd566.scope - libcontainer container 3996524ae489f385d28a2b58d10c10a65638715ddcd28d6912668522411dd566. 
Apr 30 00:58:30.007098 containerd[1485]: time="2025-04-30T00:58:30.007054630Z" level=info msg="StartContainer for \"3996524ae489f385d28a2b58d10c10a65638715ddcd28d6912668522411dd566\" returns successfully" Apr 30 00:58:30.156721 kubelet[2679]: I0430 00:58:30.156398 2679 kubelet_node_status.go:502] "Fast updating node status as it just became ready" Apr 30 00:58:30.187216 kubelet[2679]: I0430 00:58:30.185291 2679 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-fgwhx" podStartSLOduration=3.074275569 podStartE2EDuration="13.185251253s" podCreationTimestamp="2025-04-30 00:58:17 +0000 UTC" firstStartedPulling="2025-04-30 00:58:19.022622141 +0000 UTC m=+7.389582316" lastFinishedPulling="2025-04-30 00:58:29.133597825 +0000 UTC m=+17.500558000" observedRunningTime="2025-04-30 00:58:29.980000236 +0000 UTC m=+18.346960411" watchObservedRunningTime="2025-04-30 00:58:30.185251253 +0000 UTC m=+18.552211428" Apr 30 00:58:30.198084 systemd[1]: Created slice kubepods-burstable-podc831f29d_9dea_4c2c_a945_01af282987a3.slice - libcontainer container kubepods-burstable-podc831f29d_9dea_4c2c_a945_01af282987a3.slice. Apr 30 00:58:30.206359 systemd[1]: Created slice kubepods-burstable-pod763e7098_e478_4644_b35e_7a42ec7e4d51.slice - libcontainer container kubepods-burstable-pod763e7098_e478_4644_b35e_7a42ec7e4d51.slice. 
Apr 30 00:58:30.270393 kubelet[2679]: I0430 00:58:30.270352 2679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kn4nq\" (UniqueName: \"kubernetes.io/projected/763e7098-e478-4644-b35e-7a42ec7e4d51-kube-api-access-kn4nq\") pod \"coredns-668d6bf9bc-f69d7\" (UID: \"763e7098-e478-4644-b35e-7a42ec7e4d51\") " pod="kube-system/coredns-668d6bf9bc-f69d7" Apr 30 00:58:30.270738 kubelet[2679]: I0430 00:58:30.270613 2679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c831f29d-9dea-4c2c-a945-01af282987a3-config-volume\") pod \"coredns-668d6bf9bc-k2rp6\" (UID: \"c831f29d-9dea-4c2c-a945-01af282987a3\") " pod="kube-system/coredns-668d6bf9bc-k2rp6" Apr 30 00:58:30.270738 kubelet[2679]: I0430 00:58:30.270647 2679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/763e7098-e478-4644-b35e-7a42ec7e4d51-config-volume\") pod \"coredns-668d6bf9bc-f69d7\" (UID: \"763e7098-e478-4644-b35e-7a42ec7e4d51\") " pod="kube-system/coredns-668d6bf9bc-f69d7" Apr 30 00:58:30.270738 kubelet[2679]: I0430 00:58:30.270669 2679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-crk7j\" (UniqueName: \"kubernetes.io/projected/c831f29d-9dea-4c2c-a945-01af282987a3-kube-api-access-crk7j\") pod \"coredns-668d6bf9bc-k2rp6\" (UID: \"c831f29d-9dea-4c2c-a945-01af282987a3\") " pod="kube-system/coredns-668d6bf9bc-k2rp6" Apr 30 00:58:30.504281 containerd[1485]: time="2025-04-30T00:58:30.503449516Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-k2rp6,Uid:c831f29d-9dea-4c2c-a945-01af282987a3,Namespace:kube-system,Attempt:0,}" Apr 30 00:58:30.512116 containerd[1485]: time="2025-04-30T00:58:30.511877988Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-668d6bf9bc-f69d7,Uid:763e7098-e478-4644-b35e-7a42ec7e4d51,Namespace:kube-system,Attempt:0,}" Apr 30 00:58:33.123987 systemd-networkd[1381]: cilium_host: Link UP Apr 30 00:58:33.125617 systemd-networkd[1381]: cilium_net: Link UP Apr 30 00:58:33.125629 systemd-networkd[1381]: cilium_net: Gained carrier Apr 30 00:58:33.126821 systemd-networkd[1381]: cilium_host: Gained carrier Apr 30 00:58:33.245171 systemd-networkd[1381]: cilium_host: Gained IPv6LL Apr 30 00:58:33.250657 systemd-networkd[1381]: cilium_vxlan: Link UP Apr 30 00:58:33.251069 systemd-networkd[1381]: cilium_vxlan: Gained carrier Apr 30 00:58:33.536427 kernel: NET: Registered PF_ALG protocol family Apr 30 00:58:33.660434 systemd-networkd[1381]: cilium_net: Gained IPv6LL Apr 30 00:58:34.275698 systemd-networkd[1381]: lxc_health: Link UP Apr 30 00:58:34.288560 systemd-networkd[1381]: lxc_health: Gained carrier Apr 30 00:58:34.578201 systemd-networkd[1381]: lxc1a0d875cd93e: Link UP Apr 30 00:58:34.584294 kernel: eth0: renamed from tmp6ae15 Apr 30 00:58:34.594691 systemd-networkd[1381]: lxc1a0d875cd93e: Gained carrier Apr 30 00:58:34.597350 systemd-networkd[1381]: lxc747e42d0b10c: Link UP Apr 30 00:58:34.602992 kernel: eth0: renamed from tmpfa53a Apr 30 00:58:34.613129 systemd-networkd[1381]: lxc747e42d0b10c: Gained carrier Apr 30 00:58:34.811848 kubelet[2679]: I0430 00:58:34.811626 2679 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-97bxf" podStartSLOduration=10.315833776 podStartE2EDuration="17.811606967s" podCreationTimestamp="2025-04-30 00:58:17 +0000 UTC" firstStartedPulling="2025-04-30 00:58:18.875348718 +0000 UTC m=+7.242308893" lastFinishedPulling="2025-04-30 00:58:26.371121909 +0000 UTC m=+14.738082084" observedRunningTime="2025-04-30 00:58:30.91473861 +0000 UTC m=+19.281698905" watchObservedRunningTime="2025-04-30 00:58:34.811606967 +0000 UTC m=+23.178567142" Apr 30 00:58:35.324878 systemd-networkd[1381]: cilium_vxlan: 
Gained IPv6LL Apr 30 00:58:35.644834 systemd-networkd[1381]: lxc1a0d875cd93e: Gained IPv6LL Apr 30 00:58:35.645296 systemd-networkd[1381]: lxc_health: Gained IPv6LL Apr 30 00:58:36.413998 systemd-networkd[1381]: lxc747e42d0b10c: Gained IPv6LL Apr 30 00:58:38.684619 containerd[1485]: time="2025-04-30T00:58:38.684137516Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 00:58:38.684954 containerd[1485]: time="2025-04-30T00:58:38.684655355Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 00:58:38.684954 containerd[1485]: time="2025-04-30T00:58:38.684726355Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:58:38.685104 containerd[1485]: time="2025-04-30T00:58:38.684974874Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:58:38.708637 systemd[1]: run-containerd-runc-k8s.io-6ae15c1c0255920ae560a0e44f02e432f9343f1b7c83cc0efe52357600b0aa2d-runc.WAhpat.mount: Deactivated successfully. Apr 30 00:58:38.710672 containerd[1485]: time="2025-04-30T00:58:38.709846411Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 00:58:38.710672 containerd[1485]: time="2025-04-30T00:58:38.709909971Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 00:58:38.710672 containerd[1485]: time="2025-04-30T00:58:38.709926091Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:58:38.710672 containerd[1485]: time="2025-04-30T00:58:38.710038491Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:58:38.720448 systemd[1]: Started cri-containerd-6ae15c1c0255920ae560a0e44f02e432f9343f1b7c83cc0efe52357600b0aa2d.scope - libcontainer container 6ae15c1c0255920ae560a0e44f02e432f9343f1b7c83cc0efe52357600b0aa2d. Apr 30 00:58:38.753510 systemd[1]: Started cri-containerd-fa53a01f1d00e04b4c51e071603ca7273d568d461bc46c35677879534eceb880.scope - libcontainer container fa53a01f1d00e04b4c51e071603ca7273d568d461bc46c35677879534eceb880. Apr 30 00:58:38.804070 containerd[1485]: time="2025-04-30T00:58:38.803516496Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-f69d7,Uid:763e7098-e478-4644-b35e-7a42ec7e4d51,Namespace:kube-system,Attempt:0,} returns sandbox id \"6ae15c1c0255920ae560a0e44f02e432f9343f1b7c83cc0efe52357600b0aa2d\"" Apr 30 00:58:38.811560 containerd[1485]: time="2025-04-30T00:58:38.811508556Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-k2rp6,Uid:c831f29d-9dea-4c2c-a945-01af282987a3,Namespace:kube-system,Attempt:0,} returns sandbox id \"fa53a01f1d00e04b4c51e071603ca7273d568d461bc46c35677879534eceb880\"" Apr 30 00:58:38.812139 containerd[1485]: time="2025-04-30T00:58:38.812107874Z" level=info msg="CreateContainer within sandbox \"6ae15c1c0255920ae560a0e44f02e432f9343f1b7c83cc0efe52357600b0aa2d\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 30 00:58:38.818397 containerd[1485]: time="2025-04-30T00:58:38.818180459Z" level=info msg="CreateContainer within sandbox \"fa53a01f1d00e04b4c51e071603ca7273d568d461bc46c35677879534eceb880\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 30 00:58:38.840314 containerd[1485]: time="2025-04-30T00:58:38.840197243Z" level=info msg="CreateContainer within sandbox 
\"6ae15c1c0255920ae560a0e44f02e432f9343f1b7c83cc0efe52357600b0aa2d\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"c7e59fe8d4e45106c6350b610cdcd2ecbd7ebc1b7689200d2916a3e8aee0113b\"" Apr 30 00:58:38.841932 containerd[1485]: time="2025-04-30T00:58:38.841883279Z" level=info msg="StartContainer for \"c7e59fe8d4e45106c6350b610cdcd2ecbd7ebc1b7689200d2916a3e8aee0113b\"" Apr 30 00:58:38.846318 containerd[1485]: time="2025-04-30T00:58:38.846236228Z" level=info msg="CreateContainer within sandbox \"fa53a01f1d00e04b4c51e071603ca7273d568d461bc46c35677879534eceb880\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"325cdaeeb320e2addfe8e7be2e034818ab67dd3dfdeda1f26bd11f9eafaad999\"" Apr 30 00:58:38.848311 containerd[1485]: time="2025-04-30T00:58:38.848127543Z" level=info msg="StartContainer for \"325cdaeeb320e2addfe8e7be2e034818ab67dd3dfdeda1f26bd11f9eafaad999\"" Apr 30 00:58:38.891441 systemd[1]: Started cri-containerd-325cdaeeb320e2addfe8e7be2e034818ab67dd3dfdeda1f26bd11f9eafaad999.scope - libcontainer container 325cdaeeb320e2addfe8e7be2e034818ab67dd3dfdeda1f26bd11f9eafaad999. Apr 30 00:58:38.894162 systemd[1]: Started cri-containerd-c7e59fe8d4e45106c6350b610cdcd2ecbd7ebc1b7689200d2916a3e8aee0113b.scope - libcontainer container c7e59fe8d4e45106c6350b610cdcd2ecbd7ebc1b7689200d2916a3e8aee0113b. 
Apr 30 00:58:38.944935 containerd[1485]: time="2025-04-30T00:58:38.943649703Z" level=info msg="StartContainer for \"325cdaeeb320e2addfe8e7be2e034818ab67dd3dfdeda1f26bd11f9eafaad999\" returns successfully" Apr 30 00:58:38.950399 containerd[1485]: time="2025-04-30T00:58:38.950166527Z" level=info msg="StartContainer for \"c7e59fe8d4e45106c6350b610cdcd2ecbd7ebc1b7689200d2916a3e8aee0113b\" returns successfully" Apr 30 00:58:39.944074 kubelet[2679]: I0430 00:58:39.943980 2679 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-f69d7" podStartSLOduration=22.943959953 podStartE2EDuration="22.943959953s" podCreationTimestamp="2025-04-30 00:58:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 00:58:39.942871434 +0000 UTC m=+28.309831609" watchObservedRunningTime="2025-04-30 00:58:39.943959953 +0000 UTC m=+28.310920128" Apr 30 00:58:39.963358 kubelet[2679]: I0430 00:58:39.963184 2679 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-k2rp6" podStartSLOduration=22.963165611 podStartE2EDuration="22.963165611s" podCreationTimestamp="2025-04-30 00:58:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 00:58:39.959666295 +0000 UTC m=+28.326626470" watchObservedRunningTime="2025-04-30 00:58:39.963165611 +0000 UTC m=+28.330125746" Apr 30 01:01:27.988377 update_engine[1461]: I20250430 01:01:27.987540 1461 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Apr 30 01:01:27.988377 update_engine[1461]: I20250430 01:01:27.987636 1461 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Apr 30 01:01:27.988377 update_engine[1461]: I20250430 01:01:27.988016 1461 prefs.cc:52] aleph-version not present in 
/var/lib/update_engine/prefs Apr 30 01:01:27.992440 update_engine[1461]: I20250430 01:01:27.989310 1461 omaha_request_params.cc:62] Current group set to lts Apr 30 01:01:27.992440 update_engine[1461]: I20250430 01:01:27.989527 1461 update_attempter.cc:499] Already updated boot flags. Skipping. Apr 30 01:01:27.992440 update_engine[1461]: I20250430 01:01:27.989554 1461 update_attempter.cc:643] Scheduling an action processor start. Apr 30 01:01:27.992440 update_engine[1461]: I20250430 01:01:27.989595 1461 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Apr 30 01:01:27.992440 update_engine[1461]: I20250430 01:01:27.989674 1461 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Apr 30 01:01:27.992440 update_engine[1461]: I20250430 01:01:27.989805 1461 omaha_request_action.cc:271] Posting an Omaha request to disabled Apr 30 01:01:27.992440 update_engine[1461]: I20250430 01:01:27.989822 1461 omaha_request_action.cc:272] Request: Apr 30 01:01:27.992440 update_engine[1461]: Apr 30 01:01:27.992440 update_engine[1461]: Apr 30 01:01:27.992440 update_engine[1461]: Apr 30 01:01:27.992440 update_engine[1461]: Apr 30 01:01:27.992440 update_engine[1461]: Apr 30 01:01:27.992440 update_engine[1461]: Apr 30 01:01:27.992440 update_engine[1461]: Apr 30 01:01:27.992440 update_engine[1461]: Apr 30 01:01:27.992440 update_engine[1461]: I20250430 01:01:27.989836 1461 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Apr 30 01:01:27.992440 update_engine[1461]: I20250430 01:01:27.991768 1461 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Apr 30 01:01:27.992440 update_engine[1461]: I20250430 01:01:27.992209 1461 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Apr 30 01:01:27.994238 locksmithd[1515]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Apr 30 01:01:27.994682 update_engine[1461]: E20250430 01:01:27.993717 1461 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Apr 30 01:01:27.994682 update_engine[1461]: I20250430 01:01:27.993815 1461 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Apr 30 01:01:37.900080 update_engine[1461]: I20250430 01:01:37.899981 1461 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Apr 30 01:01:37.900664 update_engine[1461]: I20250430 01:01:37.900360 1461 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Apr 30 01:01:37.900715 update_engine[1461]: I20250430 01:01:37.900665 1461 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Apr 30 01:01:37.901489 update_engine[1461]: E20250430 01:01:37.901423 1461 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Apr 30 01:01:37.901604 update_engine[1461]: I20250430 01:01:37.901525 1461 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Apr 30 01:01:47.901601 update_engine[1461]: I20250430 01:01:47.901456 1461 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Apr 30 01:01:47.902165 update_engine[1461]: I20250430 01:01:47.901858 1461 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Apr 30 01:01:47.902238 update_engine[1461]: I20250430 01:01:47.902181 1461 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Apr 30 01:01:47.903140 update_engine[1461]: E20250430 01:01:47.903051 1461 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Apr 30 01:01:47.903292 update_engine[1461]: I20250430 01:01:47.903160 1461 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Apr 30 01:01:54.440563 kubelet[2679]: I0430 01:01:54.440478 2679 ???:1] "http: TLS handshake error from 162.142.125.196:55520: read tcp 88.198.162.73:10250->162.142.125.196:55520: read: connection reset by peer" Apr 30 01:01:57.900583 update_engine[1461]: I20250430 01:01:57.900394 1461 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Apr 30 01:01:57.901119 update_engine[1461]: I20250430 01:01:57.900803 1461 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Apr 30 01:01:57.901236 update_engine[1461]: I20250430 01:01:57.901172 1461 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Apr 30 01:01:57.902158 update_engine[1461]: E20250430 01:01:57.901944 1461 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Apr 30 01:01:57.902158 update_engine[1461]: I20250430 01:01:57.902055 1461 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Apr 30 01:01:57.902158 update_engine[1461]: I20250430 01:01:57.902071 1461 omaha_request_action.cc:617] Omaha request response: Apr 30 01:01:57.902419 update_engine[1461]: E20250430 01:01:57.902176 1461 omaha_request_action.cc:636] Omaha request network transfer failed. Apr 30 01:01:57.902419 update_engine[1461]: I20250430 01:01:57.902200 1461 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. 
Apr 30 01:01:57.902419 update_engine[1461]: I20250430 01:01:57.902208 1461 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Apr 30 01:01:57.902419 update_engine[1461]: I20250430 01:01:57.902215 1461 update_attempter.cc:306] Processing Done. Apr 30 01:01:57.902419 update_engine[1461]: E20250430 01:01:57.902245 1461 update_attempter.cc:619] Update failed. Apr 30 01:01:57.902419 update_engine[1461]: I20250430 01:01:57.902256 1461 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Apr 30 01:01:57.902419 update_engine[1461]: I20250430 01:01:57.902281 1461 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Apr 30 01:01:57.902419 update_engine[1461]: I20250430 01:01:57.902290 1461 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. Apr 30 01:01:57.902734 update_engine[1461]: I20250430 01:01:57.902529 1461 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Apr 30 01:01:57.902734 update_engine[1461]: I20250430 01:01:57.902570 1461 omaha_request_action.cc:271] Posting an Omaha request to disabled Apr 30 01:01:57.902734 update_engine[1461]: I20250430 01:01:57.902580 1461 omaha_request_action.cc:272] Request: Apr 30 01:01:57.902734 update_engine[1461]: Apr 30 01:01:57.902734 update_engine[1461]: Apr 30 01:01:57.902734 update_engine[1461]: Apr 30 01:01:57.902734 update_engine[1461]: Apr 30 01:01:57.902734 update_engine[1461]: Apr 30 01:01:57.902734 update_engine[1461]: Apr 30 01:01:57.902734 update_engine[1461]: I20250430 01:01:57.902588 1461 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Apr 30 01:01:57.903098 update_engine[1461]: I20250430 01:01:57.902838 1461 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Apr 30 01:01:57.903152 update_engine[1461]: I20250430 01:01:57.903098 1461 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Apr 30 01:01:57.903493 locksmithd[1515]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Apr 30 01:01:57.903957 update_engine[1461]: E20250430 01:01:57.903908 1461 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Apr 30 01:01:57.903957 update_engine[1461]: I20250430 01:01:57.904011 1461 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Apr 30 01:01:57.903957 update_engine[1461]: I20250430 01:01:57.904031 1461 omaha_request_action.cc:617] Omaha request response: Apr 30 01:01:57.903957 update_engine[1461]: I20250430 01:01:57.904044 1461 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Apr 30 01:01:57.904198 update_engine[1461]: I20250430 01:01:57.904054 1461 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Apr 30 01:01:57.904198 update_engine[1461]: I20250430 01:01:57.904063 1461 update_attempter.cc:306] Processing Done. Apr 30 01:01:57.904198 update_engine[1461]: I20250430 01:01:57.904076 1461 update_attempter.cc:310] Error event sent. 
Apr 30 01:01:57.904198 update_engine[1461]: I20250430 01:01:57.904095 1461 update_check_scheduler.cc:74] Next update check in 42m54s Apr 30 01:01:57.904718 locksmithd[1515]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Apr 30 01:02:25.477687 kubelet[2679]: I0430 01:02:25.477575 2679 ???:1] "http: TLS handshake error from 162.142.125.196:45298: tls: client offered only unsupported versions: [302 301]" Apr 30 01:02:28.841299 kubelet[2679]: I0430 01:02:28.841251 2679 ???:1] "http: TLS handshake error from 162.142.125.196:45304: tls: client offered only unsupported versions: [301]" Apr 30 01:02:32.591546 kubelet[2679]: I0430 01:02:32.591398 2679 ???:1] "http: TLS handshake error from 162.142.125.196:49824: tls: client offered only unsupported versions: []" Apr 30 01:02:50.796781 systemd[1]: Started sshd@7-88.198.162.73:22-139.178.68.195:50672.service - OpenSSH per-connection server daemon (139.178.68.195:50672). Apr 30 01:02:51.790293 sshd[4093]: Accepted publickey for core from 139.178.68.195 port 50672 ssh2: RSA SHA256:ACLXUt+7uFWNZVvklpgswHu5AM5+eT4ezI3y1kPpVUY Apr 30 01:02:51.791615 sshd[4093]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 01:02:51.799630 systemd-logind[1460]: New session 8 of user core. Apr 30 01:02:51.802436 systemd[1]: Started session-8.scope - Session 8 of User core. Apr 30 01:02:52.566455 sshd[4093]: pam_unix(sshd:session): session closed for user core Apr 30 01:02:52.573298 systemd[1]: sshd@7-88.198.162.73:22-139.178.68.195:50672.service: Deactivated successfully. Apr 30 01:02:52.576282 systemd[1]: session-8.scope: Deactivated successfully. Apr 30 01:02:52.577222 systemd-logind[1460]: Session 8 logged out. Waiting for processes to exit. Apr 30 01:02:52.578534 systemd-logind[1460]: Removed session 8. 
Apr 30 01:02:54.984818 kubelet[2679]: I0430 01:02:54.984747 2679 ???:1] "http: TLS handshake error from 162.142.125.196:46620: client sent an HTTP request to an HTTPS server" Apr 30 01:02:57.739656 systemd[1]: Started sshd@8-88.198.162.73:22-139.178.68.195:40390.service - OpenSSH per-connection server daemon (139.178.68.195:40390). Apr 30 01:02:58.735388 sshd[4108]: Accepted publickey for core from 139.178.68.195 port 40390 ssh2: RSA SHA256:ACLXUt+7uFWNZVvklpgswHu5AM5+eT4ezI3y1kPpVUY Apr 30 01:02:58.738842 sshd[4108]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 01:02:58.745342 systemd-logind[1460]: New session 9 of user core. Apr 30 01:02:58.751054 systemd[1]: Started session-9.scope - Session 9 of User core. Apr 30 01:02:59.497447 sshd[4108]: pam_unix(sshd:session): session closed for user core Apr 30 01:02:59.502783 systemd[1]: sshd@8-88.198.162.73:22-139.178.68.195:40390.service: Deactivated successfully. Apr 30 01:02:59.505714 systemd[1]: session-9.scope: Deactivated successfully. Apr 30 01:02:59.509076 systemd-logind[1460]: Session 9 logged out. Waiting for processes to exit. Apr 30 01:02:59.510530 systemd-logind[1460]: Removed session 9. Apr 30 01:03:04.674673 systemd[1]: Started sshd@9-88.198.162.73:22-139.178.68.195:40396.service - OpenSSH per-connection server daemon (139.178.68.195:40396). Apr 30 01:03:05.665496 sshd[4122]: Accepted publickey for core from 139.178.68.195 port 40396 ssh2: RSA SHA256:ACLXUt+7uFWNZVvklpgswHu5AM5+eT4ezI3y1kPpVUY Apr 30 01:03:05.668497 sshd[4122]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 01:03:05.674422 systemd-logind[1460]: New session 10 of user core. Apr 30 01:03:05.678486 systemd[1]: Started session-10.scope - Session 10 of User core. Apr 30 01:03:06.441753 sshd[4122]: pam_unix(sshd:session): session closed for user core Apr 30 01:03:06.446417 systemd-logind[1460]: Session 10 logged out. Waiting for processes to exit. 
Apr 30 01:03:06.448810 systemd[1]: sshd@9-88.198.162.73:22-139.178.68.195:40396.service: Deactivated successfully. Apr 30 01:03:06.452102 systemd[1]: session-10.scope: Deactivated successfully. Apr 30 01:03:06.454012 systemd-logind[1460]: Removed session 10. Apr 30 01:03:06.620865 systemd[1]: Started sshd@10-88.198.162.73:22-139.178.68.195:41296.service - OpenSSH per-connection server daemon (139.178.68.195:41296). Apr 30 01:03:07.605937 sshd[4135]: Accepted publickey for core from 139.178.68.195 port 41296 ssh2: RSA SHA256:ACLXUt+7uFWNZVvklpgswHu5AM5+eT4ezI3y1kPpVUY Apr 30 01:03:07.609624 sshd[4135]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 01:03:07.615069 systemd-logind[1460]: New session 11 of user core. Apr 30 01:03:07.624980 systemd[1]: Started session-11.scope - Session 11 of User core. Apr 30 01:03:08.402533 sshd[4135]: pam_unix(sshd:session): session closed for user core Apr 30 01:03:08.407884 systemd[1]: sshd@10-88.198.162.73:22-139.178.68.195:41296.service: Deactivated successfully. Apr 30 01:03:08.410333 systemd[1]: session-11.scope: Deactivated successfully. Apr 30 01:03:08.411365 systemd-logind[1460]: Session 11 logged out. Waiting for processes to exit. Apr 30 01:03:08.412830 systemd-logind[1460]: Removed session 11. Apr 30 01:03:08.584612 systemd[1]: Started sshd@11-88.198.162.73:22-139.178.68.195:41300.service - OpenSSH per-connection server daemon (139.178.68.195:41300). Apr 30 01:03:09.572002 sshd[4146]: Accepted publickey for core from 139.178.68.195 port 41300 ssh2: RSA SHA256:ACLXUt+7uFWNZVvklpgswHu5AM5+eT4ezI3y1kPpVUY Apr 30 01:03:09.574671 sshd[4146]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 01:03:09.582010 systemd-logind[1460]: New session 12 of user core. Apr 30 01:03:09.592470 systemd[1]: Started session-12.scope - Session 12 of User core. 
Apr 30 01:03:10.322660 sshd[4146]: pam_unix(sshd:session): session closed for user core Apr 30 01:03:10.328551 systemd-logind[1460]: Session 12 logged out. Waiting for processes to exit. Apr 30 01:03:10.329321 systemd[1]: sshd@11-88.198.162.73:22-139.178.68.195:41300.service: Deactivated successfully. Apr 30 01:03:10.331980 systemd[1]: session-12.scope: Deactivated successfully. Apr 30 01:03:10.333641 systemd-logind[1460]: Removed session 12. Apr 30 01:03:15.509783 systemd[1]: Started sshd@12-88.198.162.73:22-139.178.68.195:42120.service - OpenSSH per-connection server daemon (139.178.68.195:42120). Apr 30 01:03:16.485699 sshd[4161]: Accepted publickey for core from 139.178.68.195 port 42120 ssh2: RSA SHA256:ACLXUt+7uFWNZVvklpgswHu5AM5+eT4ezI3y1kPpVUY Apr 30 01:03:16.488087 sshd[4161]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 01:03:16.494995 systemd-logind[1460]: New session 13 of user core. Apr 30 01:03:16.498445 systemd[1]: Started session-13.scope - Session 13 of User core. Apr 30 01:03:17.242155 sshd[4161]: pam_unix(sshd:session): session closed for user core Apr 30 01:03:17.246411 systemd-logind[1460]: Session 13 logged out. Waiting for processes to exit. Apr 30 01:03:17.246583 systemd[1]: sshd@12-88.198.162.73:22-139.178.68.195:42120.service: Deactivated successfully. Apr 30 01:03:17.248933 systemd[1]: session-13.scope: Deactivated successfully. Apr 30 01:03:17.252039 systemd-logind[1460]: Removed session 13. Apr 30 01:03:17.422629 systemd[1]: Started sshd@13-88.198.162.73:22-139.178.68.195:42134.service - OpenSSH per-connection server daemon (139.178.68.195:42134). Apr 30 01:03:18.416722 sshd[4174]: Accepted publickey for core from 139.178.68.195 port 42134 ssh2: RSA SHA256:ACLXUt+7uFWNZVvklpgswHu5AM5+eT4ezI3y1kPpVUY Apr 30 01:03:18.418866 sshd[4174]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 01:03:18.423972 systemd-logind[1460]: New session 14 of user core. 
Apr 30 01:03:18.429503 systemd[1]: Started session-14.scope - Session 14 of User core. Apr 30 01:03:19.225751 sshd[4174]: pam_unix(sshd:session): session closed for user core Apr 30 01:03:19.231935 systemd[1]: sshd@13-88.198.162.73:22-139.178.68.195:42134.service: Deactivated successfully. Apr 30 01:03:19.235475 systemd[1]: session-14.scope: Deactivated successfully. Apr 30 01:03:19.236653 systemd-logind[1460]: Session 14 logged out. Waiting for processes to exit. Apr 30 01:03:19.238016 systemd-logind[1460]: Removed session 14. Apr 30 01:03:19.397561 systemd[1]: Started sshd@14-88.198.162.73:22-139.178.68.195:42138.service - OpenSSH per-connection server daemon (139.178.68.195:42138). Apr 30 01:03:20.372908 sshd[4187]: Accepted publickey for core from 139.178.68.195 port 42138 ssh2: RSA SHA256:ACLXUt+7uFWNZVvklpgswHu5AM5+eT4ezI3y1kPpVUY Apr 30 01:03:20.374942 sshd[4187]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 01:03:20.381158 systemd-logind[1460]: New session 15 of user core. Apr 30 01:03:20.387475 systemd[1]: Started session-15.scope - Session 15 of User core. Apr 30 01:03:22.072923 sshd[4187]: pam_unix(sshd:session): session closed for user core Apr 30 01:03:22.077945 systemd[1]: sshd@14-88.198.162.73:22-139.178.68.195:42138.service: Deactivated successfully. Apr 30 01:03:22.081579 systemd[1]: session-15.scope: Deactivated successfully. Apr 30 01:03:22.082507 systemd-logind[1460]: Session 15 logged out. Waiting for processes to exit. Apr 30 01:03:22.084118 systemd-logind[1460]: Removed session 15. Apr 30 01:03:22.258910 systemd[1]: Started sshd@15-88.198.162.73:22-139.178.68.195:42144.service - OpenSSH per-connection server daemon (139.178.68.195:42144). 
Apr 30 01:03:23.249913 sshd[4206]: Accepted publickey for core from 139.178.68.195 port 42144 ssh2: RSA SHA256:ACLXUt+7uFWNZVvklpgswHu5AM5+eT4ezI3y1kPpVUY Apr 30 01:03:23.252474 sshd[4206]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 01:03:23.259645 systemd-logind[1460]: New session 16 of user core. Apr 30 01:03:23.266726 systemd[1]: Started session-16.scope - Session 16 of User core. Apr 30 01:03:24.146315 sshd[4206]: pam_unix(sshd:session): session closed for user core Apr 30 01:03:24.151698 systemd-logind[1460]: Session 16 logged out. Waiting for processes to exit. Apr 30 01:03:24.153892 systemd[1]: sshd@15-88.198.162.73:22-139.178.68.195:42144.service: Deactivated successfully. Apr 30 01:03:24.157019 systemd[1]: session-16.scope: Deactivated successfully. Apr 30 01:03:24.158766 systemd-logind[1460]: Removed session 16. Apr 30 01:03:24.328591 systemd[1]: Started sshd@16-88.198.162.73:22-139.178.68.195:42158.service - OpenSSH per-connection server daemon (139.178.68.195:42158). Apr 30 01:03:25.327314 sshd[4217]: Accepted publickey for core from 139.178.68.195 port 42158 ssh2: RSA SHA256:ACLXUt+7uFWNZVvklpgswHu5AM5+eT4ezI3y1kPpVUY Apr 30 01:03:25.329212 sshd[4217]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 01:03:25.335915 systemd-logind[1460]: New session 17 of user core. Apr 30 01:03:25.339450 systemd[1]: Started session-17.scope - Session 17 of User core. Apr 30 01:03:26.081567 sshd[4217]: pam_unix(sshd:session): session closed for user core Apr 30 01:03:26.087214 systemd-logind[1460]: Session 17 logged out. Waiting for processes to exit. Apr 30 01:03:26.087673 systemd[1]: sshd@16-88.198.162.73:22-139.178.68.195:42158.service: Deactivated successfully. Apr 30 01:03:26.090127 systemd[1]: session-17.scope: Deactivated successfully. Apr 30 01:03:26.092184 systemd-logind[1460]: Removed session 17. 
Apr 30 01:03:31.260719 systemd[1]: Started sshd@17-88.198.162.73:22-139.178.68.195:36438.service - OpenSSH per-connection server daemon (139.178.68.195:36438). Apr 30 01:03:32.247354 sshd[4231]: Accepted publickey for core from 139.178.68.195 port 36438 ssh2: RSA SHA256:ACLXUt+7uFWNZVvklpgswHu5AM5+eT4ezI3y1kPpVUY Apr 30 01:03:32.248797 sshd[4231]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 01:03:32.255361 systemd-logind[1460]: New session 18 of user core. Apr 30 01:03:32.259476 systemd[1]: Started session-18.scope - Session 18 of User core. Apr 30 01:03:33.001506 sshd[4231]: pam_unix(sshd:session): session closed for user core Apr 30 01:03:33.007082 systemd-logind[1460]: Session 18 logged out. Waiting for processes to exit. Apr 30 01:03:33.007883 systemd[1]: sshd@17-88.198.162.73:22-139.178.68.195:36438.service: Deactivated successfully. Apr 30 01:03:33.012147 systemd[1]: session-18.scope: Deactivated successfully. Apr 30 01:03:33.015127 systemd-logind[1460]: Removed session 18. Apr 30 01:03:38.177332 systemd[1]: Started sshd@18-88.198.162.73:22-139.178.68.195:38556.service - OpenSSH per-connection server daemon (139.178.68.195:38556). Apr 30 01:03:39.159291 sshd[4244]: Accepted publickey for core from 139.178.68.195 port 38556 ssh2: RSA SHA256:ACLXUt+7uFWNZVvklpgswHu5AM5+eT4ezI3y1kPpVUY Apr 30 01:03:39.162619 sshd[4244]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 01:03:39.170543 systemd-logind[1460]: New session 19 of user core. Apr 30 01:03:39.179566 systemd[1]: Started session-19.scope - Session 19 of User core. Apr 30 01:03:39.905787 sshd[4244]: pam_unix(sshd:session): session closed for user core Apr 30 01:03:39.911496 systemd[1]: sshd@18-88.198.162.73:22-139.178.68.195:38556.service: Deactivated successfully. Apr 30 01:03:39.915381 systemd[1]: session-19.scope: Deactivated successfully. Apr 30 01:03:39.916798 systemd-logind[1460]: Session 19 logged out. 
Waiting for processes to exit. Apr 30 01:03:39.917762 systemd-logind[1460]: Removed session 19. Apr 30 01:03:40.084625 systemd[1]: Started sshd@19-88.198.162.73:22-139.178.68.195:38572.service - OpenSSH per-connection server daemon (139.178.68.195:38572). Apr 30 01:03:41.053548 sshd[4259]: Accepted publickey for core from 139.178.68.195 port 38572 ssh2: RSA SHA256:ACLXUt+7uFWNZVvklpgswHu5AM5+eT4ezI3y1kPpVUY Apr 30 01:03:41.055558 sshd[4259]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 01:03:41.060194 systemd-logind[1460]: New session 20 of user core. Apr 30 01:03:41.067526 systemd[1]: Started session-20.scope - Session 20 of User core. Apr 30 01:03:43.805635 containerd[1485]: time="2025-04-30T01:03:43.803907432Z" level=info msg="StopContainer for \"277ef2f89eb1adda336da0e9ebfdb2131cc88fcc9408aa004afd73fe861ddf86\" with timeout 30 (s)" Apr 30 01:03:43.808851 systemd[1]: run-containerd-runc-k8s.io-3996524ae489f385d28a2b58d10c10a65638715ddcd28d6912668522411dd566-runc.mHp2o4.mount: Deactivated successfully. Apr 30 01:03:43.811773 containerd[1485]: time="2025-04-30T01:03:43.806546082Z" level=info msg="Stop container \"277ef2f89eb1adda336da0e9ebfdb2131cc88fcc9408aa004afd73fe861ddf86\" with signal terminated" Apr 30 01:03:43.829833 containerd[1485]: time="2025-04-30T01:03:43.829784116Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Apr 30 01:03:43.841463 systemd[1]: cri-containerd-277ef2f89eb1adda336da0e9ebfdb2131cc88fcc9408aa004afd73fe861ddf86.scope: Deactivated successfully. 
Apr 30 01:03:43.844295 containerd[1485]: time="2025-04-30T01:03:43.844133887Z" level=info msg="StopContainer for \"3996524ae489f385d28a2b58d10c10a65638715ddcd28d6912668522411dd566\" with timeout 2 (s)" Apr 30 01:03:43.844804 containerd[1485]: time="2025-04-30T01:03:43.844749428Z" level=info msg="Stop container \"3996524ae489f385d28a2b58d10c10a65638715ddcd28d6912668522411dd566\" with signal terminated" Apr 30 01:03:43.856922 systemd-networkd[1381]: lxc_health: Link DOWN Apr 30 01:03:43.856929 systemd-networkd[1381]: lxc_health: Lost carrier Apr 30 01:03:43.877058 systemd[1]: cri-containerd-3996524ae489f385d28a2b58d10c10a65638715ddcd28d6912668522411dd566.scope: Deactivated successfully. Apr 30 01:03:43.877895 systemd[1]: cri-containerd-3996524ae489f385d28a2b58d10c10a65638715ddcd28d6912668522411dd566.scope: Consumed 8.004s CPU time. Apr 30 01:03:43.885929 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-277ef2f89eb1adda336da0e9ebfdb2131cc88fcc9408aa004afd73fe861ddf86-rootfs.mount: Deactivated successfully. Apr 30 01:03:43.898603 containerd[1485]: time="2025-04-30T01:03:43.898441862Z" level=info msg="shim disconnected" id=277ef2f89eb1adda336da0e9ebfdb2131cc88fcc9408aa004afd73fe861ddf86 namespace=k8s.io Apr 30 01:03:43.898603 containerd[1485]: time="2025-04-30T01:03:43.898500304Z" level=warning msg="cleaning up after shim disconnected" id=277ef2f89eb1adda336da0e9ebfdb2131cc88fcc9408aa004afd73fe861ddf86 namespace=k8s.io Apr 30 01:03:43.898603 containerd[1485]: time="2025-04-30T01:03:43.898508465Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 01:03:43.914186 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3996524ae489f385d28a2b58d10c10a65638715ddcd28d6912668522411dd566-rootfs.mount: Deactivated successfully. 
Apr 30 01:03:43.923303 containerd[1485]: time="2025-04-30T01:03:43.923056463Z" level=info msg="shim disconnected" id=3996524ae489f385d28a2b58d10c10a65638715ddcd28d6912668522411dd566 namespace=k8s.io Apr 30 01:03:43.923563 containerd[1485]: time="2025-04-30T01:03:43.923199468Z" level=warning msg="cleaning up after shim disconnected" id=3996524ae489f385d28a2b58d10c10a65638715ddcd28d6912668522411dd566 namespace=k8s.io Apr 30 01:03:43.923921 containerd[1485]: time="2025-04-30T01:03:43.923416276Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 01:03:43.928725 containerd[1485]: time="2025-04-30T01:03:43.928688736Z" level=info msg="StopContainer for \"277ef2f89eb1adda336da0e9ebfdb2131cc88fcc9408aa004afd73fe861ddf86\" returns successfully" Apr 30 01:03:43.929604 containerd[1485]: time="2025-04-30T01:03:43.929577246Z" level=info msg="StopPodSandbox for \"d12a285659da6f0f58b379fc3c05780c7675092331135c38ce0eb0ddf09229ad\"" Apr 30 01:03:43.929691 containerd[1485]: time="2025-04-30T01:03:43.929622848Z" level=info msg="Container to stop \"277ef2f89eb1adda336da0e9ebfdb2131cc88fcc9408aa004afd73fe861ddf86\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 30 01:03:43.933009 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d12a285659da6f0f58b379fc3c05780c7675092331135c38ce0eb0ddf09229ad-shm.mount: Deactivated successfully. Apr 30 01:03:43.942275 systemd[1]: cri-containerd-d12a285659da6f0f58b379fc3c05780c7675092331135c38ce0eb0ddf09229ad.scope: Deactivated successfully. 
Apr 30 01:03:43.950235 containerd[1485]: time="2025-04-30T01:03:43.950177750Z" level=info msg="StopContainer for \"3996524ae489f385d28a2b58d10c10a65638715ddcd28d6912668522411dd566\" returns successfully" Apr 30 01:03:43.950860 containerd[1485]: time="2025-04-30T01:03:43.950704488Z" level=info msg="StopPodSandbox for \"2a796d2b27f4bfa84059d0be769f3a9c3add4448dd997fce4558e1e7a3635e35\"" Apr 30 01:03:43.950860 containerd[1485]: time="2025-04-30T01:03:43.950753530Z" level=info msg="Container to stop \"d3498da2e6b9496dc614651d7895f9184596f90a56753f4153547e91d77bd613\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 30 01:03:43.950860 containerd[1485]: time="2025-04-30T01:03:43.950767250Z" level=info msg="Container to stop \"311cd946d26aa779a27634f40adbc86da20d9a1c128e6325ea634a064abdbb4c\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 30 01:03:43.950860 containerd[1485]: time="2025-04-30T01:03:43.950777931Z" level=info msg="Container to stop \"3996524ae489f385d28a2b58d10c10a65638715ddcd28d6912668522411dd566\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 30 01:03:43.950860 containerd[1485]: time="2025-04-30T01:03:43.950790211Z" level=info msg="Container to stop \"bd386815d411a847cc3cd0764bb0f916e309e9550da38c8068b14b046a40fe54\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 30 01:03:43.950860 containerd[1485]: time="2025-04-30T01:03:43.950800171Z" level=info msg="Container to stop \"0cab5484701f0e86a6f3bc817fafb10fe2270b588d76de157fbcc2d384951442\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 30 01:03:43.959751 systemd[1]: cri-containerd-2a796d2b27f4bfa84059d0be769f3a9c3add4448dd997fce4558e1e7a3635e35.scope: Deactivated successfully. 
Apr 30 01:03:43.976830 containerd[1485]: time="2025-04-30T01:03:43.976676696Z" level=info msg="shim disconnected" id=d12a285659da6f0f58b379fc3c05780c7675092331135c38ce0eb0ddf09229ad namespace=k8s.io Apr 30 01:03:43.976830 containerd[1485]: time="2025-04-30T01:03:43.976735738Z" level=warning msg="cleaning up after shim disconnected" id=d12a285659da6f0f58b379fc3c05780c7675092331135c38ce0eb0ddf09229ad namespace=k8s.io Apr 30 01:03:43.976830 containerd[1485]: time="2025-04-30T01:03:43.976744978Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 01:03:43.988813 containerd[1485]: time="2025-04-30T01:03:43.988594383Z" level=info msg="shim disconnected" id=2a796d2b27f4bfa84059d0be769f3a9c3add4448dd997fce4558e1e7a3635e35 namespace=k8s.io Apr 30 01:03:43.988813 containerd[1485]: time="2025-04-30T01:03:43.988708467Z" level=warning msg="cleaning up after shim disconnected" id=2a796d2b27f4bfa84059d0be769f3a9c3add4448dd997fce4558e1e7a3635e35 namespace=k8s.io Apr 30 01:03:43.988813 containerd[1485]: time="2025-04-30T01:03:43.988717547Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 01:03:44.000656 containerd[1485]: time="2025-04-30T01:03:44.000452268Z" level=info msg="TearDown network for sandbox \"d12a285659da6f0f58b379fc3c05780c7675092331135c38ce0eb0ddf09229ad\" successfully" Apr 30 01:03:44.000656 containerd[1485]: time="2025-04-30T01:03:44.000492710Z" level=info msg="StopPodSandbox for \"d12a285659da6f0f58b379fc3c05780c7675092331135c38ce0eb0ddf09229ad\" returns successfully" Apr 30 01:03:44.009268 containerd[1485]: time="2025-04-30T01:03:44.008169092Z" level=info msg="TearDown network for sandbox \"2a796d2b27f4bfa84059d0be769f3a9c3add4448dd997fce4558e1e7a3635e35\" successfully" Apr 30 01:03:44.009268 containerd[1485]: time="2025-04-30T01:03:44.008211614Z" level=info msg="StopPodSandbox for \"2a796d2b27f4bfa84059d0be769f3a9c3add4448dd997fce4558e1e7a3635e35\" returns successfully" Apr 30 01:03:44.153649 kubelet[2679]: I0430 01:03:44.152243 2679 
reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f9682ec3-8b06-480d-8336-a286215ab182-cilium-config-path\") pod \"f9682ec3-8b06-480d-8336-a286215ab182\" (UID: \"f9682ec3-8b06-480d-8336-a286215ab182\") " Apr 30 01:03:44.153649 kubelet[2679]: I0430 01:03:44.152324 2679 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f9682ec3-8b06-480d-8336-a286215ab182-clustermesh-secrets\") pod \"f9682ec3-8b06-480d-8336-a286215ab182\" (UID: \"f9682ec3-8b06-480d-8336-a286215ab182\") " Apr 30 01:03:44.153649 kubelet[2679]: I0430 01:03:44.152354 2679 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f9682ec3-8b06-480d-8336-a286215ab182-hostproc\") pod \"f9682ec3-8b06-480d-8336-a286215ab182\" (UID: \"f9682ec3-8b06-480d-8336-a286215ab182\") " Apr 30 01:03:44.153649 kubelet[2679]: I0430 01:03:44.152417 2679 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f9682ec3-8b06-480d-8336-a286215ab182-hubble-tls\") pod \"f9682ec3-8b06-480d-8336-a286215ab182\" (UID: \"f9682ec3-8b06-480d-8336-a286215ab182\") " Apr 30 01:03:44.153649 kubelet[2679]: I0430 01:03:44.152441 2679 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f9682ec3-8b06-480d-8336-a286215ab182-xtables-lock\") pod \"f9682ec3-8b06-480d-8336-a286215ab182\" (UID: \"f9682ec3-8b06-480d-8336-a286215ab182\") " Apr 30 01:03:44.153649 kubelet[2679]: I0430 01:03:44.152466 2679 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4p7cj\" (UniqueName: \"kubernetes.io/projected/f9682ec3-8b06-480d-8336-a286215ab182-kube-api-access-4p7cj\") pod \"f9682ec3-8b06-480d-8336-a286215ab182\" 
(UID: \"f9682ec3-8b06-480d-8336-a286215ab182\") " Apr 30 01:03:44.155122 kubelet[2679]: I0430 01:03:44.152491 2679 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f9682ec3-8b06-480d-8336-a286215ab182-host-proc-sys-kernel\") pod \"f9682ec3-8b06-480d-8336-a286215ab182\" (UID: \"f9682ec3-8b06-480d-8336-a286215ab182\") " Apr 30 01:03:44.155122 kubelet[2679]: I0430 01:03:44.152515 2679 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f9682ec3-8b06-480d-8336-a286215ab182-cni-path\") pod \"f9682ec3-8b06-480d-8336-a286215ab182\" (UID: \"f9682ec3-8b06-480d-8336-a286215ab182\") " Apr 30 01:03:44.155122 kubelet[2679]: I0430 01:03:44.152541 2679 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fnbpd\" (UniqueName: \"kubernetes.io/projected/0515d739-6d92-4318-a36a-6a9e3cd51ecf-kube-api-access-fnbpd\") pod \"0515d739-6d92-4318-a36a-6a9e3cd51ecf\" (UID: \"0515d739-6d92-4318-a36a-6a9e3cd51ecf\") " Apr 30 01:03:44.155122 kubelet[2679]: I0430 01:03:44.152567 2679 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0515d739-6d92-4318-a36a-6a9e3cd51ecf-cilium-config-path\") pod \"0515d739-6d92-4318-a36a-6a9e3cd51ecf\" (UID: \"0515d739-6d92-4318-a36a-6a9e3cd51ecf\") " Apr 30 01:03:44.155122 kubelet[2679]: I0430 01:03:44.152593 2679 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f9682ec3-8b06-480d-8336-a286215ab182-etc-cni-netd\") pod \"f9682ec3-8b06-480d-8336-a286215ab182\" (UID: \"f9682ec3-8b06-480d-8336-a286215ab182\") " Apr 30 01:03:44.155122 kubelet[2679]: I0430 01:03:44.152617 2679 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" 
(UniqueName: \"kubernetes.io/host-path/f9682ec3-8b06-480d-8336-a286215ab182-host-proc-sys-net\") pod \"f9682ec3-8b06-480d-8336-a286215ab182\" (UID: \"f9682ec3-8b06-480d-8336-a286215ab182\") " Apr 30 01:03:44.158065 kubelet[2679]: I0430 01:03:44.152639 2679 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f9682ec3-8b06-480d-8336-a286215ab182-lib-modules\") pod \"f9682ec3-8b06-480d-8336-a286215ab182\" (UID: \"f9682ec3-8b06-480d-8336-a286215ab182\") " Apr 30 01:03:44.158065 kubelet[2679]: I0430 01:03:44.152665 2679 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f9682ec3-8b06-480d-8336-a286215ab182-cilium-cgroup\") pod \"f9682ec3-8b06-480d-8336-a286215ab182\" (UID: \"f9682ec3-8b06-480d-8336-a286215ab182\") " Apr 30 01:03:44.158065 kubelet[2679]: I0430 01:03:44.152722 2679 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f9682ec3-8b06-480d-8336-a286215ab182-cilium-run\") pod \"f9682ec3-8b06-480d-8336-a286215ab182\" (UID: \"f9682ec3-8b06-480d-8336-a286215ab182\") " Apr 30 01:03:44.158065 kubelet[2679]: I0430 01:03:44.152747 2679 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f9682ec3-8b06-480d-8336-a286215ab182-bpf-maps\") pod \"f9682ec3-8b06-480d-8336-a286215ab182\" (UID: \"f9682ec3-8b06-480d-8336-a286215ab182\") " Apr 30 01:03:44.158065 kubelet[2679]: I0430 01:03:44.152847 2679 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f9682ec3-8b06-480d-8336-a286215ab182-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "f9682ec3-8b06-480d-8336-a286215ab182" (UID: "f9682ec3-8b06-480d-8336-a286215ab182"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 30 01:03:44.158065 kubelet[2679]: I0430 01:03:44.153327 2679 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f9682ec3-8b06-480d-8336-a286215ab182-cni-path" (OuterVolumeSpecName: "cni-path") pod "f9682ec3-8b06-480d-8336-a286215ab182" (UID: "f9682ec3-8b06-480d-8336-a286215ab182"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 30 01:03:44.158212 kubelet[2679]: I0430 01:03:44.155682 2679 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f9682ec3-8b06-480d-8336-a286215ab182-hostproc" (OuterVolumeSpecName: "hostproc") pod "f9682ec3-8b06-480d-8336-a286215ab182" (UID: "f9682ec3-8b06-480d-8336-a286215ab182"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 30 01:03:44.158212 kubelet[2679]: I0430 01:03:44.156964 2679 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f9682ec3-8b06-480d-8336-a286215ab182-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "f9682ec3-8b06-480d-8336-a286215ab182" (UID: "f9682ec3-8b06-480d-8336-a286215ab182"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 30 01:03:44.158634 kubelet[2679]: I0430 01:03:44.158598 2679 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f9682ec3-8b06-480d-8336-a286215ab182-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "f9682ec3-8b06-480d-8336-a286215ab182" (UID: "f9682ec3-8b06-480d-8336-a286215ab182"). InnerVolumeSpecName "host-proc-sys-kernel". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 30 01:03:44.161563 kubelet[2679]: I0430 01:03:44.161510 2679 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f9682ec3-8b06-480d-8336-a286215ab182-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "f9682ec3-8b06-480d-8336-a286215ab182" (UID: "f9682ec3-8b06-480d-8336-a286215ab182"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 30 01:03:44.161712 kubelet[2679]: I0430 01:03:44.161641 2679 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f9682ec3-8b06-480d-8336-a286215ab182-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "f9682ec3-8b06-480d-8336-a286215ab182" (UID: "f9682ec3-8b06-480d-8336-a286215ab182"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 30 01:03:44.161712 kubelet[2679]: I0430 01:03:44.161662 2679 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f9682ec3-8b06-480d-8336-a286215ab182-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "f9682ec3-8b06-480d-8336-a286215ab182" (UID: "f9682ec3-8b06-480d-8336-a286215ab182"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 30 01:03:44.161837 kubelet[2679]: I0430 01:03:44.161812 2679 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f9682ec3-8b06-480d-8336-a286215ab182-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "f9682ec3-8b06-480d-8336-a286215ab182" (UID: "f9682ec3-8b06-480d-8336-a286215ab182"). InnerVolumeSpecName "cilium-cgroup". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 30 01:03:44.161869 kubelet[2679]: I0430 01:03:44.161849 2679 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f9682ec3-8b06-480d-8336-a286215ab182-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "f9682ec3-8b06-480d-8336-a286215ab182" (UID: "f9682ec3-8b06-480d-8336-a286215ab182"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 30 01:03:44.161869 kubelet[2679]: I0430 01:03:44.161677 2679 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f9682ec3-8b06-480d-8336-a286215ab182-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "f9682ec3-8b06-480d-8336-a286215ab182" (UID: "f9682ec3-8b06-480d-8336-a286215ab182"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 30 01:03:44.163186 kubelet[2679]: I0430 01:03:44.163151 2679 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f9682ec3-8b06-480d-8336-a286215ab182-kube-api-access-4p7cj" (OuterVolumeSpecName: "kube-api-access-4p7cj") pod "f9682ec3-8b06-480d-8336-a286215ab182" (UID: "f9682ec3-8b06-480d-8336-a286215ab182"). InnerVolumeSpecName "kube-api-access-4p7cj". PluginName "kubernetes.io/projected", VolumeGIDValue "" Apr 30 01:03:44.163540 kubelet[2679]: I0430 01:03:44.163458 2679 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0515d739-6d92-4318-a36a-6a9e3cd51ecf-kube-api-access-fnbpd" (OuterVolumeSpecName: "kube-api-access-fnbpd") pod "0515d739-6d92-4318-a36a-6a9e3cd51ecf" (UID: "0515d739-6d92-4318-a36a-6a9e3cd51ecf"). InnerVolumeSpecName "kube-api-access-fnbpd". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Apr 30 01:03:44.163592 kubelet[2679]: I0430 01:03:44.163552 2679 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f9682ec3-8b06-480d-8336-a286215ab182-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "f9682ec3-8b06-480d-8336-a286215ab182" (UID: "f9682ec3-8b06-480d-8336-a286215ab182"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Apr 30 01:03:44.163647 kubelet[2679]: I0430 01:03:44.163624 2679 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0515d739-6d92-4318-a36a-6a9e3cd51ecf-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "0515d739-6d92-4318-a36a-6a9e3cd51ecf" (UID: "0515d739-6d92-4318-a36a-6a9e3cd51ecf"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 30 01:03:44.163703 kubelet[2679]: I0430 01:03:44.163658 2679 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f9682ec3-8b06-480d-8336-a286215ab182-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "f9682ec3-8b06-480d-8336-a286215ab182" (UID: "f9682ec3-8b06-480d-8336-a286215ab182"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Apr 30 01:03:44.253891 kubelet[2679]: I0430 01:03:44.253814 2679 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f9682ec3-8b06-480d-8336-a286215ab182-bpf-maps\") on node \"ci-4081-3-3-a-adb74c37b4\" DevicePath \"\"" Apr 30 01:03:44.253891 kubelet[2679]: I0430 01:03:44.253868 2679 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f9682ec3-8b06-480d-8336-a286215ab182-cilium-config-path\") on node \"ci-4081-3-3-a-adb74c37b4\" DevicePath \"\"" Apr 30 01:03:44.253891 kubelet[2679]: I0430 01:03:44.253892 2679 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f9682ec3-8b06-480d-8336-a286215ab182-clustermesh-secrets\") on node \"ci-4081-3-3-a-adb74c37b4\" DevicePath \"\"" Apr 30 01:03:44.254146 kubelet[2679]: I0430 01:03:44.253910 2679 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f9682ec3-8b06-480d-8336-a286215ab182-hubble-tls\") on node \"ci-4081-3-3-a-adb74c37b4\" DevicePath \"\"" Apr 30 01:03:44.254146 kubelet[2679]: I0430 01:03:44.253926 2679 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f9682ec3-8b06-480d-8336-a286215ab182-hostproc\") on node \"ci-4081-3-3-a-adb74c37b4\" DevicePath \"\"" Apr 30 01:03:44.254146 kubelet[2679]: I0430 01:03:44.253942 2679 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f9682ec3-8b06-480d-8336-a286215ab182-xtables-lock\") on node \"ci-4081-3-3-a-adb74c37b4\" DevicePath \"\"" Apr 30 01:03:44.254146 kubelet[2679]: I0430 01:03:44.253960 2679 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-4p7cj\" (UniqueName: \"kubernetes.io/projected/f9682ec3-8b06-480d-8336-a286215ab182-kube-api-access-4p7cj\") on 
node \"ci-4081-3-3-a-adb74c37b4\" DevicePath \"\"" Apr 30 01:03:44.254146 kubelet[2679]: I0430 01:03:44.253979 2679 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f9682ec3-8b06-480d-8336-a286215ab182-host-proc-sys-kernel\") on node \"ci-4081-3-3-a-adb74c37b4\" DevicePath \"\"" Apr 30 01:03:44.254146 kubelet[2679]: I0430 01:03:44.253996 2679 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f9682ec3-8b06-480d-8336-a286215ab182-cni-path\") on node \"ci-4081-3-3-a-adb74c37b4\" DevicePath \"\"" Apr 30 01:03:44.254146 kubelet[2679]: I0430 01:03:44.254013 2679 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-fnbpd\" (UniqueName: \"kubernetes.io/projected/0515d739-6d92-4318-a36a-6a9e3cd51ecf-kube-api-access-fnbpd\") on node \"ci-4081-3-3-a-adb74c37b4\" DevicePath \"\"" Apr 30 01:03:44.254146 kubelet[2679]: I0430 01:03:44.254029 2679 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f9682ec3-8b06-480d-8336-a286215ab182-etc-cni-netd\") on node \"ci-4081-3-3-a-adb74c37b4\" DevicePath \"\"" Apr 30 01:03:44.254555 kubelet[2679]: I0430 01:03:44.254048 2679 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f9682ec3-8b06-480d-8336-a286215ab182-host-proc-sys-net\") on node \"ci-4081-3-3-a-adb74c37b4\" DevicePath \"\"" Apr 30 01:03:44.254555 kubelet[2679]: I0430 01:03:44.254065 2679 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f9682ec3-8b06-480d-8336-a286215ab182-lib-modules\") on node \"ci-4081-3-3-a-adb74c37b4\" DevicePath \"\"" Apr 30 01:03:44.254555 kubelet[2679]: I0430 01:03:44.254082 2679 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: 
\"kubernetes.io/configmap/0515d739-6d92-4318-a36a-6a9e3cd51ecf-cilium-config-path\") on node \"ci-4081-3-3-a-adb74c37b4\" DevicePath \"\"" Apr 30 01:03:44.254555 kubelet[2679]: I0430 01:03:44.254098 2679 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f9682ec3-8b06-480d-8336-a286215ab182-cilium-run\") on node \"ci-4081-3-3-a-adb74c37b4\" DevicePath \"\"" Apr 30 01:03:44.254555 kubelet[2679]: I0430 01:03:44.254113 2679 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f9682ec3-8b06-480d-8336-a286215ab182-cilium-cgroup\") on node \"ci-4081-3-3-a-adb74c37b4\" DevicePath \"\"" Apr 30 01:03:44.720988 kubelet[2679]: I0430 01:03:44.720124 2679 scope.go:117] "RemoveContainer" containerID="3996524ae489f385d28a2b58d10c10a65638715ddcd28d6912668522411dd566" Apr 30 01:03:44.723160 containerd[1485]: time="2025-04-30T01:03:44.723120540Z" level=info msg="RemoveContainer for \"3996524ae489f385d28a2b58d10c10a65638715ddcd28d6912668522411dd566\"" Apr 30 01:03:44.735359 systemd[1]: Removed slice kubepods-burstable-podf9682ec3_8b06_480d_8336_a286215ab182.slice - libcontainer container kubepods-burstable-podf9682ec3_8b06_480d_8336_a286215ab182.slice. Apr 30 01:03:44.735522 systemd[1]: kubepods-burstable-podf9682ec3_8b06_480d_8336_a286215ab182.slice: Consumed 8.090s CPU time. Apr 30 01:03:44.739335 systemd[1]: Removed slice kubepods-besteffort-pod0515d739_6d92_4318_a36a_6a9e3cd51ecf.slice - libcontainer container kubepods-besteffort-pod0515d739_6d92_4318_a36a_6a9e3cd51ecf.slice. 
Apr 30 01:03:44.741946 containerd[1485]: time="2025-04-30T01:03:44.741840740Z" level=info msg="RemoveContainer for \"3996524ae489f385d28a2b58d10c10a65638715ddcd28d6912668522411dd566\" returns successfully" Apr 30 01:03:44.742874 kubelet[2679]: I0430 01:03:44.742364 2679 scope.go:117] "RemoveContainer" containerID="0cab5484701f0e86a6f3bc817fafb10fe2270b588d76de157fbcc2d384951442" Apr 30 01:03:44.745690 containerd[1485]: time="2025-04-30T01:03:44.744794681Z" level=info msg="RemoveContainer for \"0cab5484701f0e86a6f3bc817fafb10fe2270b588d76de157fbcc2d384951442\"" Apr 30 01:03:44.749171 containerd[1485]: time="2025-04-30T01:03:44.749129149Z" level=info msg="RemoveContainer for \"0cab5484701f0e86a6f3bc817fafb10fe2270b588d76de157fbcc2d384951442\" returns successfully" Apr 30 01:03:44.749625 kubelet[2679]: I0430 01:03:44.749604 2679 scope.go:117] "RemoveContainer" containerID="bd386815d411a847cc3cd0764bb0f916e309e9550da38c8068b14b046a40fe54" Apr 30 01:03:44.751095 containerd[1485]: time="2025-04-30T01:03:44.751062735Z" level=info msg="RemoveContainer for \"bd386815d411a847cc3cd0764bb0f916e309e9550da38c8068b14b046a40fe54\"" Apr 30 01:03:44.754978 containerd[1485]: time="2025-04-30T01:03:44.754931627Z" level=info msg="RemoveContainer for \"bd386815d411a847cc3cd0764bb0f916e309e9550da38c8068b14b046a40fe54\" returns successfully" Apr 30 01:03:44.758544 kubelet[2679]: I0430 01:03:44.757515 2679 scope.go:117] "RemoveContainer" containerID="311cd946d26aa779a27634f40adbc86da20d9a1c128e6325ea634a064abdbb4c" Apr 30 01:03:44.762302 containerd[1485]: time="2025-04-30T01:03:44.761001115Z" level=info msg="RemoveContainer for \"311cd946d26aa779a27634f40adbc86da20d9a1c128e6325ea634a064abdbb4c\"" Apr 30 01:03:44.767896 containerd[1485]: time="2025-04-30T01:03:44.766584626Z" level=info msg="RemoveContainer for \"311cd946d26aa779a27634f40adbc86da20d9a1c128e6325ea634a064abdbb4c\" returns successfully" Apr 30 01:03:44.768010 kubelet[2679]: I0430 01:03:44.767449 2679 scope.go:117] 
"RemoveContainer" containerID="d3498da2e6b9496dc614651d7895f9184596f90a56753f4153547e91d77bd613" Apr 30 01:03:44.770980 containerd[1485]: time="2025-04-30T01:03:44.770938575Z" level=info msg="RemoveContainer for \"d3498da2e6b9496dc614651d7895f9184596f90a56753f4153547e91d77bd613\"" Apr 30 01:03:44.775090 containerd[1485]: time="2025-04-30T01:03:44.775046555Z" level=info msg="RemoveContainer for \"d3498da2e6b9496dc614651d7895f9184596f90a56753f4153547e91d77bd613\" returns successfully" Apr 30 01:03:44.775435 kubelet[2679]: I0430 01:03:44.775367 2679 scope.go:117] "RemoveContainer" containerID="3996524ae489f385d28a2b58d10c10a65638715ddcd28d6912668522411dd566" Apr 30 01:03:44.775732 containerd[1485]: time="2025-04-30T01:03:44.775597414Z" level=error msg="ContainerStatus for \"3996524ae489f385d28a2b58d10c10a65638715ddcd28d6912668522411dd566\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"3996524ae489f385d28a2b58d10c10a65638715ddcd28d6912668522411dd566\": not found" Apr 30 01:03:44.777062 kubelet[2679]: E0430 01:03:44.775835 2679 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"3996524ae489f385d28a2b58d10c10a65638715ddcd28d6912668522411dd566\": not found" containerID="3996524ae489f385d28a2b58d10c10a65638715ddcd28d6912668522411dd566" Apr 30 01:03:44.777062 kubelet[2679]: I0430 01:03:44.775864 2679 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"3996524ae489f385d28a2b58d10c10a65638715ddcd28d6912668522411dd566"} err="failed to get container status \"3996524ae489f385d28a2b58d10c10a65638715ddcd28d6912668522411dd566\": rpc error: code = NotFound desc = an error occurred when try to find container \"3996524ae489f385d28a2b58d10c10a65638715ddcd28d6912668522411dd566\": not found" Apr 30 01:03:44.777062 kubelet[2679]: I0430 01:03:44.775962 2679 scope.go:117] "RemoveContainer" 
containerID="0cab5484701f0e86a6f3bc817fafb10fe2270b588d76de157fbcc2d384951442" Apr 30 01:03:44.777563 containerd[1485]: time="2025-04-30T01:03:44.777421996Z" level=error msg="ContainerStatus for \"0cab5484701f0e86a6f3bc817fafb10fe2270b588d76de157fbcc2d384951442\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"0cab5484701f0e86a6f3bc817fafb10fe2270b588d76de157fbcc2d384951442\": not found" Apr 30 01:03:44.777701 kubelet[2679]: E0430 01:03:44.777664 2679 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"0cab5484701f0e86a6f3bc817fafb10fe2270b588d76de157fbcc2d384951442\": not found" containerID="0cab5484701f0e86a6f3bc817fafb10fe2270b588d76de157fbcc2d384951442" Apr 30 01:03:44.777740 kubelet[2679]: I0430 01:03:44.777701 2679 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"0cab5484701f0e86a6f3bc817fafb10fe2270b588d76de157fbcc2d384951442"} err="failed to get container status \"0cab5484701f0e86a6f3bc817fafb10fe2270b588d76de157fbcc2d384951442\": rpc error: code = NotFound desc = an error occurred when try to find container \"0cab5484701f0e86a6f3bc817fafb10fe2270b588d76de157fbcc2d384951442\": not found" Apr 30 01:03:44.777766 kubelet[2679]: I0430 01:03:44.777741 2679 scope.go:117] "RemoveContainer" containerID="bd386815d411a847cc3cd0764bb0f916e309e9550da38c8068b14b046a40fe54" Apr 30 01:03:44.778361 containerd[1485]: time="2025-04-30T01:03:44.778292186Z" level=error msg="ContainerStatus for \"bd386815d411a847cc3cd0764bb0f916e309e9550da38c8068b14b046a40fe54\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"bd386815d411a847cc3cd0764bb0f916e309e9550da38c8068b14b046a40fe54\": not found" Apr 30 01:03:44.779185 containerd[1485]: time="2025-04-30T01:03:44.778770162Z" level=error msg="ContainerStatus for 
\"311cd946d26aa779a27634f40adbc86da20d9a1c128e6325ea634a064abdbb4c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"311cd946d26aa779a27634f40adbc86da20d9a1c128e6325ea634a064abdbb4c\": not found" Apr 30 01:03:44.779228 kubelet[2679]: E0430 01:03:44.778519 2679 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"bd386815d411a847cc3cd0764bb0f916e309e9550da38c8068b14b046a40fe54\": not found" containerID="bd386815d411a847cc3cd0764bb0f916e309e9550da38c8068b14b046a40fe54" Apr 30 01:03:44.779228 kubelet[2679]: I0430 01:03:44.778552 2679 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"bd386815d411a847cc3cd0764bb0f916e309e9550da38c8068b14b046a40fe54"} err="failed to get container status \"bd386815d411a847cc3cd0764bb0f916e309e9550da38c8068b14b046a40fe54\": rpc error: code = NotFound desc = an error occurred when try to find container \"bd386815d411a847cc3cd0764bb0f916e309e9550da38c8068b14b046a40fe54\": not found" Apr 30 01:03:44.779228 kubelet[2679]: I0430 01:03:44.778580 2679 scope.go:117] "RemoveContainer" containerID="311cd946d26aa779a27634f40adbc86da20d9a1c128e6325ea634a064abdbb4c" Apr 30 01:03:44.779228 kubelet[2679]: E0430 01:03:44.778986 2679 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"311cd946d26aa779a27634f40adbc86da20d9a1c128e6325ea634a064abdbb4c\": not found" containerID="311cd946d26aa779a27634f40adbc86da20d9a1c128e6325ea634a064abdbb4c" Apr 30 01:03:44.779228 kubelet[2679]: I0430 01:03:44.779014 2679 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"311cd946d26aa779a27634f40adbc86da20d9a1c128e6325ea634a064abdbb4c"} err="failed to get container status \"311cd946d26aa779a27634f40adbc86da20d9a1c128e6325ea634a064abdbb4c\": rpc 
error: code = NotFound desc = an error occurred when try to find container \"311cd946d26aa779a27634f40adbc86da20d9a1c128e6325ea634a064abdbb4c\": not found" Apr 30 01:03:44.779228 kubelet[2679]: I0430 01:03:44.779038 2679 scope.go:117] "RemoveContainer" containerID="d3498da2e6b9496dc614651d7895f9184596f90a56753f4153547e91d77bd613" Apr 30 01:03:44.779406 containerd[1485]: time="2025-04-30T01:03:44.779288220Z" level=error msg="ContainerStatus for \"d3498da2e6b9496dc614651d7895f9184596f90a56753f4153547e91d77bd613\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d3498da2e6b9496dc614651d7895f9184596f90a56753f4153547e91d77bd613\": not found" Apr 30 01:03:44.779952 kubelet[2679]: E0430 01:03:44.779489 2679 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d3498da2e6b9496dc614651d7895f9184596f90a56753f4153547e91d77bd613\": not found" containerID="d3498da2e6b9496dc614651d7895f9184596f90a56753f4153547e91d77bd613" Apr 30 01:03:44.779952 kubelet[2679]: I0430 01:03:44.779537 2679 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d3498da2e6b9496dc614651d7895f9184596f90a56753f4153547e91d77bd613"} err="failed to get container status \"d3498da2e6b9496dc614651d7895f9184596f90a56753f4153547e91d77bd613\": rpc error: code = NotFound desc = an error occurred when try to find container \"d3498da2e6b9496dc614651d7895f9184596f90a56753f4153547e91d77bd613\": not found" Apr 30 01:03:44.779952 kubelet[2679]: I0430 01:03:44.779562 2679 scope.go:117] "RemoveContainer" containerID="277ef2f89eb1adda336da0e9ebfdb2131cc88fcc9408aa004afd73fe861ddf86" Apr 30 01:03:44.783011 containerd[1485]: time="2025-04-30T01:03:44.782716497Z" level=info msg="RemoveContainer for \"277ef2f89eb1adda336da0e9ebfdb2131cc88fcc9408aa004afd73fe861ddf86\"" Apr 30 01:03:44.786440 containerd[1485]: time="2025-04-30T01:03:44.786249298Z" 
level=info msg="RemoveContainer for \"277ef2f89eb1adda336da0e9ebfdb2131cc88fcc9408aa004afd73fe861ddf86\" returns successfully" Apr 30 01:03:44.786804 kubelet[2679]: I0430 01:03:44.786718 2679 scope.go:117] "RemoveContainer" containerID="277ef2f89eb1adda336da0e9ebfdb2131cc88fcc9408aa004afd73fe861ddf86" Apr 30 01:03:44.787188 containerd[1485]: time="2025-04-30T01:03:44.787132328Z" level=error msg="ContainerStatus for \"277ef2f89eb1adda336da0e9ebfdb2131cc88fcc9408aa004afd73fe861ddf86\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"277ef2f89eb1adda336da0e9ebfdb2131cc88fcc9408aa004afd73fe861ddf86\": not found" Apr 30 01:03:44.787378 kubelet[2679]: E0430 01:03:44.787320 2679 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"277ef2f89eb1adda336da0e9ebfdb2131cc88fcc9408aa004afd73fe861ddf86\": not found" containerID="277ef2f89eb1adda336da0e9ebfdb2131cc88fcc9408aa004afd73fe861ddf86" Apr 30 01:03:44.787378 kubelet[2679]: I0430 01:03:44.787346 2679 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"277ef2f89eb1adda336da0e9ebfdb2131cc88fcc9408aa004afd73fe861ddf86"} err="failed to get container status \"277ef2f89eb1adda336da0e9ebfdb2131cc88fcc9408aa004afd73fe861ddf86\": rpc error: code = NotFound desc = an error occurred when try to find container \"277ef2f89eb1adda336da0e9ebfdb2131cc88fcc9408aa004afd73fe861ddf86\": not found" Apr 30 01:03:44.805047 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d12a285659da6f0f58b379fc3c05780c7675092331135c38ce0eb0ddf09229ad-rootfs.mount: Deactivated successfully. Apr 30 01:03:44.805157 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2a796d2b27f4bfa84059d0be769f3a9c3add4448dd997fce4558e1e7a3635e35-rootfs.mount: Deactivated successfully. 
Apr 30 01:03:44.805213 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-2a796d2b27f4bfa84059d0be769f3a9c3add4448dd997fce4558e1e7a3635e35-shm.mount: Deactivated successfully. Apr 30 01:03:44.805285 systemd[1]: var-lib-kubelet-pods-0515d739\x2d6d92\x2d4318\x2da36a\x2d6a9e3cd51ecf-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dfnbpd.mount: Deactivated successfully. Apr 30 01:03:44.805340 systemd[1]: var-lib-kubelet-pods-f9682ec3\x2d8b06\x2d480d\x2d8336\x2da286215ab182-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Apr 30 01:03:44.805403 systemd[1]: var-lib-kubelet-pods-f9682ec3\x2d8b06\x2d480d\x2d8336\x2da286215ab182-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d4p7cj.mount: Deactivated successfully. Apr 30 01:03:44.805460 systemd[1]: var-lib-kubelet-pods-f9682ec3\x2d8b06\x2d480d\x2d8336\x2da286215ab182-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Apr 30 01:03:45.759309 kubelet[2679]: I0430 01:03:45.758089 2679 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0515d739-6d92-4318-a36a-6a9e3cd51ecf" path="/var/lib/kubelet/pods/0515d739-6d92-4318-a36a-6a9e3cd51ecf/volumes" Apr 30 01:03:45.759309 kubelet[2679]: I0430 01:03:45.759038 2679 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f9682ec3-8b06-480d-8336-a286215ab182" path="/var/lib/kubelet/pods/f9682ec3-8b06-480d-8336-a286215ab182/volumes" Apr 30 01:03:45.878693 sshd[4259]: pam_unix(sshd:session): session closed for user core Apr 30 01:03:45.884509 systemd[1]: sshd@19-88.198.162.73:22-139.178.68.195:38572.service: Deactivated successfully. Apr 30 01:03:45.887023 systemd[1]: session-20.scope: Deactivated successfully. Apr 30 01:03:45.887300 systemd[1]: session-20.scope: Consumed 1.572s CPU time. Apr 30 01:03:45.888772 systemd-logind[1460]: Session 20 logged out. Waiting for processes to exit. Apr 30 01:03:45.890788 systemd-logind[1460]: Removed session 20. 
Apr 30 01:03:46.058591 systemd[1]: Started sshd@20-88.198.162.73:22-139.178.68.195:44102.service - OpenSSH per-connection server daemon (139.178.68.195:44102). Apr 30 01:03:46.935326 kubelet[2679]: E0430 01:03:46.935241 2679 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 30 01:03:47.036738 sshd[4428]: Accepted publickey for core from 139.178.68.195 port 44102 ssh2: RSA SHA256:ACLXUt+7uFWNZVvklpgswHu5AM5+eT4ezI3y1kPpVUY Apr 30 01:03:47.038724 sshd[4428]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 01:03:47.043560 systemd-logind[1460]: New session 21 of user core. Apr 30 01:03:47.055582 systemd[1]: Started session-21.scope - Session 21 of User core. Apr 30 01:03:48.640974 kubelet[2679]: I0430 01:03:48.640928 2679 memory_manager.go:355] "RemoveStaleState removing state" podUID="0515d739-6d92-4318-a36a-6a9e3cd51ecf" containerName="cilium-operator" Apr 30 01:03:48.640974 kubelet[2679]: I0430 01:03:48.640960 2679 memory_manager.go:355] "RemoveStaleState removing state" podUID="f9682ec3-8b06-480d-8336-a286215ab182" containerName="cilium-agent" Apr 30 01:03:48.651741 systemd[1]: Created slice kubepods-burstable-pod012c9c21_1c9c_47de_876e_fde3cc962e9f.slice - libcontainer container kubepods-burstable-pod012c9c21_1c9c_47de_876e_fde3cc962e9f.slice. 
Apr 30 01:03:48.784990 kubelet[2679]: I0430 01:03:48.784858 2679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/012c9c21-1c9c-47de-876e-fde3cc962e9f-host-proc-sys-net\") pod \"cilium-2cghc\" (UID: \"012c9c21-1c9c-47de-876e-fde3cc962e9f\") " pod="kube-system/cilium-2cghc" Apr 30 01:03:48.784990 kubelet[2679]: I0430 01:03:48.784978 2679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/012c9c21-1c9c-47de-876e-fde3cc962e9f-lib-modules\") pod \"cilium-2cghc\" (UID: \"012c9c21-1c9c-47de-876e-fde3cc962e9f\") " pod="kube-system/cilium-2cghc" Apr 30 01:03:48.785241 kubelet[2679]: I0430 01:03:48.785014 2679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/012c9c21-1c9c-47de-876e-fde3cc962e9f-cilium-run\") pod \"cilium-2cghc\" (UID: \"012c9c21-1c9c-47de-876e-fde3cc962e9f\") " pod="kube-system/cilium-2cghc" Apr 30 01:03:48.785241 kubelet[2679]: I0430 01:03:48.785068 2679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/012c9c21-1c9c-47de-876e-fde3cc962e9f-cni-path\") pod \"cilium-2cghc\" (UID: \"012c9c21-1c9c-47de-876e-fde3cc962e9f\") " pod="kube-system/cilium-2cghc" Apr 30 01:03:48.785241 kubelet[2679]: I0430 01:03:48.785123 2679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/012c9c21-1c9c-47de-876e-fde3cc962e9f-xtables-lock\") pod \"cilium-2cghc\" (UID: \"012c9c21-1c9c-47de-876e-fde3cc962e9f\") " pod="kube-system/cilium-2cghc" Apr 30 01:03:48.785241 kubelet[2679]: I0430 01:03:48.785150 2679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/012c9c21-1c9c-47de-876e-fde3cc962e9f-cilium-ipsec-secrets\") pod \"cilium-2cghc\" (UID: \"012c9c21-1c9c-47de-876e-fde3cc962e9f\") " pod="kube-system/cilium-2cghc" Apr 30 01:03:48.785241 kubelet[2679]: I0430 01:03:48.785199 2679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b7254\" (UniqueName: \"kubernetes.io/projected/012c9c21-1c9c-47de-876e-fde3cc962e9f-kube-api-access-b7254\") pod \"cilium-2cghc\" (UID: \"012c9c21-1c9c-47de-876e-fde3cc962e9f\") " pod="kube-system/cilium-2cghc" Apr 30 01:03:48.785241 kubelet[2679]: I0430 01:03:48.785229 2679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/012c9c21-1c9c-47de-876e-fde3cc962e9f-hubble-tls\") pod \"cilium-2cghc\" (UID: \"012c9c21-1c9c-47de-876e-fde3cc962e9f\") " pod="kube-system/cilium-2cghc" Apr 30 01:03:48.785693 kubelet[2679]: I0430 01:03:48.785327 2679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/012c9c21-1c9c-47de-876e-fde3cc962e9f-cilium-cgroup\") pod \"cilium-2cghc\" (UID: \"012c9c21-1c9c-47de-876e-fde3cc962e9f\") " pod="kube-system/cilium-2cghc" Apr 30 01:03:48.785693 kubelet[2679]: I0430 01:03:48.785365 2679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/012c9c21-1c9c-47de-876e-fde3cc962e9f-host-proc-sys-kernel\") pod \"cilium-2cghc\" (UID: \"012c9c21-1c9c-47de-876e-fde3cc962e9f\") " pod="kube-system/cilium-2cghc" Apr 30 01:03:48.785693 kubelet[2679]: I0430 01:03:48.785396 2679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: 
\"kubernetes.io/host-path/012c9c21-1c9c-47de-876e-fde3cc962e9f-etc-cni-netd\") pod \"cilium-2cghc\" (UID: \"012c9c21-1c9c-47de-876e-fde3cc962e9f\") " pod="kube-system/cilium-2cghc" Apr 30 01:03:48.785693 kubelet[2679]: I0430 01:03:48.785423 2679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/012c9c21-1c9c-47de-876e-fde3cc962e9f-bpf-maps\") pod \"cilium-2cghc\" (UID: \"012c9c21-1c9c-47de-876e-fde3cc962e9f\") " pod="kube-system/cilium-2cghc" Apr 30 01:03:48.785693 kubelet[2679]: I0430 01:03:48.785449 2679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/012c9c21-1c9c-47de-876e-fde3cc962e9f-hostproc\") pod \"cilium-2cghc\" (UID: \"012c9c21-1c9c-47de-876e-fde3cc962e9f\") " pod="kube-system/cilium-2cghc" Apr 30 01:03:48.785693 kubelet[2679]: I0430 01:03:48.785480 2679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/012c9c21-1c9c-47de-876e-fde3cc962e9f-cilium-config-path\") pod \"cilium-2cghc\" (UID: \"012c9c21-1c9c-47de-876e-fde3cc962e9f\") " pod="kube-system/cilium-2cghc" Apr 30 01:03:48.786124 kubelet[2679]: I0430 01:03:48.785527 2679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/012c9c21-1c9c-47de-876e-fde3cc962e9f-clustermesh-secrets\") pod \"cilium-2cghc\" (UID: \"012c9c21-1c9c-47de-876e-fde3cc962e9f\") " pod="kube-system/cilium-2cghc" Apr 30 01:03:48.806627 sshd[4428]: pam_unix(sshd:session): session closed for user core Apr 30 01:03:48.812232 systemd-logind[1460]: Session 21 logged out. Waiting for processes to exit. Apr 30 01:03:48.813420 systemd[1]: sshd@20-88.198.162.73:22-139.178.68.195:44102.service: Deactivated successfully. 
Apr 30 01:03:48.816097 systemd[1]: session-21.scope: Deactivated successfully. Apr 30 01:03:48.817868 systemd-logind[1460]: Removed session 21. Apr 30 01:03:48.959580 containerd[1485]: time="2025-04-30T01:03:48.959191888Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-2cghc,Uid:012c9c21-1c9c-47de-876e-fde3cc962e9f,Namespace:kube-system,Attempt:0,}" Apr 30 01:03:48.988416 containerd[1485]: time="2025-04-30T01:03:48.988113600Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 01:03:48.990396 containerd[1485]: time="2025-04-30T01:03:48.988561735Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 01:03:48.990396 containerd[1485]: time="2025-04-30T01:03:48.988637738Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 01:03:48.990396 containerd[1485]: time="2025-04-30T01:03:48.988842585Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 01:03:48.991639 systemd[1]: Started sshd@21-88.198.162.73:22-139.178.68.195:44108.service - OpenSSH per-connection server daemon (139.178.68.195:44108). Apr 30 01:03:49.014447 systemd[1]: Started cri-containerd-306129a3e25fef69ec8000580dc3996c7c1e709237b8b9e38c2842f5ce29f570.scope - libcontainer container 306129a3e25fef69ec8000580dc3996c7c1e709237b8b9e38c2842f5ce29f570. 
Apr 30 01:03:49.039804 containerd[1485]: time="2025-04-30T01:03:49.039749251Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-2cghc,Uid:012c9c21-1c9c-47de-876e-fde3cc962e9f,Namespace:kube-system,Attempt:0,} returns sandbox id \"306129a3e25fef69ec8000580dc3996c7c1e709237b8b9e38c2842f5ce29f570\"" Apr 30 01:03:49.043605 containerd[1485]: time="2025-04-30T01:03:49.043560982Z" level=info msg="CreateContainer within sandbox \"306129a3e25fef69ec8000580dc3996c7c1e709237b8b9e38c2842f5ce29f570\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Apr 30 01:03:49.056605 containerd[1485]: time="2025-04-30T01:03:49.056431103Z" level=info msg="CreateContainer within sandbox \"306129a3e25fef69ec8000580dc3996c7c1e709237b8b9e38c2842f5ce29f570\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"356743e0d39f420c7073347cbbfbf8016d6a2c32531a1f03ff5fd1609cba5097\"" Apr 30 01:03:49.058896 containerd[1485]: time="2025-04-30T01:03:49.058846586Z" level=info msg="StartContainer for \"356743e0d39f420c7073347cbbfbf8016d6a2c32531a1f03ff5fd1609cba5097\"" Apr 30 01:03:49.089456 systemd[1]: Started cri-containerd-356743e0d39f420c7073347cbbfbf8016d6a2c32531a1f03ff5fd1609cba5097.scope - libcontainer container 356743e0d39f420c7073347cbbfbf8016d6a2c32531a1f03ff5fd1609cba5097. Apr 30 01:03:49.118046 containerd[1485]: time="2025-04-30T01:03:49.117348233Z" level=info msg="StartContainer for \"356743e0d39f420c7073347cbbfbf8016d6a2c32531a1f03ff5fd1609cba5097\" returns successfully" Apr 30 01:03:49.128816 systemd[1]: cri-containerd-356743e0d39f420c7073347cbbfbf8016d6a2c32531a1f03ff5fd1609cba5097.scope: Deactivated successfully. 
Apr 30 01:03:49.156761 kubelet[2679]: I0430 01:03:49.156359 2679 setters.go:602] "Node became not ready" node="ci-4081-3-3-a-adb74c37b4" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-04-30T01:03:49Z","lastTransitionTime":"2025-04-30T01:03:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Apr 30 01:03:49.174985 containerd[1485]: time="2025-04-30T01:03:49.174678040Z" level=info msg="shim disconnected" id=356743e0d39f420c7073347cbbfbf8016d6a2c32531a1f03ff5fd1609cba5097 namespace=k8s.io Apr 30 01:03:49.175429 containerd[1485]: time="2025-04-30T01:03:49.175145776Z" level=warning msg="cleaning up after shim disconnected" id=356743e0d39f420c7073347cbbfbf8016d6a2c32531a1f03ff5fd1609cba5097 namespace=k8s.io Apr 30 01:03:49.175429 containerd[1485]: time="2025-04-30T01:03:49.175165216Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 01:03:49.745239 containerd[1485]: time="2025-04-30T01:03:49.745180690Z" level=info msg="CreateContainer within sandbox \"306129a3e25fef69ec8000580dc3996c7c1e709237b8b9e38c2842f5ce29f570\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Apr 30 01:03:49.761352 containerd[1485]: time="2025-04-30T01:03:49.761292243Z" level=info msg="CreateContainer within sandbox \"306129a3e25fef69ec8000580dc3996c7c1e709237b8b9e38c2842f5ce29f570\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"05658c649c6b866ebb4f6ee0e8967f6563e29100db660ca3940367c1336f1e4b\"" Apr 30 01:03:49.762146 containerd[1485]: time="2025-04-30T01:03:49.761980386Z" level=info msg="StartContainer for \"05658c649c6b866ebb4f6ee0e8967f6563e29100db660ca3940367c1336f1e4b\"" Apr 30 01:03:49.792503 systemd[1]: Started cri-containerd-05658c649c6b866ebb4f6ee0e8967f6563e29100db660ca3940367c1336f1e4b.scope - libcontainer container 
05658c649c6b866ebb4f6ee0e8967f6563e29100db660ca3940367c1336f1e4b. Apr 30 01:03:49.824070 containerd[1485]: time="2025-04-30T01:03:49.823992833Z" level=info msg="StartContainer for \"05658c649c6b866ebb4f6ee0e8967f6563e29100db660ca3940367c1336f1e4b\" returns successfully" Apr 30 01:03:49.835043 systemd[1]: cri-containerd-05658c649c6b866ebb4f6ee0e8967f6563e29100db660ca3940367c1336f1e4b.scope: Deactivated successfully. Apr 30 01:03:49.861638 containerd[1485]: time="2025-04-30T01:03:49.861556682Z" level=info msg="shim disconnected" id=05658c649c6b866ebb4f6ee0e8967f6563e29100db660ca3940367c1336f1e4b namespace=k8s.io Apr 30 01:03:49.861638 containerd[1485]: time="2025-04-30T01:03:49.861635445Z" level=warning msg="cleaning up after shim disconnected" id=05658c649c6b866ebb4f6ee0e8967f6563e29100db660ca3940367c1336f1e4b namespace=k8s.io Apr 30 01:03:49.861985 containerd[1485]: time="2025-04-30T01:03:49.861653605Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 01:03:49.984052 sshd[4453]: Accepted publickey for core from 139.178.68.195 port 44108 ssh2: RSA SHA256:ACLXUt+7uFWNZVvklpgswHu5AM5+eT4ezI3y1kPpVUY Apr 30 01:03:49.986703 sshd[4453]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 01:03:49.991489 systemd-logind[1460]: New session 22 of user core. Apr 30 01:03:50.002459 systemd[1]: Started session-22.scope - Session 22 of User core. Apr 30 01:03:50.663015 sshd[4453]: pam_unix(sshd:session): session closed for user core Apr 30 01:03:50.670057 systemd-logind[1460]: Session 22 logged out. Waiting for processes to exit. Apr 30 01:03:50.670447 systemd[1]: sshd@21-88.198.162.73:22-139.178.68.195:44108.service: Deactivated successfully. Apr 30 01:03:50.673219 systemd[1]: session-22.scope: Deactivated successfully. Apr 30 01:03:50.674740 systemd-logind[1460]: Removed session 22. 
Apr 30 01:03:50.751095 containerd[1485]: time="2025-04-30T01:03:50.750489511Z" level=info msg="CreateContainer within sandbox \"306129a3e25fef69ec8000580dc3996c7c1e709237b8b9e38c2842f5ce29f570\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Apr 30 01:03:50.770586 containerd[1485]: time="2025-04-30T01:03:50.770224589Z" level=info msg="CreateContainer within sandbox \"306129a3e25fef69ec8000580dc3996c7c1e709237b8b9e38c2842f5ce29f570\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"e22f97b68a5d51d46d5cc94949ec725594dc2a848fd8306ad2b9197deeca3ef5\"" Apr 30 01:03:50.771249 containerd[1485]: time="2025-04-30T01:03:50.771063738Z" level=info msg="StartContainer for \"e22f97b68a5d51d46d5cc94949ec725594dc2a848fd8306ad2b9197deeca3ef5\"" Apr 30 01:03:50.807469 systemd[1]: Started cri-containerd-e22f97b68a5d51d46d5cc94949ec725594dc2a848fd8306ad2b9197deeca3ef5.scope - libcontainer container e22f97b68a5d51d46d5cc94949ec725594dc2a848fd8306ad2b9197deeca3ef5. Apr 30 01:03:50.841647 systemd[1]: Started sshd@22-88.198.162.73:22-139.178.68.195:44112.service - OpenSSH per-connection server daemon (139.178.68.195:44112). Apr 30 01:03:50.851074 containerd[1485]: time="2025-04-30T01:03:50.850622348Z" level=info msg="StartContainer for \"e22f97b68a5d51d46d5cc94949ec725594dc2a848fd8306ad2b9197deeca3ef5\" returns successfully" Apr 30 01:03:50.855641 systemd[1]: cri-containerd-e22f97b68a5d51d46d5cc94949ec725594dc2a848fd8306ad2b9197deeca3ef5.scope: Deactivated successfully. 
Apr 30 01:03:50.895254 containerd[1485]: time="2025-04-30T01:03:50.893254892Z" level=info msg="shim disconnected" id=e22f97b68a5d51d46d5cc94949ec725594dc2a848fd8306ad2b9197deeca3ef5 namespace=k8s.io Apr 30 01:03:50.895254 containerd[1485]: time="2025-04-30T01:03:50.893345335Z" level=warning msg="cleaning up after shim disconnected" id=e22f97b68a5d51d46d5cc94949ec725594dc2a848fd8306ad2b9197deeca3ef5 namespace=k8s.io Apr 30 01:03:50.895254 containerd[1485]: time="2025-04-30T01:03:50.893364736Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 01:03:50.896798 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e22f97b68a5d51d46d5cc94949ec725594dc2a848fd8306ad2b9197deeca3ef5-rootfs.mount: Deactivated successfully. Apr 30 01:03:51.756347 containerd[1485]: time="2025-04-30T01:03:51.756122765Z" level=info msg="CreateContainer within sandbox \"306129a3e25fef69ec8000580dc3996c7c1e709237b8b9e38c2842f5ce29f570\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Apr 30 01:03:51.781523 containerd[1485]: time="2025-04-30T01:03:51.781394193Z" level=info msg="CreateContainer within sandbox \"306129a3e25fef69ec8000580dc3996c7c1e709237b8b9e38c2842f5ce29f570\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"6c3f4f907122a6d8f8b68209882e87d25316e1d985c535cc812c98e87d42d495\"" Apr 30 01:03:51.782291 containerd[1485]: time="2025-04-30T01:03:51.782251262Z" level=info msg="StartContainer for \"6c3f4f907122a6d8f8b68209882e87d25316e1d985c535cc812c98e87d42d495\"" Apr 30 01:03:51.818506 systemd[1]: Started cri-containerd-6c3f4f907122a6d8f8b68209882e87d25316e1d985c535cc812c98e87d42d495.scope - libcontainer container 6c3f4f907122a6d8f8b68209882e87d25316e1d985c535cc812c98e87d42d495. 
Apr 30 01:03:51.838710 sshd[4643]: Accepted publickey for core from 139.178.68.195 port 44112 ssh2: RSA SHA256:ACLXUt+7uFWNZVvklpgswHu5AM5+eT4ezI3y1kPpVUY Apr 30 01:03:51.839956 sshd[4643]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 01:03:51.847462 systemd-logind[1460]: New session 23 of user core. Apr 30 01:03:51.853494 systemd[1]: Started session-23.scope - Session 23 of User core. Apr 30 01:03:51.857760 systemd[1]: cri-containerd-6c3f4f907122a6d8f8b68209882e87d25316e1d985c535cc812c98e87d42d495.scope: Deactivated successfully. Apr 30 01:03:51.867352 containerd[1485]: time="2025-04-30T01:03:51.867252342Z" level=info msg="StartContainer for \"6c3f4f907122a6d8f8b68209882e87d25316e1d985c535cc812c98e87d42d495\" returns successfully" Apr 30 01:03:51.869819 containerd[1485]: time="2025-04-30T01:03:51.868467463Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod012c9c21_1c9c_47de_876e_fde3cc962e9f.slice/cri-containerd-6c3f4f907122a6d8f8b68209882e87d25316e1d985c535cc812c98e87d42d495.scope/memory.events\": no such file or directory" Apr 30 01:03:51.896985 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6c3f4f907122a6d8f8b68209882e87d25316e1d985c535cc812c98e87d42d495-rootfs.mount: Deactivated successfully. 
Apr 30 01:03:51.904073 containerd[1485]: time="2025-04-30T01:03:51.904020764Z" level=info msg="shim disconnected" id=6c3f4f907122a6d8f8b68209882e87d25316e1d985c535cc812c98e87d42d495 namespace=k8s.io Apr 30 01:03:51.904446 containerd[1485]: time="2025-04-30T01:03:51.904292414Z" level=warning msg="cleaning up after shim disconnected" id=6c3f4f907122a6d8f8b68209882e87d25316e1d985c535cc812c98e87d42d495 namespace=k8s.io Apr 30 01:03:51.904446 containerd[1485]: time="2025-04-30T01:03:51.904308294Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 01:03:51.937336 kubelet[2679]: E0430 01:03:51.937255 2679 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 30 01:03:52.762545 containerd[1485]: time="2025-04-30T01:03:52.762406741Z" level=info msg="CreateContainer within sandbox \"306129a3e25fef69ec8000580dc3996c7c1e709237b8b9e38c2842f5ce29f570\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Apr 30 01:03:52.777852 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount450586504.mount: Deactivated successfully. Apr 30 01:03:52.780232 containerd[1485]: time="2025-04-30T01:03:52.780157511Z" level=info msg="CreateContainer within sandbox \"306129a3e25fef69ec8000580dc3996c7c1e709237b8b9e38c2842f5ce29f570\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"c13446cc651edb28b8262445106a1c677e38369494dd90ca23b3a32bdc7e219d\"" Apr 30 01:03:52.783219 containerd[1485]: time="2025-04-30T01:03:52.781291350Z" level=info msg="StartContainer for \"c13446cc651edb28b8262445106a1c677e38369494dd90ca23b3a32bdc7e219d\"" Apr 30 01:03:52.818533 systemd[1]: Started cri-containerd-c13446cc651edb28b8262445106a1c677e38369494dd90ca23b3a32bdc7e219d.scope - libcontainer container c13446cc651edb28b8262445106a1c677e38369494dd90ca23b3a32bdc7e219d. 
Apr 30 01:03:52.865173 containerd[1485]: time="2025-04-30T01:03:52.865120151Z" level=info msg="StartContainer for \"c13446cc651edb28b8262445106a1c677e38369494dd90ca23b3a32bdc7e219d\" returns successfully" Apr 30 01:03:53.184291 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce)) Apr 30 01:03:53.782494 kubelet[2679]: I0430 01:03:53.782429 2679 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-2cghc" podStartSLOduration=5.782401489 podStartE2EDuration="5.782401489s" podCreationTimestamp="2025-04-30 01:03:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 01:03:53.780783593 +0000 UTC m=+342.147743768" watchObservedRunningTime="2025-04-30 01:03:53.782401489 +0000 UTC m=+342.149361664" Apr 30 01:03:56.091459 systemd-networkd[1381]: lxc_health: Link UP Apr 30 01:03:56.102291 systemd-networkd[1381]: lxc_health: Gained carrier Apr 30 01:03:57.564451 systemd-networkd[1381]: lxc_health: Gained IPv6LL Apr 30 01:04:03.409640 sshd[4643]: pam_unix(sshd:session): session closed for user core Apr 30 01:04:03.414638 systemd[1]: sshd@22-88.198.162.73:22-139.178.68.195:44112.service: Deactivated successfully. Apr 30 01:04:03.417862 systemd[1]: session-23.scope: Deactivated successfully. Apr 30 01:04:03.419799 systemd-logind[1460]: Session 23 logged out. Waiting for processes to exit. Apr 30 01:04:03.421134 systemd-logind[1460]: Removed session 23. 
Apr 30 01:04:11.796715 containerd[1485]: time="2025-04-30T01:04:11.796616269Z" level=info msg="StopPodSandbox for \"2a796d2b27f4bfa84059d0be769f3a9c3add4448dd997fce4558e1e7a3635e35\"" Apr 30 01:04:11.797167 containerd[1485]: time="2025-04-30T01:04:11.796902479Z" level=info msg="TearDown network for sandbox \"2a796d2b27f4bfa84059d0be769f3a9c3add4448dd997fce4558e1e7a3635e35\" successfully" Apr 30 01:04:11.797167 containerd[1485]: time="2025-04-30T01:04:11.796933240Z" level=info msg="StopPodSandbox for \"2a796d2b27f4bfa84059d0be769f3a9c3add4448dd997fce4558e1e7a3635e35\" returns successfully" Apr 30 01:04:11.799194 containerd[1485]: time="2025-04-30T01:04:11.797691947Z" level=info msg="RemovePodSandbox for \"2a796d2b27f4bfa84059d0be769f3a9c3add4448dd997fce4558e1e7a3635e35\"" Apr 30 01:04:11.799194 containerd[1485]: time="2025-04-30T01:04:11.797730908Z" level=info msg="Forcibly stopping sandbox \"2a796d2b27f4bfa84059d0be769f3a9c3add4448dd997fce4558e1e7a3635e35\"" Apr 30 01:04:11.799194 containerd[1485]: time="2025-04-30T01:04:11.797793950Z" level=info msg="TearDown network for sandbox \"2a796d2b27f4bfa84059d0be769f3a9c3add4448dd997fce4558e1e7a3635e35\" successfully" Apr 30 01:04:11.801241 containerd[1485]: time="2025-04-30T01:04:11.801200868Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2a796d2b27f4bfa84059d0be769f3a9c3add4448dd997fce4558e1e7a3635e35\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 30 01:04:11.801435 containerd[1485]: time="2025-04-30T01:04:11.801414156Z" level=info msg="RemovePodSandbox \"2a796d2b27f4bfa84059d0be769f3a9c3add4448dd997fce4558e1e7a3635e35\" returns successfully" Apr 30 01:04:11.801902 containerd[1485]: time="2025-04-30T01:04:11.801873252Z" level=info msg="StopPodSandbox for \"d12a285659da6f0f58b379fc3c05780c7675092331135c38ce0eb0ddf09229ad\"" Apr 30 01:04:11.801974 containerd[1485]: time="2025-04-30T01:04:11.801953974Z" level=info msg="TearDown network for sandbox \"d12a285659da6f0f58b379fc3c05780c7675092331135c38ce0eb0ddf09229ad\" successfully" Apr 30 01:04:11.801974 containerd[1485]: time="2025-04-30T01:04:11.801964775Z" level=info msg="StopPodSandbox for \"d12a285659da6f0f58b379fc3c05780c7675092331135c38ce0eb0ddf09229ad\" returns successfully" Apr 30 01:04:11.803467 containerd[1485]: time="2025-04-30T01:04:11.802364269Z" level=info msg="RemovePodSandbox for \"d12a285659da6f0f58b379fc3c05780c7675092331135c38ce0eb0ddf09229ad\"" Apr 30 01:04:11.803467 containerd[1485]: time="2025-04-30T01:04:11.802390790Z" level=info msg="Forcibly stopping sandbox \"d12a285659da6f0f58b379fc3c05780c7675092331135c38ce0eb0ddf09229ad\"" Apr 30 01:04:11.803467 containerd[1485]: time="2025-04-30T01:04:11.802438351Z" level=info msg="TearDown network for sandbox \"d12a285659da6f0f58b379fc3c05780c7675092331135c38ce0eb0ddf09229ad\" successfully" Apr 30 01:04:11.806070 containerd[1485]: time="2025-04-30T01:04:11.805949393Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d12a285659da6f0f58b379fc3c05780c7675092331135c38ce0eb0ddf09229ad\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Apr 30 01:04:11.806070 containerd[1485]: time="2025-04-30T01:04:11.806004595Z" level=info msg="RemovePodSandbox \"d12a285659da6f0f58b379fc3c05780c7675092331135c38ce0eb0ddf09229ad\" returns successfully"