Feb 13 15:35:29.882834 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Feb 13 15:35:29.882859 kernel: Linux version 6.6.71-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p1) 13.3.1 20240614, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT Thu Feb 13 13:57:00 -00 2025
Feb 13 15:35:29.882869 kernel: KASLR enabled
Feb 13 15:35:29.882875 kernel: efi: EFI v2.7 by Ubuntu distribution of EDK II
Feb 13 15:35:29.882881 kernel: efi: SMBIOS 3.0=0x139ed0000 MEMATTR=0x1390c1018 ACPI 2.0=0x136760018 RNG=0x13676e918 MEMRESERVE=0x136b43d98
Feb 13 15:35:29.882887 kernel: random: crng init done
Feb 13 15:35:29.882893 kernel: secureboot: Secure boot disabled
Feb 13 15:35:29.882899 kernel: ACPI: Early table checksum verification disabled
Feb 13 15:35:29.882905 kernel: ACPI: RSDP 0x0000000136760018 000024 (v02 BOCHS )
Feb 13 15:35:29.882913 kernel: ACPI: XSDT 0x000000013676FE98 00006C (v01 BOCHS BXPC 00000001 01000013)
Feb 13 15:35:29.882920 kernel: ACPI: FACP 0x000000013676FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 15:35:29.882926 kernel: ACPI: DSDT 0x0000000136767518 001468 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 15:35:29.882931 kernel: ACPI: APIC 0x000000013676FC18 000108 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 15:35:29.882937 kernel: ACPI: PPTT 0x000000013676FD98 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 15:35:29.882945 kernel: ACPI: GTDT 0x000000013676D898 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 15:35:29.882953 kernel: ACPI: MCFG 0x000000013676FF98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 15:35:29.882959 kernel: ACPI: SPCR 0x000000013676E818 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 15:35:29.882965 kernel: ACPI: DBG2 0x000000013676E898 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 15:35:29.882971 kernel: ACPI: IORT 0x000000013676E418 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 15:35:29.882978 kernel: ACPI: BGRT 0x000000013676E798 000038 (v01 INTEL EDK2 00000002 01000013)
Feb 13 15:35:29.882984 kernel: ACPI: SPCR: console: pl011,mmio32,0x9000000,9600
Feb 13 15:35:29.882990 kernel: NUMA: Failed to initialise from firmware
Feb 13 15:35:29.882997 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x0000000139ffffff]
Feb 13 15:35:29.883003 kernel: NUMA: NODE_DATA [mem 0x13966f800-0x139674fff]
Feb 13 15:35:29.883009 kernel: Zone ranges:
Feb 13 15:35:29.883016 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff]
Feb 13 15:35:29.883023 kernel: DMA32 empty
Feb 13 15:35:29.883029 kernel: Normal [mem 0x0000000100000000-0x0000000139ffffff]
Feb 13 15:35:29.883035 kernel: Movable zone start for each node
Feb 13 15:35:29.883041 kernel: Early memory node ranges
Feb 13 15:35:29.883047 kernel: node 0: [mem 0x0000000040000000-0x000000013676ffff]
Feb 13 15:35:29.883054 kernel: node 0: [mem 0x0000000136770000-0x0000000136b3ffff]
Feb 13 15:35:29.883060 kernel: node 0: [mem 0x0000000136b40000-0x0000000139e1ffff]
Feb 13 15:35:29.883066 kernel: node 0: [mem 0x0000000139e20000-0x0000000139eaffff]
Feb 13 15:35:29.883072 kernel: node 0: [mem 0x0000000139eb0000-0x0000000139ebffff]
Feb 13 15:35:29.883078 kernel: node 0: [mem 0x0000000139ec0000-0x0000000139fdffff]
Feb 13 15:35:29.883085 kernel: node 0: [mem 0x0000000139fe0000-0x0000000139ffffff]
Feb 13 15:35:29.883092 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x0000000139ffffff]
Feb 13 15:35:29.883099 kernel: On node 0, zone Normal: 24576 pages in unavailable ranges
Feb 13 15:35:29.883105 kernel: psci: probing for conduit method from ACPI.
Feb 13 15:35:29.883114 kernel: psci: PSCIv1.1 detected in firmware.
Feb 13 15:35:29.883121 kernel: psci: Using standard PSCI v0.2 function IDs
Feb 13 15:35:29.883128 kernel: psci: Trusted OS migration not required
Feb 13 15:35:29.885312 kernel: psci: SMC Calling Convention v1.1
Feb 13 15:35:29.885342 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Feb 13 15:35:29.885350 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Feb 13 15:35:29.885359 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Feb 13 15:35:29.885366 kernel: pcpu-alloc: [0] 0 [0] 1
Feb 13 15:35:29.885374 kernel: Detected PIPT I-cache on CPU0
Feb 13 15:35:29.885381 kernel: CPU features: detected: GIC system register CPU interface
Feb 13 15:35:29.885388 kernel: CPU features: detected: Hardware dirty bit management
Feb 13 15:35:29.885396 kernel: CPU features: detected: Spectre-v4
Feb 13 15:35:29.885403 kernel: CPU features: detected: Spectre-BHB
Feb 13 15:35:29.885419 kernel: CPU features: kernel page table isolation forced ON by KASLR
Feb 13 15:35:29.885426 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Feb 13 15:35:29.885432 kernel: CPU features: detected: ARM erratum 1418040
Feb 13 15:35:29.885439 kernel: CPU features: detected: SSBS not fully self-synchronizing
Feb 13 15:35:29.885446 kernel: alternatives: applying boot alternatives
Feb 13 15:35:29.885454 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=07e9b8867aadd0b2e77ba5338d18cdd10706c658e0d745a78e129bcae9a0e4c6
Feb 13 15:35:29.885461 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Feb 13 15:35:29.885468 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Feb 13 15:35:29.885475 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Feb 13 15:35:29.885482 kernel: Fallback order for Node 0: 0
Feb 13 15:35:29.885489 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1008000
Feb 13 15:35:29.885497 kernel: Policy zone: Normal
Feb 13 15:35:29.885504 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Feb 13 15:35:29.885510 kernel: software IO TLB: area num 2.
Feb 13 15:35:29.885517 kernel: software IO TLB: mapped [mem 0x00000000fbfff000-0x00000000fffff000] (64MB)
Feb 13 15:35:29.885525 kernel: Memory: 3882680K/4096000K available (10240K kernel code, 2184K rwdata, 8096K rodata, 39680K init, 897K bss, 213320K reserved, 0K cma-reserved)
Feb 13 15:35:29.885532 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Feb 13 15:35:29.885538 kernel: rcu: Preemptible hierarchical RCU implementation.
Feb 13 15:35:29.885546 kernel: rcu: RCU event tracing is enabled.
Feb 13 15:35:29.885553 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Feb 13 15:35:29.885560 kernel: Trampoline variant of Tasks RCU enabled.
Feb 13 15:35:29.885567 kernel: Tracing variant of Tasks RCU enabled.
Feb 13 15:35:29.885573 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Feb 13 15:35:29.885582 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Feb 13 15:35:29.885589 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Feb 13 15:35:29.885596 kernel: GICv3: 256 SPIs implemented
Feb 13 15:35:29.885603 kernel: GICv3: 0 Extended SPIs implemented
Feb 13 15:35:29.885609 kernel: Root IRQ handler: gic_handle_irq
Feb 13 15:35:29.885616 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Feb 13 15:35:29.885623 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Feb 13 15:35:29.885629 kernel: ITS [mem 0x08080000-0x0809ffff]
Feb 13 15:35:29.885638 kernel: ITS@0x0000000008080000: allocated 8192 Devices @1000c0000 (indirect, esz 8, psz 64K, shr 1)
Feb 13 15:35:29.885647 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @1000d0000 (flat, esz 8, psz 64K, shr 1)
Feb 13 15:35:29.885655 kernel: GICv3: using LPI property table @0x00000001000e0000
Feb 13 15:35:29.885665 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000001000f0000
Feb 13 15:35:29.885672 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Feb 13 15:35:29.885679 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 13 15:35:29.885686 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Feb 13 15:35:29.885692 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Feb 13 15:35:29.885699 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Feb 13 15:35:29.885706 kernel: Console: colour dummy device 80x25
Feb 13 15:35:29.885713 kernel: ACPI: Core revision 20230628
Feb 13 15:35:29.885721 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Feb 13 15:35:29.885728 kernel: pid_max: default: 32768 minimum: 301
Feb 13 15:35:29.885736 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Feb 13 15:35:29.885743 kernel: landlock: Up and running.
Feb 13 15:35:29.885751 kernel: SELinux: Initializing.
Feb 13 15:35:29.885758 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 13 15:35:29.885765 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 13 15:35:29.885772 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Feb 13 15:35:29.885780 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Feb 13 15:35:29.885787 kernel: rcu: Hierarchical SRCU implementation.
Feb 13 15:35:29.885794 kernel: rcu: Max phase no-delay instances is 400.
Feb 13 15:35:29.885802 kernel: Platform MSI: ITS@0x8080000 domain created
Feb 13 15:35:29.885809 kernel: PCI/MSI: ITS@0x8080000 domain created
Feb 13 15:35:29.885816 kernel: Remapping and enabling EFI services.
Feb 13 15:35:29.885823 kernel: smp: Bringing up secondary CPUs ...
Feb 13 15:35:29.885831 kernel: Detected PIPT I-cache on CPU1
Feb 13 15:35:29.885840 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Feb 13 15:35:29.885848 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000100100000
Feb 13 15:35:29.885855 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 13 15:35:29.885862 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Feb 13 15:35:29.885869 kernel: smp: Brought up 1 node, 2 CPUs
Feb 13 15:35:29.885877 kernel: SMP: Total of 2 processors activated.
Feb 13 15:35:29.885884 kernel: CPU features: detected: 32-bit EL0 Support
Feb 13 15:35:29.885897 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Feb 13 15:35:29.885906 kernel: CPU features: detected: Common not Private translations
Feb 13 15:35:29.885913 kernel: CPU features: detected: CRC32 instructions
Feb 13 15:35:29.885921 kernel: CPU features: detected: Enhanced Virtualization Traps
Feb 13 15:35:29.885928 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Feb 13 15:35:29.885938 kernel: CPU features: detected: LSE atomic instructions
Feb 13 15:35:29.885945 kernel: CPU features: detected: Privileged Access Never
Feb 13 15:35:29.885954 kernel: CPU features: detected: RAS Extension Support
Feb 13 15:35:29.885961 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Feb 13 15:35:29.885969 kernel: CPU: All CPU(s) started at EL1
Feb 13 15:35:29.885976 kernel: alternatives: applying system-wide alternatives
Feb 13 15:35:29.885983 kernel: devtmpfs: initialized
Feb 13 15:35:29.885991 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Feb 13 15:35:29.885999 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Feb 13 15:35:29.886008 kernel: pinctrl core: initialized pinctrl subsystem
Feb 13 15:35:29.886015 kernel: SMBIOS 3.0.0 present.
Feb 13 15:35:29.886022 kernel: DMI: Hetzner vServer/KVM Virtual Machine, BIOS 20171111 11/11/2017
Feb 13 15:35:29.886030 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Feb 13 15:35:29.886037 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Feb 13 15:35:29.886045 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Feb 13 15:35:29.886055 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Feb 13 15:35:29.886062 kernel: audit: initializing netlink subsys (disabled)
Feb 13 15:35:29.886070 kernel: audit: type=2000 audit(0.012:1): state=initialized audit_enabled=0 res=1
Feb 13 15:35:29.886079 kernel: thermal_sys: Registered thermal governor 'step_wise'
Feb 13 15:35:29.886086 kernel: cpuidle: using governor menu
Feb 13 15:35:29.886093 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Feb 13 15:35:29.886101 kernel: ASID allocator initialised with 32768 entries
Feb 13 15:35:29.886108 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Feb 13 15:35:29.886116 kernel: Serial: AMBA PL011 UART driver
Feb 13 15:35:29.886124 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Feb 13 15:35:29.886131 kernel: Modules: 0 pages in range for non-PLT usage
Feb 13 15:35:29.886152 kernel: Modules: 508960 pages in range for PLT usage
Feb 13 15:35:29.886161 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Feb 13 15:35:29.886169 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Feb 13 15:35:29.886176 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Feb 13 15:35:29.886184 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Feb 13 15:35:29.886191 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Feb 13 15:35:29.886201 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Feb 13 15:35:29.886209 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Feb 13 15:35:29.886216 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Feb 13 15:35:29.886223 kernel: ACPI: Added _OSI(Module Device)
Feb 13 15:35:29.886231 kernel: ACPI: Added _OSI(Processor Device)
Feb 13 15:35:29.886240 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Feb 13 15:35:29.886247 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Feb 13 15:35:29.886256 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Feb 13 15:35:29.886263 kernel: ACPI: Interpreter enabled
Feb 13 15:35:29.886271 kernel: ACPI: Using GIC for interrupt routing
Feb 13 15:35:29.886278 kernel: ACPI: MCFG table detected, 1 entries
Feb 13 15:35:29.886285 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Feb 13 15:35:29.886293 kernel: printk: console [ttyAMA0] enabled
Feb 13 15:35:29.886300 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Feb 13 15:35:29.886483 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Feb 13 15:35:29.886562 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Feb 13 15:35:29.886630 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Feb 13 15:35:29.886691 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Feb 13 15:35:29.886759 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Feb 13 15:35:29.886768 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Feb 13 15:35:29.886776 kernel: PCI host bridge to bus 0000:00
Feb 13 15:35:29.886855 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Feb 13 15:35:29.886918 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Feb 13 15:35:29.886977 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Feb 13 15:35:29.887037 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Feb 13 15:35:29.887118 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Feb 13 15:35:29.889306 kernel: pci 0000:00:01.0: [1af4:1050] type 00 class 0x038000
Feb 13 15:35:29.889467 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x11289000-0x11289fff]
Feb 13 15:35:29.889539 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000600000-0x8000603fff 64bit pref]
Feb 13 15:35:29.889624 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400
Feb 13 15:35:29.889693 kernel: pci 0000:00:02.0: reg 0x10: [mem 0x11288000-0x11288fff]
Feb 13 15:35:29.889772 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400
Feb 13 15:35:29.889838 kernel: pci 0000:00:02.1: reg 0x10: [mem 0x11287000-0x11287fff]
Feb 13 15:35:29.889924 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400
Feb 13 15:35:29.890001 kernel: pci 0000:00:02.2: reg 0x10: [mem 0x11286000-0x11286fff]
Feb 13 15:35:29.890091 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400
Feb 13 15:35:29.891259 kernel: pci 0000:00:02.3: reg 0x10: [mem 0x11285000-0x11285fff]
Feb 13 15:35:29.891402 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400
Feb 13 15:35:29.891485 kernel: pci 0000:00:02.4: reg 0x10: [mem 0x11284000-0x11284fff]
Feb 13 15:35:29.891582 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400
Feb 13 15:35:29.891663 kernel: pci 0000:00:02.5: reg 0x10: [mem 0x11283000-0x11283fff]
Feb 13 15:35:29.891751 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400
Feb 13 15:35:29.891830 kernel: pci 0000:00:02.6: reg 0x10: [mem 0x11282000-0x11282fff]
Feb 13 15:35:29.891900 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400
Feb 13 15:35:29.891972 kernel: pci 0000:00:02.7: reg 0x10: [mem 0x11281000-0x11281fff]
Feb 13 15:35:29.892045 kernel: pci 0000:00:03.0: [1b36:000c] type 01 class 0x060400
Feb 13 15:35:29.892122 kernel: pci 0000:00:03.0: reg 0x10: [mem 0x11280000-0x11280fff]
Feb 13 15:35:29.893361 kernel: pci 0000:00:04.0: [1b36:0002] type 00 class 0x070002
Feb 13 15:35:29.893461 kernel: pci 0000:00:04.0: reg 0x10: [io 0x0000-0x0007]
Feb 13 15:35:29.893555 kernel: pci 0000:01:00.0: [1af4:1041] type 00 class 0x020000
Feb 13 15:35:29.893639 kernel: pci 0000:01:00.0: reg 0x14: [mem 0x11000000-0x11000fff]
Feb 13 15:35:29.893731 kernel: pci 0000:01:00.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Feb 13 15:35:29.893820 kernel: pci 0000:01:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref]
Feb 13 15:35:29.893917 kernel: pci 0000:02:00.0: [1b36:000d] type 00 class 0x0c0330
Feb 13 15:35:29.894002 kernel: pci 0000:02:00.0: reg 0x10: [mem 0x10e00000-0x10e03fff 64bit]
Feb 13 15:35:29.894094 kernel: pci 0000:03:00.0: [1af4:1043] type 00 class 0x078000
Feb 13 15:35:29.895571 kernel: pci 0000:03:00.0: reg 0x14: [mem 0x10c00000-0x10c00fff]
Feb 13 15:35:29.895663 kernel: pci 0000:03:00.0: reg 0x20: [mem 0x8000100000-0x8000103fff 64bit pref]
Feb 13 15:35:29.895746 kernel: pci 0000:04:00.0: [1af4:1045] type 00 class 0x00ff00
Feb 13 15:35:29.895832 kernel: pci 0000:04:00.0: reg 0x20: [mem 0x8000200000-0x8000203fff 64bit pref]
Feb 13 15:35:29.895907 kernel: pci 0000:05:00.0: [1af4:1044] type 00 class 0x00ff00
Feb 13 15:35:29.895980 kernel: pci 0000:05:00.0: reg 0x14: [mem 0x10800000-0x10800fff]
Feb 13 15:35:29.896049 kernel: pci 0000:05:00.0: reg 0x20: [mem 0x8000300000-0x8000303fff 64bit pref]
Feb 13 15:35:29.896164 kernel: pci 0000:06:00.0: [1af4:1048] type 00 class 0x010000
Feb 13 15:35:29.896255 kernel: pci 0000:06:00.0: reg 0x14: [mem 0x10600000-0x10600fff]
Feb 13 15:35:29.896350 kernel: pci 0000:06:00.0: reg 0x20: [mem 0x8000400000-0x8000403fff 64bit pref]
Feb 13 15:35:29.896436 kernel: pci 0000:07:00.0: [1af4:1041] type 00 class 0x020000
Feb 13 15:35:29.896507 kernel: pci 0000:07:00.0: reg 0x14: [mem 0x10400000-0x10400fff]
Feb 13 15:35:29.896580 kernel: pci 0000:07:00.0: reg 0x20: [mem 0x8000500000-0x8000503fff 64bit pref]
Feb 13 15:35:29.896652 kernel: pci 0000:07:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref]
Feb 13 15:35:29.896725 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000
Feb 13 15:35:29.896801 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 01] add_size 100000 add_align 100000
Feb 13 15:35:29.896868 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff] to [bus 01] add_size 100000 add_align 100000
Feb 13 15:35:29.896943 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000
Feb 13 15:35:29.897012 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000
Feb 13 15:35:29.897093 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x001fffff] to [bus 02] add_size 100000 add_align 100000
Feb 13 15:35:29.898240 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000
Feb 13 15:35:29.898357 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 03] add_size 100000 add_align 100000
Feb 13 15:35:29.898435 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff] to [bus 03] add_size 100000 add_align 100000
Feb 13 15:35:29.898511 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000
Feb 13 15:35:29.898577 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 04] add_size 100000 add_align 100000
Feb 13 15:35:29.898643 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000
Feb 13 15:35:29.898715 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000
Feb 13 15:35:29.898779 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 05] add_size 100000 add_align 100000
Feb 13 15:35:29.898850 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x001fffff] to [bus 05] add_size 100000 add_align 100000
Feb 13 15:35:29.898920 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000
Feb 13 15:35:29.898994 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 06] add_size 100000 add_align 100000
Feb 13 15:35:29.899058 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff] to [bus 06] add_size 100000 add_align 100000
Feb 13 15:35:29.899279 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
Feb 13 15:35:29.899426 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 07] add_size 100000 add_align 100000
Feb 13 15:35:29.899499 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff] to [bus 07] add_size 100000 add_align 100000
Feb 13 15:35:29.899576 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
Feb 13 15:35:29.899646 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 08] add_size 200000 add_align 100000
Feb 13 15:35:29.899720 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff] to [bus 08] add_size 200000 add_align 100000
Feb 13 15:35:29.899790 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
Feb 13 15:35:29.899864 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 09] add_size 200000 add_align 100000
Feb 13 15:35:29.899928 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 09] add_size 200000 add_align 100000
Feb 13 15:35:29.900003 kernel: pci 0000:00:02.0: BAR 14: assigned [mem 0x10000000-0x101fffff]
Feb 13 15:35:29.900069 kernel: pci 0000:00:02.0: BAR 15: assigned [mem 0x8000000000-0x80001fffff 64bit pref]
Feb 13 15:35:29.901026 kernel: pci 0000:00:02.1: BAR 14: assigned [mem 0x10200000-0x103fffff]
Feb 13 15:35:29.901247 kernel: pci 0000:00:02.1: BAR 15: assigned [mem 0x8000200000-0x80003fffff 64bit pref]
Feb 13 15:35:29.901382 kernel: pci 0000:00:02.2: BAR 14: assigned [mem 0x10400000-0x105fffff]
Feb 13 15:35:29.901465 kernel: pci 0000:00:02.2: BAR 15: assigned [mem 0x8000400000-0x80005fffff 64bit pref]
Feb 13 15:35:29.901533 kernel: pci 0000:00:02.3: BAR 14: assigned [mem 0x10600000-0x107fffff]
Feb 13 15:35:29.901609 kernel: pci 0000:00:02.3: BAR 15: assigned [mem 0x8000600000-0x80007fffff 64bit pref]
Feb 13 15:35:29.901677 kernel: pci 0000:00:02.4: BAR 14: assigned [mem 0x10800000-0x109fffff]
Feb 13 15:35:29.901749 kernel: pci 0000:00:02.4: BAR 15: assigned [mem 0x8000800000-0x80009fffff 64bit pref]
Feb 13 15:35:29.901824 kernel: pci 0000:00:02.5: BAR 14: assigned [mem 0x10a00000-0x10bfffff]
Feb 13 15:35:29.901888 kernel: pci 0000:00:02.5: BAR 15: assigned [mem 0x8000a00000-0x8000bfffff 64bit pref]
Feb 13 15:35:29.901963 kernel: pci 0000:00:02.6: BAR 14: assigned [mem 0x10c00000-0x10dfffff]
Feb 13 15:35:29.902031 kernel: pci 0000:00:02.6: BAR 15: assigned [mem 0x8000c00000-0x8000dfffff 64bit pref]
Feb 13 15:35:29.902101 kernel: pci 0000:00:02.7: BAR 14: assigned [mem 0x10e00000-0x10ffffff]
Feb 13 15:35:29.902189 kernel: pci 0000:00:02.7: BAR 15: assigned [mem 0x8000e00000-0x8000ffffff 64bit pref]
Feb 13 15:35:29.902308 kernel: pci 0000:00:03.0: BAR 14: assigned [mem 0x11000000-0x111fffff]
Feb 13 15:35:29.902406 kernel: pci 0000:00:03.0: BAR 15: assigned [mem 0x8001000000-0x80011fffff 64bit pref]
Feb 13 15:35:29.902480 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8001200000-0x8001203fff 64bit pref]
Feb 13 15:35:29.902552 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x11200000-0x11200fff]
Feb 13 15:35:29.902618 kernel: pci 0000:00:02.0: BAR 0: assigned [mem 0x11201000-0x11201fff]
Feb 13 15:35:29.902689 kernel: pci 0000:00:02.0: BAR 13: assigned [io 0x1000-0x1fff]
Feb 13 15:35:29.902755 kernel: pci 0000:00:02.1: BAR 0: assigned [mem 0x11202000-0x11202fff]
Feb 13 15:35:29.902826 kernel: pci 0000:00:02.1: BAR 13: assigned [io 0x2000-0x2fff]
Feb 13 15:35:29.902892 kernel: pci 0000:00:02.2: BAR 0: assigned [mem 0x11203000-0x11203fff]
Feb 13 15:35:29.902964 kernel: pci 0000:00:02.2: BAR 13: assigned [io 0x3000-0x3fff]
Feb 13 15:35:29.903032 kernel: pci 0000:00:02.3: BAR 0: assigned [mem 0x11204000-0x11204fff]
Feb 13 15:35:29.903099 kernel: pci 0000:00:02.3: BAR 13: assigned [io 0x4000-0x4fff]
Feb 13 15:35:29.903181 kernel: pci 0000:00:02.4: BAR 0: assigned [mem 0x11205000-0x11205fff]
Feb 13 15:35:29.903251 kernel: pci 0000:00:02.4: BAR 13: assigned [io 0x5000-0x5fff]
Feb 13 15:35:29.903357 kernel: pci 0000:00:02.5: BAR 0: assigned [mem 0x11206000-0x11206fff]
Feb 13 15:35:29.903434 kernel: pci 0000:00:02.5: BAR 13: assigned [io 0x6000-0x6fff]
Feb 13 15:35:29.903509 kernel: pci 0000:00:02.6: BAR 0: assigned [mem 0x11207000-0x11207fff]
Feb 13 15:35:29.903580 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x7000-0x7fff]
Feb 13 15:35:29.903654 kernel: pci 0000:00:02.7: BAR 0: assigned [mem 0x11208000-0x11208fff]
Feb 13 15:35:29.903736 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x8000-0x8fff]
Feb 13 15:35:29.903812 kernel: pci 0000:00:03.0: BAR 0: assigned [mem 0x11209000-0x11209fff]
Feb 13 15:35:29.903876 kernel: pci 0000:00:03.0: BAR 13: assigned [io 0x9000-0x9fff]
Feb 13 15:35:29.903952 kernel: pci 0000:00:04.0: BAR 0: assigned [io 0xa000-0xa007]
Feb 13 15:35:29.904028 kernel: pci 0000:01:00.0: BAR 6: assigned [mem 0x10000000-0x1007ffff pref]
Feb 13 15:35:29.904101 kernel: pci 0000:01:00.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Feb 13 15:35:29.904227 kernel: pci 0000:01:00.0: BAR 1: assigned [mem 0x10080000-0x10080fff]
Feb 13 15:35:29.904303 kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
Feb 13 15:35:29.904389 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff]
Feb 13 15:35:29.904462 kernel: pci 0000:00:02.0: bridge window [mem 0x10000000-0x101fffff]
Feb 13 15:35:29.904526 kernel: pci 0000:00:02.0: bridge window [mem 0x8000000000-0x80001fffff 64bit pref]
Feb 13 15:35:29.904603 kernel: pci 0000:02:00.0: BAR 0: assigned [mem 0x10200000-0x10203fff 64bit]
Feb 13 15:35:29.904675 kernel: pci 0000:00:02.1: PCI bridge to [bus 02]
Feb 13 15:35:29.904747 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff]
Feb 13 15:35:29.904811 kernel: pci 0000:00:02.1: bridge window [mem 0x10200000-0x103fffff]
Feb 13 15:35:29.904881 kernel: pci 0000:00:02.1: bridge window [mem 0x8000200000-0x80003fffff 64bit pref]
Feb 13 15:35:29.904954 kernel: pci 0000:03:00.0: BAR 4: assigned [mem 0x8000400000-0x8000403fff 64bit pref]
Feb 13 15:35:29.905030 kernel: pci 0000:03:00.0: BAR 1: assigned [mem 0x10400000-0x10400fff]
Feb 13 15:35:29.905101 kernel: pci 0000:00:02.2: PCI bridge to [bus 03]
Feb 13 15:35:29.905214 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff]
Feb 13 15:35:29.905289 kernel: pci 0000:00:02.2: bridge window [mem 0x10400000-0x105fffff]
Feb 13 15:35:29.905372 kernel: pci 0000:00:02.2: bridge window [mem 0x8000400000-0x80005fffff 64bit pref]
Feb 13 15:35:29.905448 kernel: pci 0000:04:00.0: BAR 4: assigned [mem 0x8000600000-0x8000603fff 64bit pref]
Feb 13 15:35:29.905521 kernel: pci 0000:00:02.3: PCI bridge to [bus 04]
Feb 13 15:35:29.905586 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff]
Feb 13 15:35:29.905659 kernel: pci 0000:00:02.3: bridge window [mem 0x10600000-0x107fffff]
Feb 13 15:35:29.905729 kernel: pci 0000:00:02.3: bridge window [mem 0x8000600000-0x80007fffff 64bit pref]
Feb 13 15:35:29.905802 kernel: pci 0000:05:00.0: BAR 4: assigned [mem 0x8000800000-0x8000803fff 64bit pref]
Feb 13 15:35:29.905874 kernel: pci 0000:05:00.0: BAR 1: assigned [mem 0x10800000-0x10800fff]
Feb 13 15:35:29.905942 kernel: pci 0000:00:02.4: PCI bridge to [bus 05]
Feb 13 15:35:29.906009 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff]
Feb 13 15:35:29.906077 kernel: pci 0000:00:02.4: bridge window [mem 0x10800000-0x109fffff]
Feb 13 15:35:29.906222 kernel: pci 0000:00:02.4: bridge window [mem 0x8000800000-0x80009fffff 64bit pref]
Feb 13 15:35:29.906306 kernel: pci 0000:06:00.0: BAR 4: assigned [mem 0x8000a00000-0x8000a03fff 64bit pref]
Feb 13 15:35:29.906445 kernel: pci 0000:06:00.0: BAR 1: assigned [mem 0x10a00000-0x10a00fff]
Feb 13 15:35:29.906517 kernel: pci 0000:00:02.5: PCI bridge to [bus 06]
Feb 13 15:35:29.906582 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff]
Feb 13 15:35:29.906649 kernel: pci 0000:00:02.5: bridge window [mem 0x10a00000-0x10bfffff]
Feb 13 15:35:29.906733 kernel: pci 0000:00:02.5: bridge window [mem 0x8000a00000-0x8000bfffff 64bit pref]
Feb 13 15:35:29.906828 kernel: pci 0000:07:00.0: BAR 6: assigned [mem 0x10c00000-0x10c7ffff pref]
Feb 13 15:35:29.906895 kernel: pci 0000:07:00.0: BAR 4: assigned [mem 0x8000c00000-0x8000c03fff 64bit pref]
Feb 13 15:35:29.906963 kernel: pci 0000:07:00.0: BAR 1: assigned [mem 0x10c80000-0x10c80fff]
Feb 13 15:35:29.907042 kernel: pci 0000:00:02.6: PCI bridge to [bus 07]
Feb 13 15:35:29.907105 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff]
Feb 13 15:35:29.907235 kernel: pci 0000:00:02.6: bridge window [mem 0x10c00000-0x10dfffff]
Feb 13 15:35:29.907306 kernel: pci 0000:00:02.6: bridge window [mem 0x8000c00000-0x8000dfffff 64bit pref]
Feb 13 15:35:29.907386 kernel: pci 0000:00:02.7: PCI bridge to [bus 08]
Feb 13 15:35:29.907457 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff]
Feb 13 15:35:29.907520 kernel: pci 0000:00:02.7: bridge window [mem 0x10e00000-0x10ffffff]
Feb 13 15:35:29.907592 kernel: pci 0000:00:02.7: bridge window [mem 0x8000e00000-0x8000ffffff 64bit pref]
Feb 13 15:35:29.907656 kernel: pci 0000:00:03.0: PCI bridge to [bus 09]
Feb 13 15:35:29.907724 kernel: pci 0000:00:03.0: bridge window [io 0x9000-0x9fff]
Feb 13 15:35:29.907788 kernel: pci 0000:00:03.0: bridge window [mem 0x11000000-0x111fffff]
Feb 13 15:35:29.907852 kernel: pci 0000:00:03.0: bridge window [mem 0x8001000000-0x80011fffff 64bit pref]
Feb 13 15:35:29.907922 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Feb 13 15:35:29.907979 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Feb 13 15:35:29.908043 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Feb 13 15:35:29.908122 kernel: pci_bus 0000:01: resource 0 [io 0x1000-0x1fff]
Feb 13 15:35:29.908241 kernel: pci_bus 0000:01: resource 1 [mem 0x10000000-0x101fffff]
Feb 13 15:35:29.908309 kernel: pci_bus 0000:01: resource 2 [mem 0x8000000000-0x80001fffff 64bit pref]
Feb 13 15:35:29.908416 kernel: pci_bus 0000:02: resource 0 [io 0x2000-0x2fff]
Feb 13 15:35:29.908479 kernel: pci_bus 0000:02: resource 1 [mem 0x10200000-0x103fffff]
Feb 13 15:35:29.908545 kernel: pci_bus 0000:02: resource 2 [mem 0x8000200000-0x80003fffff 64bit pref]
Feb 13 15:35:29.908612 kernel: pci_bus 0000:03: resource 0 [io 0x3000-0x3fff]
Feb 13 15:35:29.908687 kernel: pci_bus 0000:03: resource 1 [mem 0x10400000-0x105fffff]
Feb 13 15:35:29.908758 kernel: pci_bus 0000:03: resource 2 [mem 0x8000400000-0x80005fffff 64bit pref]
Feb 13 15:35:29.908832 kernel: pci_bus 0000:04: resource 0 [io 0x4000-0x4fff]
Feb 13 15:35:29.908892 kernel: pci_bus 0000:04: resource 1 [mem 0x10600000-0x107fffff]
Feb 13 15:35:29.908959 kernel: pci_bus 0000:04: resource 2 [mem 0x8000600000-0x80007fffff 64bit pref]
Feb 13 15:35:29.909025 kernel: pci_bus 0000:05: resource 0 [io 0x5000-0x5fff]
Feb 13 15:35:29.909095 kernel: pci_bus 0000:05: resource 1 [mem 0x10800000-0x109fffff]
Feb 13 15:35:29.909206 kernel: pci_bus 0000:05: resource 2 [mem 0x8000800000-0x80009fffff 64bit pref]
Feb 13 15:35:29.909279 kernel: pci_bus 0000:06: resource 0 [io 0x6000-0x6fff]
Feb 13 15:35:29.909371 kernel: pci_bus 0000:06: resource 1 [mem 0x10a00000-0x10bfffff]
Feb 13 15:35:29.909438 kernel: pci_bus 0000:06: resource 2 [mem 0x8000a00000-0x8000bfffff 64bit pref]
Feb 13 15:35:29.909511 kernel: pci_bus 0000:07: resource 0 [io 0x7000-0x7fff]
Feb 13 15:35:29.909573 kernel: pci_bus 0000:07: resource 1 [mem 0x10c00000-0x10dfffff]
Feb 13 15:35:29.909637 kernel: pci_bus 0000:07: resource 2 [mem 0x8000c00000-0x8000dfffff 64bit pref]
Feb 13 15:35:29.909707 kernel: pci_bus 0000:08: resource 0 [io 0x8000-0x8fff]
Feb 13 15:35:29.909768 kernel: pci_bus 0000:08: resource 1 [mem 0x10e00000-0x10ffffff]
Feb 13 15:35:29.909834 kernel: pci_bus 0000:08: resource 2 [mem 0x8000e00000-0x8000ffffff 64bit pref]
Feb 13 15:35:29.909904 kernel: pci_bus 0000:09: resource 0 [io 0x9000-0x9fff]
Feb 13 15:35:29.909972 kernel: pci_bus 0000:09: resource 1 [mem 0x11000000-0x111fffff]
Feb 13 15:35:29.910034 kernel: pci_bus 0000:09: resource 2 [mem 0x8001000000-0x80011fffff 64bit pref]
Feb 13 15:35:29.910044 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Feb 13 15:35:29.910052 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Feb 13 15:35:29.910063 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Feb 13 15:35:29.910073 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Feb 13 15:35:29.910081 kernel: iommu: Default domain type: Translated
Feb 13 15:35:29.910092 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Feb 13 15:35:29.910100 kernel: efivars: Registered efivars operations
Feb 13 15:35:29.910108 kernel: vgaarb: loaded
Feb 13 15:35:29.910116 kernel: clocksource: Switched to clocksource arch_sys_counter
Feb 13 15:35:29.910123 kernel: VFS: Disk quotas dquot_6.6.0
Feb 13 15:35:29.910156 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Feb 13 15:35:29.910165 kernel: pnp: PnP ACPI init
Feb 13 15:35:29.910259 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Feb 13 15:35:29.910274 kernel: pnp: PnP ACPI: found 1 devices
Feb 13 15:35:29.910282 kernel: NET: Registered PF_INET protocol family
Feb 13 15:35:29.910290 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Feb 13 15:35:29.910298 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Feb 13 15:35:29.910306 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Feb 13 15:35:29.910314 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Feb 13 15:35:29.910355 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Feb 13 15:35:29.910363 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Feb 13 15:35:29.910372 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 13 15:35:29.910387 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 13 15:35:29.910396 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Feb 13 15:35:29.910482 kernel: pci 0000:02:00.0: enabling device (0000 -> 0002)
Feb 13 15:35:29.910495 kernel: PCI: CLS 0 bytes, default 64
Feb 13 15:35:29.910503 kernel: kvm [1]: HYP mode not available
Feb 13 15:35:29.910511 kernel: Initialise system trusted keyrings
Feb 13 15:35:29.910522 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Feb 13 15:35:29.910531 kernel: Key type asymmetric registered
Feb 13 15:35:29.910539 kernel: Asymmetric key parser 'x509' registered
Feb 13 15:35:29.910549 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Feb 13 15:35:29.910557 kernel: io scheduler mq-deadline registered
Feb 13 15:35:29.910565 kernel: io scheduler kyber registered
Feb 13 15:35:29.910573 kernel: io scheduler bfq registered
Feb 13 15:35:29.910581 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37
Feb 13 15:35:29.910649 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 50
Feb 13 15:35:29.910723 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 50
Feb 13 15:35:29.910788 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Feb 13 15:35:29.910867 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 51
Feb 13 15:35:29.910934 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 51
Feb 13 15:35:29.911000 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis-
LLActRep+ Feb 13 15:35:29.911075 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 52 Feb 13 15:35:29.911162 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 52 Feb 13 15:35:29.911235 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Feb 13 15:35:29.911313 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 53 Feb 13 15:35:29.911413 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 53 Feb 13 15:35:29.911480 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Feb 13 15:35:29.911554 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 54 Feb 13 15:35:29.911620 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 54 Feb 13 15:35:29.911691 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Feb 13 15:35:29.911763 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 55 Feb 13 15:35:29.911834 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 55 Feb 13 15:35:29.911906 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Feb 13 15:35:29.911979 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 56 Feb 13 15:35:29.912055 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 56 Feb 13 15:35:29.912127 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Feb 13 15:35:29.912246 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 57 Feb 13 15:35:29.912372 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 57 Feb 13 15:35:29.912452 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Feb 13 
15:35:29.912463 kernel: ACPI: \_SB_.PCI0.GSI3: Enabled at IRQ 38 Feb 13 15:35:29.912535 kernel: pcieport 0000:00:03.0: PME: Signaling with IRQ 58 Feb 13 15:35:29.912603 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 58 Feb 13 15:35:29.912676 kernel: pcieport 0000:00:03.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Feb 13 15:35:29.912689 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Feb 13 15:35:29.912697 kernel: ACPI: button: Power Button [PWRB] Feb 13 15:35:29.912706 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Feb 13 15:35:29.912776 kernel: virtio-pci 0000:04:00.0: enabling device (0000 -> 0002) Feb 13 15:35:29.912856 kernel: virtio-pci 0000:07:00.0: enabling device (0000 -> 0002) Feb 13 15:35:29.912868 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Feb 13 15:35:29.912876 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35 Feb 13 15:35:29.912946 kernel: serial 0000:00:04.0: enabling device (0000 -> 0001) Feb 13 15:35:29.912957 kernel: 0000:00:04.0: ttyS0 at I/O 0xa000 (irq = 45, base_baud = 115200) is a 16550A Feb 13 15:35:29.912965 kernel: thunder_xcv, ver 1.0 Feb 13 15:35:29.912977 kernel: thunder_bgx, ver 1.0 Feb 13 15:35:29.912987 kernel: nicpf, ver 1.0 Feb 13 15:35:29.912995 kernel: nicvf, ver 1.0 Feb 13 15:35:29.913076 kernel: rtc-efi rtc-efi.0: registered as rtc0 Feb 13 15:35:29.913179 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-02-13T15:35:29 UTC (1739460929) Feb 13 15:35:29.913194 kernel: hid: raw HID events driver (C) Jiri Kosina Feb 13 15:35:29.913202 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available Feb 13 15:35:29.913210 kernel: watchdog: Delayed init of the lockup detector failed: -19 Feb 13 15:35:29.913219 kernel: watchdog: Hard watchdog permanently disabled Feb 13 15:35:29.913229 kernel: NET: Registered PF_INET6 protocol family Feb 13 15:35:29.913238 kernel: Segment 
Routing with IPv6 Feb 13 15:35:29.913246 kernel: In-situ OAM (IOAM) with IPv6 Feb 13 15:35:29.913254 kernel: NET: Registered PF_PACKET protocol family Feb 13 15:35:29.913262 kernel: Key type dns_resolver registered Feb 13 15:35:29.913269 kernel: registered taskstats version 1 Feb 13 15:35:29.913279 kernel: Loading compiled-in X.509 certificates Feb 13 15:35:29.913287 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.71-flatcar: 4531cdb19689f90a81e7969ac7d8e25a95254f51' Feb 13 15:35:29.913294 kernel: Key type .fscrypt registered Feb 13 15:35:29.913302 kernel: Key type fscrypt-provisioning registered Feb 13 15:35:29.913309 kernel: ima: No TPM chip found, activating TPM-bypass! Feb 13 15:35:29.913328 kernel: ima: Allocated hash algorithm: sha1 Feb 13 15:35:29.913336 kernel: ima: No architecture policies found Feb 13 15:35:29.913344 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Feb 13 15:35:29.913354 kernel: clk: Disabling unused clocks Feb 13 15:35:29.913362 kernel: Freeing unused kernel memory: 39680K Feb 13 15:35:29.913370 kernel: Run /init as init process Feb 13 15:35:29.913377 kernel: with arguments: Feb 13 15:35:29.913385 kernel: /init Feb 13 15:35:29.913392 kernel: with environment: Feb 13 15:35:29.913401 kernel: HOME=/ Feb 13 15:35:29.913411 kernel: TERM=linux Feb 13 15:35:29.913420 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Feb 13 15:35:29.913429 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Feb 13 15:35:29.913441 systemd[1]: Detected virtualization kvm. Feb 13 15:35:29.913449 systemd[1]: Detected architecture arm64. Feb 13 15:35:29.913457 systemd[1]: Running in initrd. 
Feb 13 15:35:29.913465 systemd[1]: No hostname configured, using default hostname.
Feb 13 15:35:29.913473 systemd[1]: Hostname set to .
Feb 13 15:35:29.913481 systemd[1]: Initializing machine ID from VM UUID.
Feb 13 15:35:29.913491 systemd[1]: Queued start job for default target initrd.target.
Feb 13 15:35:29.913499 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 15:35:29.913508 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 15:35:29.913516 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Feb 13 15:35:29.913524 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Feb 13 15:35:29.913533 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Feb 13 15:35:29.913544 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Feb 13 15:35:29.913556 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Feb 13 15:35:29.913567 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Feb 13 15:35:29.913576 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 15:35:29.913585 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Feb 13 15:35:29.913593 systemd[1]: Reached target paths.target - Path Units.
Feb 13 15:35:29.913601 systemd[1]: Reached target slices.target - Slice Units.
Feb 13 15:35:29.913609 systemd[1]: Reached target swap.target - Swaps.
Feb 13 15:35:29.913618 systemd[1]: Reached target timers.target - Timer Units.
Feb 13 15:35:29.913626 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Feb 13 15:35:29.913636 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Feb 13 15:35:29.913645 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Feb 13 15:35:29.913653 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Feb 13 15:35:29.913661 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 15:35:29.913669 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Feb 13 15:35:29.913678 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 15:35:29.913690 systemd[1]: Reached target sockets.target - Socket Units.
Feb 13 15:35:29.913700 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Feb 13 15:35:29.913712 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Feb 13 15:35:29.913720 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Feb 13 15:35:29.913729 systemd[1]: Starting systemd-fsck-usr.service...
Feb 13 15:35:29.913737 systemd[1]: Starting systemd-journald.service - Journal Service...
Feb 13 15:35:29.913746 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Feb 13 15:35:29.913754 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 15:35:29.913788 systemd-journald[236]: Collecting audit messages is disabled.
Feb 13 15:35:29.913812 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Feb 13 15:35:29.913820 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 15:35:29.913828 systemd[1]: Finished systemd-fsck-usr.service.
Feb 13 15:35:29.913842 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Feb 13 15:35:29.913851 kernel: Bridge firewalling registered
Feb 13 15:35:29.913860 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Feb 13 15:35:29.913868 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Feb 13 15:35:29.913877 systemd-journald[236]: Journal started
Feb 13 15:35:29.913898 systemd-journald[236]: Runtime Journal (/run/log/journal/a884aaf25cb94d298b624b69100b9cc4) is 8.0M, max 76.6M, 68.6M free.
Feb 13 15:35:29.919370 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 15:35:29.876113 systemd-modules-load[237]: Inserted module 'overlay'
Feb 13 15:35:29.901506 systemd-modules-load[237]: Inserted module 'br_netfilter'
Feb 13 15:35:29.924305 systemd[1]: Started systemd-journald.service - Journal Service.
Feb 13 15:35:29.935598 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 15:35:29.940374 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Feb 13 15:35:29.941870 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Feb 13 15:35:29.944183 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Feb 13 15:35:29.958051 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Feb 13 15:35:29.971529 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 15:35:29.973222 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Feb 13 15:35:29.974622 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 15:35:29.975533 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 15:35:29.982451 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Feb 13 15:35:29.987474 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Feb 13 15:35:29.999188 dracut-cmdline[271]: dracut-dracut-053
Feb 13 15:35:30.002086 dracut-cmdline[271]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=07e9b8867aadd0b2e77ba5338d18cdd10706c658e0d745a78e129bcae9a0e4c6
Feb 13 15:35:30.024247 systemd-resolved[272]: Positive Trust Anchors:
Feb 13 15:35:30.024388 systemd-resolved[272]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 13 15:35:30.024422 systemd-resolved[272]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Feb 13 15:35:30.035277 systemd-resolved[272]: Defaulting to hostname 'linux'.
Feb 13 15:35:30.036401 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Feb 13 15:35:30.037067 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Feb 13 15:35:30.115188 kernel: SCSI subsystem initialized
Feb 13 15:35:30.120175 kernel: Loading iSCSI transport class v2.0-870.
Feb 13 15:35:30.127175 kernel: iscsi: registered transport (tcp)
Feb 13 15:35:30.141177 kernel: iscsi: registered transport (qla4xxx)
Feb 13 15:35:30.141243 kernel: QLogic iSCSI HBA Driver
Feb 13 15:35:30.190335 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Feb 13 15:35:30.197434 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Feb 13 15:35:30.218669 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Feb 13 15:35:30.218768 kernel: device-mapper: uevent: version 1.0.3
Feb 13 15:35:30.218800 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Feb 13 15:35:30.267210 kernel: raid6: neonx8 gen() 15669 MB/s
Feb 13 15:35:30.284212 kernel: raid6: neonx4 gen() 15556 MB/s
Feb 13 15:35:30.301191 kernel: raid6: neonx2 gen() 13154 MB/s
Feb 13 15:35:30.318220 kernel: raid6: neonx1 gen() 10435 MB/s
Feb 13 15:35:30.335222 kernel: raid6: int64x8 gen() 6919 MB/s
Feb 13 15:35:30.352188 kernel: raid6: int64x4 gen() 7319 MB/s
Feb 13 15:35:30.369298 kernel: raid6: int64x2 gen() 6105 MB/s
Feb 13 15:35:30.386194 kernel: raid6: int64x1 gen() 5022 MB/s
Feb 13 15:35:30.386272 kernel: raid6: using algorithm neonx8 gen() 15669 MB/s
Feb 13 15:35:30.403226 kernel: raid6: .... xor() 11846 MB/s, rmw enabled
Feb 13 15:35:30.403304 kernel: raid6: using neon recovery algorithm
Feb 13 15:35:30.408480 kernel: xor: measuring software checksum speed
Feb 13 15:35:30.408547 kernel: 8regs : 19783 MB/sec
Feb 13 15:35:30.408559 kernel: 32regs : 19641 MB/sec
Feb 13 15:35:30.409186 kernel: arm64_neon : 27061 MB/sec
Feb 13 15:35:30.409213 kernel: xor: using function: arm64_neon (27061 MB/sec)
Feb 13 15:35:30.461201 kernel: Btrfs loaded, zoned=no, fsverity=no
Feb 13 15:35:30.476556 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Feb 13 15:35:30.484538 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 15:35:30.509745 systemd-udevd[455]: Using default interface naming scheme 'v255'.
Feb 13 15:35:30.514083 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 15:35:30.526418 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Feb 13 15:35:30.540978 dracut-pre-trigger[463]: rd.md=0: removing MD RAID activation
Feb 13 15:35:30.578391 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Feb 13 15:35:30.586417 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Feb 13 15:35:30.637446 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 15:35:30.645741 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Feb 13 15:35:30.661827 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Feb 13 15:35:30.663670 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Feb 13 15:35:30.664510 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 15:35:30.666867 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Feb 13 15:35:30.673374 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Feb 13 15:35:30.699756 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Feb 13 15:35:30.741214 kernel: scsi host0: Virtio SCSI HBA
Feb 13 15:35:30.746120 kernel: scsi 0:0:0:0: CD-ROM QEMU QEMU CD-ROM 2.5+ PQ: 0 ANSI: 5
Feb 13 15:35:30.746178 kernel: scsi 0:0:0:1: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5
Feb 13 15:35:30.779668 kernel: ACPI: bus type USB registered
Feb 13 15:35:30.779729 kernel: usbcore: registered new interface driver usbfs
Feb 13 15:35:30.780696 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Feb 13 15:35:30.780841 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 15:35:30.787160 kernel: usbcore: registered new interface driver hub
Feb 13 15:35:30.785238 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 15:35:30.791381 kernel: usbcore: registered new device driver usb
Feb 13 15:35:30.786225 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 13 15:35:30.786427 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 15:35:30.787096 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 15:35:30.794467 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 15:35:30.799321 kernel: sr 0:0:0:0: Power-on or device reset occurred
Feb 13 15:35:30.807925 kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 16x/50x cd/rw xa/form2 cdda tray
Feb 13 15:35:30.808045 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Feb 13 15:35:30.808056 kernel: sr 0:0:0:0: Attached scsi CD-ROM sr0
Feb 13 15:35:30.820302 kernel: sd 0:0:0:1: Power-on or device reset occurred
Feb 13 15:35:30.831394 kernel: sd 0:0:0:1: [sda] 80003072 512-byte logical blocks: (41.0 GB/38.1 GiB)
Feb 13 15:35:30.831533 kernel: sd 0:0:0:1: [sda] Write Protect is off
Feb 13 15:35:30.831630 kernel: sd 0:0:0:1: [sda] Mode Sense: 63 00 00 08
Feb 13 15:35:30.831749 kernel: sd 0:0:0:1: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Feb 13 15:35:30.831831 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Feb 13 15:35:30.831841 kernel: GPT:17805311 != 80003071
Feb 13 15:35:30.831851 kernel: GPT:Alternate GPT header not at the end of the disk.
Feb 13 15:35:30.831865 kernel: GPT:17805311 != 80003071
Feb 13 15:35:30.831874 kernel: GPT: Use GNU Parted to correct GPT errors.
Feb 13 15:35:30.831883 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Feb 13 15:35:30.831893 kernel: sd 0:0:0:1: [sda] Attached SCSI disk Feb 13 15:35:30.827071 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 15:35:30.835302 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller Feb 13 15:35:30.841440 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 1 Feb 13 15:35:30.841556 kernel: xhci_hcd 0000:02:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010 Feb 13 15:35:30.841642 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller Feb 13 15:35:30.841721 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 2 Feb 13 15:35:30.841799 kernel: xhci_hcd 0000:02:00.0: Host supports USB 3.0 SuperSpeed Feb 13 15:35:30.841876 kernel: hub 1-0:1.0: USB hub found Feb 13 15:35:30.841986 kernel: hub 1-0:1.0: 4 ports detected Feb 13 15:35:30.842067 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. Feb 13 15:35:30.842182 kernel: hub 2-0:1.0: USB hub found Feb 13 15:35:30.842271 kernel: hub 2-0:1.0: 4 ports detected Feb 13 15:35:30.833623 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Feb 13 15:35:30.867243 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 15:35:30.894227 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by (udev-worker) (504) Feb 13 15:35:30.896197 kernel: BTRFS: device fsid 27ad543d-6fdb-4ace-b8f1-8f50b124bd06 devid 1 transid 41 /dev/sda3 scanned by (udev-worker) (520) Feb 13 15:35:30.901439 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM. Feb 13 15:35:30.918379 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT. Feb 13 15:35:30.924685 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. 
Feb 13 15:35:30.931002 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A. Feb 13 15:35:30.932084 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A. Feb 13 15:35:30.942404 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Feb 13 15:35:30.953435 disk-uuid[574]: Primary Header is updated. Feb 13 15:35:30.953435 disk-uuid[574]: Secondary Entries is updated. Feb 13 15:35:30.953435 disk-uuid[574]: Secondary Header is updated. Feb 13 15:35:30.962296 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Feb 13 15:35:31.081260 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd Feb 13 15:35:31.324175 kernel: usb 1-2: new high-speed USB device number 3 using xhci_hcd Feb 13 15:35:31.463766 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input1 Feb 13 15:35:31.463826 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:02:00.0-1/input0 Feb 13 15:35:31.466215 kernel: input: QEMU QEMU USB Keyboard as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-2/1-2:1.0/0003:0627:0001.0002/input/input2 Feb 13 15:35:31.519835 kernel: hid-generic 0003:0627:0001.0002: input,hidraw1: USB HID v1.11 Keyboard [QEMU QEMU USB Keyboard] on usb-0000:02:00.0-2/input0 Feb 13 15:35:31.520599 kernel: usbcore: registered new interface driver usbhid Feb 13 15:35:31.520623 kernel: usbhid: USB HID core driver Feb 13 15:35:31.975459 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Feb 13 15:35:31.976277 disk-uuid[576]: The operation has completed successfully. Feb 13 15:35:32.024320 systemd[1]: disk-uuid.service: Deactivated successfully. Feb 13 15:35:32.024420 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. 
Feb 13 15:35:32.044360 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Feb 13 15:35:32.058022 sh[591]: Success Feb 13 15:35:32.071270 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Feb 13 15:35:32.130319 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Feb 13 15:35:32.139625 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Feb 13 15:35:32.142406 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Feb 13 15:35:32.177618 kernel: BTRFS info (device dm-0): first mount of filesystem 27ad543d-6fdb-4ace-b8f1-8f50b124bd06 Feb 13 15:35:32.177683 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Feb 13 15:35:32.178279 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Feb 13 15:35:32.179149 kernel: BTRFS info (device dm-0): disabling log replay at mount time Feb 13 15:35:32.179350 kernel: BTRFS info (device dm-0): using free space tree Feb 13 15:35:32.189197 kernel: BTRFS info (device dm-0): enabling ssd optimizations Feb 13 15:35:32.191797 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Feb 13 15:35:32.193695 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Feb 13 15:35:32.200521 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Feb 13 15:35:32.205490 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... 
Feb 13 15:35:32.217456 kernel: BTRFS info (device sda6): first mount of filesystem e9f4fc6e-82c5-478d-829e-7273b573b643 Feb 13 15:35:32.217514 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Feb 13 15:35:32.217526 kernel: BTRFS info (device sda6): using free space tree Feb 13 15:35:32.221208 kernel: BTRFS info (device sda6): enabling ssd optimizations Feb 13 15:35:32.221276 kernel: BTRFS info (device sda6): auto enabling async discard Feb 13 15:35:32.233169 kernel: BTRFS info (device sda6): last unmount of filesystem e9f4fc6e-82c5-478d-829e-7273b573b643 Feb 13 15:35:32.233542 systemd[1]: mnt-oem.mount: Deactivated successfully. Feb 13 15:35:32.243267 systemd[1]: Finished ignition-setup.service - Ignition (setup). Feb 13 15:35:32.252573 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Feb 13 15:35:32.345437 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Feb 13 15:35:32.354467 systemd[1]: Starting systemd-networkd.service - Network Configuration... Feb 13 15:35:32.358370 ignition[683]: Ignition 2.20.0 Feb 13 15:35:32.358387 ignition[683]: Stage: fetch-offline Feb 13 15:35:32.358425 ignition[683]: no configs at "/usr/lib/ignition/base.d" Feb 13 15:35:32.358464 ignition[683]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Feb 13 15:35:32.358630 ignition[683]: parsed url from cmdline: "" Feb 13 15:35:32.361240 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). 
Feb 13 15:35:32.358633 ignition[683]: no config URL provided Feb 13 15:35:32.358638 ignition[683]: reading system config file "/usr/lib/ignition/user.ign" Feb 13 15:35:32.358645 ignition[683]: no config at "/usr/lib/ignition/user.ign" Feb 13 15:35:32.358650 ignition[683]: failed to fetch config: resource requires networking Feb 13 15:35:32.358816 ignition[683]: Ignition finished successfully Feb 13 15:35:32.383397 systemd-networkd[777]: lo: Link UP Feb 13 15:35:32.383413 systemd-networkd[777]: lo: Gained carrier Feb 13 15:35:32.385209 systemd-networkd[777]: Enumeration completed Feb 13 15:35:32.385396 systemd[1]: Started systemd-networkd.service - Network Configuration. Feb 13 15:35:32.385957 systemd-networkd[777]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 15:35:32.385960 systemd-networkd[777]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 13 15:35:32.386100 systemd[1]: Reached target network.target - Network. Feb 13 15:35:32.387469 systemd-networkd[777]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 15:35:32.387472 systemd-networkd[777]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 13 15:35:32.388601 systemd-networkd[777]: eth0: Link UP Feb 13 15:35:32.388604 systemd-networkd[777]: eth0: Gained carrier Feb 13 15:35:32.388612 systemd-networkd[777]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 15:35:32.395588 systemd-networkd[777]: eth1: Link UP Feb 13 15:35:32.395591 systemd-networkd[777]: eth1: Gained carrier Feb 13 15:35:32.395603 systemd-networkd[777]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Feb 13 15:35:32.396461 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Feb 13 15:35:32.411265 ignition[780]: Ignition 2.20.0
Feb 13 15:35:32.411276 ignition[780]: Stage: fetch
Feb 13 15:35:32.411490 ignition[780]: no configs at "/usr/lib/ignition/base.d"
Feb 13 15:35:32.411502 ignition[780]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Feb 13 15:35:32.411595 ignition[780]: parsed url from cmdline: ""
Feb 13 15:35:32.411600 ignition[780]: no config URL provided
Feb 13 15:35:32.411605 ignition[780]: reading system config file "/usr/lib/ignition/user.ign"
Feb 13 15:35:32.411614 ignition[780]: no config at "/usr/lib/ignition/user.ign"
Feb 13 15:35:32.411803 ignition[780]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #1
Feb 13 15:35:32.412484 ignition[780]: GET error: Get "http://169.254.169.254/hetzner/v1/userdata": dial tcp 169.254.169.254:80: connect: network is unreachable
Feb 13 15:35:32.436279 systemd-networkd[777]: eth1: DHCPv4 address 10.0.0.3/32, gateway 10.0.0.1 acquired from 10.0.0.1
Feb 13 15:35:32.454285 systemd-networkd[777]: eth0: DHCPv4 address 78.46.147.231/32, gateway 172.31.1.1 acquired from 172.31.1.1
Feb 13 15:35:32.613203 ignition[780]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #2
Feb 13 15:35:32.619212 ignition[780]: GET result: OK
Feb 13 15:35:32.619581 ignition[780]: parsing config with SHA512: 9b2efcfc083d2d32ab47b4d93cf5c59acd8ecdb10067e53e32194655708c035d65eee12681359865ca5d67bfee745b29e983fe3422d53435c86be3f629057e2c
Feb 13 15:35:32.631725 unknown[780]: fetched base config from "system"
Feb 13 15:35:32.631741 unknown[780]: fetched base config from "system"
Feb 13 15:35:32.632479 ignition[780]: fetch: fetch complete
Feb 13 15:35:32.631750 unknown[780]: fetched user config from "hetzner"
Feb 13 15:35:32.632485 ignition[780]: fetch: fetch passed
Feb 13 15:35:32.634576 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Feb 13 15:35:32.632546 ignition[780]: Ignition finished successfully
Feb 13 15:35:32.642464 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Feb 13 15:35:32.655630 ignition[787]: Ignition 2.20.0
Feb 13 15:35:32.655642 ignition[787]: Stage: kargs
Feb 13 15:35:32.655830 ignition[787]: no configs at "/usr/lib/ignition/base.d"
Feb 13 15:35:32.655840 ignition[787]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Feb 13 15:35:32.656921 ignition[787]: kargs: kargs passed
Feb 13 15:35:32.658361 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Feb 13 15:35:32.656980 ignition[787]: Ignition finished successfully
Feb 13 15:35:32.667533 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Feb 13 15:35:32.679828 ignition[794]: Ignition 2.20.0
Feb 13 15:35:32.679840 ignition[794]: Stage: disks
Feb 13 15:35:32.680053 ignition[794]: no configs at "/usr/lib/ignition/base.d"
Feb 13 15:35:32.680064 ignition[794]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Feb 13 15:35:32.681066 ignition[794]: disks: disks passed
Feb 13 15:35:32.681121 ignition[794]: Ignition finished successfully
Feb 13 15:35:32.684205 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Feb 13 15:35:32.686381 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Feb 13 15:35:32.688629 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Feb 13 15:35:32.689320 systemd[1]: Reached target local-fs.target - Local File Systems.
Feb 13 15:35:32.690440 systemd[1]: Reached target sysinit.target - System Initialization.
Feb 13 15:35:32.691598 systemd[1]: Reached target basic.target - Basic System.
Feb 13 15:35:32.708716 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Feb 13 15:35:32.735662 systemd-fsck[802]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks
Feb 13 15:35:32.740895 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Feb 13 15:35:32.746258 systemd[1]: Mounting sysroot.mount - /sysroot...
Feb 13 15:35:32.811710 kernel: EXT4-fs (sda9): mounted filesystem b8d8a7c2-9667-48db-9266-035fd118dfdf r/w with ordered data mode. Quota mode: none.
Feb 13 15:35:32.812640 systemd[1]: Mounted sysroot.mount - /sysroot.
Feb 13 15:35:32.814284 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Feb 13 15:35:32.825369 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Feb 13 15:35:32.830044 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Feb 13 15:35:32.833392 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Feb 13 15:35:32.836914 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Feb 13 15:35:32.836959 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Feb 13 15:35:32.843074 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Feb 13 15:35:32.846204 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (810)
Feb 13 15:35:32.849605 kernel: BTRFS info (device sda6): first mount of filesystem e9f4fc6e-82c5-478d-829e-7273b573b643
Feb 13 15:35:32.849675 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Feb 13 15:35:32.849689 kernel: BTRFS info (device sda6): using free space tree
Feb 13 15:35:32.853825 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Feb 13 15:35:32.858156 kernel: BTRFS info (device sda6): enabling ssd optimizations
Feb 13 15:35:32.858210 kernel: BTRFS info (device sda6): auto enabling async discard
Feb 13 15:35:32.860910 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Feb 13 15:35:32.901437 initrd-setup-root[838]: cut: /sysroot/etc/passwd: No such file or directory
Feb 13 15:35:32.902870 coreos-metadata[812]: Feb 13 15:35:32.902 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/hostname: Attempt #1
Feb 13 15:35:32.905096 coreos-metadata[812]: Feb 13 15:35:32.905 INFO Fetch successful
Feb 13 15:35:32.905096 coreos-metadata[812]: Feb 13 15:35:32.905 INFO wrote hostname ci-4152-2-1-1-287b7b51cc to /sysroot/etc/hostname
Feb 13 15:35:32.907899 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Feb 13 15:35:32.912708 initrd-setup-root[846]: cut: /sysroot/etc/group: No such file or directory
Feb 13 15:35:32.918816 initrd-setup-root[853]: cut: /sysroot/etc/shadow: No such file or directory
Feb 13 15:35:32.923587 initrd-setup-root[860]: cut: /sysroot/etc/gshadow: No such file or directory
Feb 13 15:35:33.034106 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Feb 13 15:35:33.038253 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Feb 13 15:35:33.040405 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Feb 13 15:35:33.051206 kernel: BTRFS info (device sda6): last unmount of filesystem e9f4fc6e-82c5-478d-829e-7273b573b643
Feb 13 15:35:33.075740 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Feb 13 15:35:33.084521 ignition[928]: INFO : Ignition 2.20.0
Feb 13 15:35:33.085251 ignition[928]: INFO : Stage: mount
Feb 13 15:35:33.085684 ignition[928]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 15:35:33.085684 ignition[928]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Feb 13 15:35:33.087562 ignition[928]: INFO : mount: mount passed
Feb 13 15:35:33.087562 ignition[928]: INFO : Ignition finished successfully
Feb 13 15:35:33.088337 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Feb 13 15:35:33.093306 systemd[1]: Starting ignition-files.service - Ignition (files)...
Feb 13 15:35:33.178490 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Feb 13 15:35:33.191554 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Feb 13 15:35:33.204176 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sda6 scanned by mount (940)
Feb 13 15:35:33.205667 kernel: BTRFS info (device sda6): first mount of filesystem e9f4fc6e-82c5-478d-829e-7273b573b643
Feb 13 15:35:33.205736 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Feb 13 15:35:33.205749 kernel: BTRFS info (device sda6): using free space tree
Feb 13 15:35:33.209170 kernel: BTRFS info (device sda6): enabling ssd optimizations
Feb 13 15:35:33.209246 kernel: BTRFS info (device sda6): auto enabling async discard
Feb 13 15:35:33.212971 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Feb 13 15:35:33.243182 ignition[958]: INFO : Ignition 2.20.0
Feb 13 15:35:33.243182 ignition[958]: INFO : Stage: files
Feb 13 15:35:33.244315 ignition[958]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 15:35:33.244315 ignition[958]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Feb 13 15:35:33.246189 ignition[958]: DEBUG : files: compiled without relabeling support, skipping
Feb 13 15:35:33.246189 ignition[958]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Feb 13 15:35:33.246189 ignition[958]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Feb 13 15:35:33.250048 ignition[958]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Feb 13 15:35:33.250976 ignition[958]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Feb 13 15:35:33.252317 ignition[958]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Feb 13 15:35:33.251391 unknown[958]: wrote ssh authorized keys file for user: core
Feb 13 15:35:33.254151 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Feb 13 15:35:33.254151 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Feb 13 15:35:33.322951 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Feb 13 15:35:33.665941 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Feb 13 15:35:33.668637 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Feb 13 15:35:33.668637 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1
Feb 13 15:35:34.118696 systemd-networkd[777]: eth1: Gained IPv6LL
Feb 13 15:35:34.240194 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Feb 13 15:35:34.324988 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Feb 13 15:35:34.328539 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Feb 13 15:35:34.328539 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Feb 13 15:35:34.328539 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Feb 13 15:35:34.328539 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Feb 13 15:35:34.328539 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Feb 13 15:35:34.328539 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Feb 13 15:35:34.328539 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Feb 13 15:35:34.328539 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Feb 13 15:35:34.328539 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Feb 13 15:35:34.328539 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Feb 13 15:35:34.328539 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Feb 13 15:35:34.328539 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Feb 13 15:35:34.328539 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Feb 13 15:35:34.328539 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-arm64.raw: attempt #1
Feb 13 15:35:34.374464 systemd-networkd[777]: eth0: Gained IPv6LL
Feb 13 15:35:34.743029 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Feb 13 15:35:35.043859 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Feb 13 15:35:35.043859 ignition[958]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Feb 13 15:35:35.046761 ignition[958]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Feb 13 15:35:35.047930 ignition[958]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Feb 13 15:35:35.047930 ignition[958]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Feb 13 15:35:35.047930 ignition[958]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
Feb 13 15:35:35.047930 ignition[958]: INFO : files: op(e): op(f): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Feb 13 15:35:35.047930 ignition[958]: INFO : files: op(e): op(f): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Feb 13 15:35:35.047930 ignition[958]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
Feb 13 15:35:35.047930 ignition[958]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service"
Feb 13 15:35:35.047930 ignition[958]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service"
Feb 13 15:35:35.047930 ignition[958]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json"
Feb 13 15:35:35.047930 ignition[958]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json"
Feb 13 15:35:35.047930 ignition[958]: INFO : files: files passed
Feb 13 15:35:35.058474 ignition[958]: INFO : Ignition finished successfully
Feb 13 15:35:35.049794 systemd[1]: Finished ignition-files.service - Ignition (files).
Feb 13 15:35:35.058392 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Feb 13 15:35:35.061951 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Feb 13 15:35:35.066792 systemd[1]: ignition-quench.service: Deactivated successfully.
Feb 13 15:35:35.066971 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Feb 13 15:35:35.086253 initrd-setup-root-after-ignition[986]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 15:35:35.086253 initrd-setup-root-after-ignition[986]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 15:35:35.090100 initrd-setup-root-after-ignition[990]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 15:35:35.090385 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Feb 13 15:35:35.092013 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Feb 13 15:35:35.099416 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Feb 13 15:35:35.142990 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Feb 13 15:35:35.143236 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Feb 13 15:35:35.145814 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Feb 13 15:35:35.146759 systemd[1]: Reached target initrd.target - Initrd Default Target.
Feb 13 15:35:35.147788 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Feb 13 15:35:35.154498 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Feb 13 15:35:35.172556 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Feb 13 15:35:35.179386 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Feb 13 15:35:35.194396 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Feb 13 15:35:35.196305 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 15:35:35.197883 systemd[1]: Stopped target timers.target - Timer Units.
Feb 13 15:35:35.198474 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Feb 13 15:35:35.198609 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Feb 13 15:35:35.201231 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Feb 13 15:35:35.201989 systemd[1]: Stopped target basic.target - Basic System.
Feb 13 15:35:35.203662 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Feb 13 15:35:35.204795 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Feb 13 15:35:35.206684 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Feb 13 15:35:35.208500 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Feb 13 15:35:35.210002 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Feb 13 15:35:35.211786 systemd[1]: Stopped target sysinit.target - System Initialization.
Feb 13 15:35:35.213230 systemd[1]: Stopped target local-fs.target - Local File Systems.
Feb 13 15:35:35.214411 systemd[1]: Stopped target swap.target - Swaps.
Feb 13 15:35:35.215324 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Feb 13 15:35:35.215467 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Feb 13 15:35:35.216799 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Feb 13 15:35:35.217535 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 15:35:35.218655 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Feb 13 15:35:35.219073 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 15:35:35.219841 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Feb 13 15:35:35.219971 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Feb 13 15:35:35.221575 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Feb 13 15:35:35.221710 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Feb 13 15:35:35.222769 systemd[1]: ignition-files.service: Deactivated successfully.
Feb 13 15:35:35.222864 systemd[1]: Stopped ignition-files.service - Ignition (files).
Feb 13 15:35:35.223984 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Feb 13 15:35:35.224086 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Feb 13 15:35:35.231495 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Feb 13 15:35:35.232037 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Feb 13 15:35:35.232209 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 15:35:35.236379 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Feb 13 15:35:35.236914 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Feb 13 15:35:35.237064 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 15:35:35.241383 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Feb 13 15:35:35.241733 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Feb 13 15:35:35.253307 ignition[1010]: INFO : Ignition 2.20.0
Feb 13 15:35:35.253307 ignition[1010]: INFO : Stage: umount
Feb 13 15:35:35.253307 ignition[1010]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 15:35:35.253307 ignition[1010]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Feb 13 15:35:35.253307 ignition[1010]: INFO : umount: umount passed
Feb 13 15:35:35.253307 ignition[1010]: INFO : Ignition finished successfully
Feb 13 15:35:35.257622 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Feb 13 15:35:35.257733 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Feb 13 15:35:35.259075 systemd[1]: ignition-mount.service: Deactivated successfully.
Feb 13 15:35:35.260749 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Feb 13 15:35:35.263014 systemd[1]: ignition-disks.service: Deactivated successfully.
Feb 13 15:35:35.263117 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Feb 13 15:35:35.264901 systemd[1]: ignition-kargs.service: Deactivated successfully.
Feb 13 15:35:35.264961 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Feb 13 15:35:35.267811 systemd[1]: ignition-fetch.service: Deactivated successfully.
Feb 13 15:35:35.267872 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Feb 13 15:35:35.268716 systemd[1]: Stopped target network.target - Network.
Feb 13 15:35:35.270461 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Feb 13 15:35:35.270542 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Feb 13 15:35:35.271715 systemd[1]: Stopped target paths.target - Path Units.
Feb 13 15:35:35.273764 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Feb 13 15:35:35.274392 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 15:35:35.275066 systemd[1]: Stopped target slices.target - Slice Units.
Feb 13 15:35:35.276750 systemd[1]: Stopped target sockets.target - Socket Units.
Feb 13 15:35:35.278659 systemd[1]: iscsid.socket: Deactivated successfully.
Feb 13 15:35:35.278709 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Feb 13 15:35:35.279740 systemd[1]: iscsiuio.socket: Deactivated successfully.
Feb 13 15:35:35.279775 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Feb 13 15:35:35.280401 systemd[1]: ignition-setup.service: Deactivated successfully.
Feb 13 15:35:35.280468 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Feb 13 15:35:35.283100 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Feb 13 15:35:35.283180 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Feb 13 15:35:35.285081 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Feb 13 15:35:35.287234 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Feb 13 15:35:35.289202 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Feb 13 15:35:35.289736 systemd[1]: sysroot-boot.service: Deactivated successfully.
Feb 13 15:35:35.289827 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Feb 13 15:35:35.292250 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Feb 13 15:35:35.292365 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Feb 13 15:35:35.296912 systemd[1]: systemd-resolved.service: Deactivated successfully.
Feb 13 15:35:35.297082 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Feb 13 15:35:35.297755 systemd-networkd[777]: eth1: DHCPv6 lease lost
Feb 13 15:35:35.299436 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Feb 13 15:35:35.299508 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 15:35:35.301696 systemd-networkd[777]: eth0: DHCPv6 lease lost
Feb 13 15:35:35.303968 systemd[1]: systemd-networkd.service: Deactivated successfully.
Feb 13 15:35:35.304116 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Feb 13 15:35:35.305831 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Feb 13 15:35:35.305865 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 15:35:35.313465 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Feb 13 15:35:35.314059 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Feb 13 15:35:35.314129 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Feb 13 15:35:35.317106 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Feb 13 15:35:35.317194 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Feb 13 15:35:35.318443 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Feb 13 15:35:35.318502 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Feb 13 15:35:35.320047 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 15:35:35.334246 systemd[1]: network-cleanup.service: Deactivated successfully.
Feb 13 15:35:35.334455 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Feb 13 15:35:35.339015 systemd[1]: systemd-udevd.service: Deactivated successfully.
Feb 13 15:35:35.339242 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 15:35:35.341247 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Feb 13 15:35:35.341338 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Feb 13 15:35:35.343045 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Feb 13 15:35:35.343091 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 15:35:35.344175 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Feb 13 15:35:35.344233 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Feb 13 15:35:35.345738 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Feb 13 15:35:35.345788 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Feb 13 15:35:35.347285 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Feb 13 15:35:35.347339 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 15:35:35.353501 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Feb 13 15:35:35.354091 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Feb 13 15:35:35.354184 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 15:35:35.356763 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 13 15:35:35.356825 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 15:35:35.370162 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Feb 13 15:35:35.370322 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Feb 13 15:35:35.372203 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Feb 13 15:35:35.382421 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Feb 13 15:35:35.390992 systemd[1]: Switching root.
Feb 13 15:35:35.425382 systemd-journald[236]: Journal stopped
Feb 13 15:35:36.384084 systemd-journald[236]: Received SIGTERM from PID 1 (systemd).
Feb 13 15:35:36.386792 kernel: SELinux: policy capability network_peer_controls=1
Feb 13 15:35:36.386820 kernel: SELinux: policy capability open_perms=1
Feb 13 15:35:36.386830 kernel: SELinux: policy capability extended_socket_class=1
Feb 13 15:35:36.386845 kernel: SELinux: policy capability always_check_network=0
Feb 13 15:35:36.386856 kernel: SELinux: policy capability cgroup_seclabel=1
Feb 13 15:35:36.386866 kernel: SELinux: policy capability nnp_nosuid_transition=1
Feb 13 15:35:36.386875 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Feb 13 15:35:36.386890 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Feb 13 15:35:36.386900 kernel: audit: type=1403 audit(1739460935.604:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Feb 13 15:35:36.386911 systemd[1]: Successfully loaded SELinux policy in 38.152ms.
Feb 13 15:35:36.386931 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 11.076ms.
Feb 13 15:35:36.386943 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Feb 13 15:35:36.386955 systemd[1]: Detected virtualization kvm.
Feb 13 15:35:36.386966 systemd[1]: Detected architecture arm64.
Feb 13 15:35:36.386979 systemd[1]: Detected first boot.
Feb 13 15:35:36.386990 systemd[1]: Hostname set to .
Feb 13 15:35:36.387004 systemd[1]: Initializing machine ID from VM UUID.
Feb 13 15:35:36.387017 zram_generator::config[1052]: No configuration found.
Feb 13 15:35:36.387030 systemd[1]: Populated /etc with preset unit settings.
Feb 13 15:35:36.387042 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Feb 13 15:35:36.387052 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Feb 13 15:35:36.387066 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Feb 13 15:35:36.387077 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Feb 13 15:35:36.387089 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Feb 13 15:35:36.387103 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Feb 13 15:35:36.387119 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Feb 13 15:35:36.387391 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Feb 13 15:35:36.387414 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Feb 13 15:35:36.387425 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Feb 13 15:35:36.387435 systemd[1]: Created slice user.slice - User and Session Slice.
Feb 13 15:35:36.387445 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 15:35:36.387456 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 15:35:36.387467 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Feb 13 15:35:36.387485 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Feb 13 15:35:36.387496 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Feb 13 15:35:36.387506 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Feb 13 15:35:36.387517 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Feb 13 15:35:36.387529 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 15:35:36.387539 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Feb 13 15:35:36.387550 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Feb 13 15:35:36.387561 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Feb 13 15:35:36.387572 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Feb 13 15:35:36.387582 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 15:35:36.387593 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Feb 13 15:35:36.387603 systemd[1]: Reached target slices.target - Slice Units.
Feb 13 15:35:36.387613 systemd[1]: Reached target swap.target - Swaps.
Feb 13 15:35:36.387623 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Feb 13 15:35:36.387633 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Feb 13 15:35:36.387646 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 15:35:36.387656 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Feb 13 15:35:36.387667 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 15:35:36.387678 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Feb 13 15:35:36.387688 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Feb 13 15:35:36.387698 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Feb 13 15:35:36.387708 systemd[1]: Mounting media.mount - External Media Directory... Feb 13 15:35:36.387718 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Feb 13 15:35:36.387728 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Feb 13 15:35:36.387740 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Feb 13 15:35:36.387755 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Feb 13 15:35:36.387767 systemd[1]: Reached target machines.target - Containers. Feb 13 15:35:36.387778 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Feb 13 15:35:36.387788 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 15:35:36.387800 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Feb 13 15:35:36.387811 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Feb 13 15:35:36.387822 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 15:35:36.387832 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Feb 13 15:35:36.387843 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... 
Feb 13 15:35:36.387853 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Feb 13 15:35:36.387863 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 15:35:36.387875 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Feb 13 15:35:36.387885 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Feb 13 15:35:36.387897 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Feb 13 15:35:36.387907 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Feb 13 15:35:36.387917 systemd[1]: Stopped systemd-fsck-usr.service. Feb 13 15:35:36.387927 systemd[1]: Starting systemd-journald.service - Journal Service... Feb 13 15:35:36.387938 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Feb 13 15:35:36.387948 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Feb 13 15:35:36.387959 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Feb 13 15:35:36.387969 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Feb 13 15:35:36.387980 systemd[1]: verity-setup.service: Deactivated successfully. Feb 13 15:35:36.387991 systemd[1]: Stopped verity-setup.service. Feb 13 15:35:36.388001 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Feb 13 15:35:36.388012 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Feb 13 15:35:36.388022 systemd[1]: Mounted media.mount - External Media Directory. Feb 13 15:35:36.388032 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Feb 13 15:35:36.388044 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Feb 13 15:35:36.388055 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. 
Feb 13 15:35:36.388065 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 15:35:36.388080 systemd[1]: modprobe@configfs.service: Deactivated successfully. Feb 13 15:35:36.388090 kernel: loop: module loaded Feb 13 15:35:36.388100 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Feb 13 15:35:36.388110 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 15:35:36.388120 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 15:35:36.388166 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 15:35:36.388181 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 15:35:36.388192 kernel: fuse: init (API version 7.39) Feb 13 15:35:36.388202 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 15:35:36.388212 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 15:35:36.388225 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Feb 13 15:35:36.388237 systemd[1]: modprobe@fuse.service: Deactivated successfully. Feb 13 15:35:36.388247 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Feb 13 15:35:36.388257 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Feb 13 15:35:36.388313 systemd-journald[1119]: Collecting audit messages is disabled. Feb 13 15:35:36.388341 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Feb 13 15:35:36.388353 systemd[1]: Reached target network-pre.target - Preparation for Network. Feb 13 15:35:36.388365 systemd-journald[1119]: Journal started Feb 13 15:35:36.388386 systemd-journald[1119]: Runtime Journal (/run/log/journal/a884aaf25cb94d298b624b69100b9cc4) is 8.0M, max 76.6M, 68.6M free. Feb 13 15:35:36.398376 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... 
Feb 13 15:35:36.398460 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Feb 13 15:35:36.118569 systemd[1]: Queued start job for default target multi-user.target. Feb 13 15:35:36.141161 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Feb 13 15:35:36.141934 systemd[1]: systemd-journald.service: Deactivated successfully. Feb 13 15:35:36.402593 kernel: ACPI: bus type drm_connector registered Feb 13 15:35:36.402653 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Feb 13 15:35:36.402674 systemd[1]: Reached target local-fs.target - Local File Systems. Feb 13 15:35:36.411218 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Feb 13 15:35:36.426708 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Feb 13 15:35:36.426789 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Feb 13 15:35:36.429210 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 15:35:36.434160 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Feb 13 15:35:36.439214 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 13 15:35:36.446164 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Feb 13 15:35:36.448181 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Feb 13 15:35:36.454213 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Feb 13 15:35:36.461410 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... 
Feb 13 15:35:36.466345 systemd[1]: Started systemd-journald.service - Journal Service. Feb 13 15:35:36.469025 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Feb 13 15:35:36.470632 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 13 15:35:36.471259 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Feb 13 15:35:36.472963 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Feb 13 15:35:36.474492 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Feb 13 15:35:36.475672 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Feb 13 15:35:36.477001 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Feb 13 15:35:36.497718 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Feb 13 15:35:36.504210 kernel: loop0: detected capacity change from 0 to 8 Feb 13 15:35:36.511753 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Feb 13 15:35:36.515506 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Feb 13 15:35:36.522181 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Feb 13 15:35:36.525059 systemd[1]: Starting systemd-sysusers.service - Create System Users... Feb 13 15:35:36.540214 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Feb 13 15:35:36.543850 systemd-journald[1119]: Time spent on flushing to /var/log/journal/a884aaf25cb94d298b624b69100b9cc4 is 80.082ms for 1136 entries. Feb 13 15:35:36.543850 systemd-journald[1119]: System Journal (/var/log/journal/a884aaf25cb94d298b624b69100b9cc4) is 8.0M, max 584.8M, 576.8M free. Feb 13 15:35:36.647977 systemd-journald[1119]: Received client request to flush runtime journal. 
Feb 13 15:35:36.648045 kernel: loop1: detected capacity change from 0 to 113536 Feb 13 15:35:36.648070 kernel: loop2: detected capacity change from 0 to 194096 Feb 13 15:35:36.575214 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Feb 13 15:35:36.575986 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 15:35:36.580221 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Feb 13 15:35:36.591570 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Feb 13 15:35:36.612190 systemd[1]: Finished systemd-sysusers.service - Create System Users. Feb 13 15:35:36.621510 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Feb 13 15:35:36.626526 udevadm[1183]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Feb 13 15:35:36.653987 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Feb 13 15:35:36.668101 kernel: loop3: detected capacity change from 0 to 116808 Feb 13 15:35:36.669756 systemd-tmpfiles[1184]: ACLs are not supported, ignoring. Feb 13 15:35:36.670401 systemd-tmpfiles[1184]: ACLs are not supported, ignoring. Feb 13 15:35:36.678288 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 15:35:36.711426 kernel: loop4: detected capacity change from 0 to 8 Feb 13 15:35:36.714275 kernel: loop5: detected capacity change from 0 to 113536 Feb 13 15:35:36.724209 kernel: loop6: detected capacity change from 0 to 194096 Feb 13 15:35:36.742166 kernel: loop7: detected capacity change from 0 to 116808 Feb 13 15:35:36.767674 (sd-merge)[1193]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-hetzner'. Feb 13 15:35:36.769848 (sd-merge)[1193]: Merged extensions into '/usr'. 
Feb 13 15:35:36.776217 systemd[1]: Reloading requested from client PID 1148 ('systemd-sysext') (unit systemd-sysext.service)... Feb 13 15:35:36.776835 systemd[1]: Reloading... Feb 13 15:35:36.899170 zram_generator::config[1222]: No configuration found. Feb 13 15:35:37.025930 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 15:35:37.067165 ldconfig[1144]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Feb 13 15:35:37.086208 systemd[1]: Reloading finished in 308 ms. Feb 13 15:35:37.115304 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Feb 13 15:35:37.116494 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Feb 13 15:35:37.130422 systemd[1]: Starting ensure-sysext.service... Feb 13 15:35:37.134345 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Feb 13 15:35:37.157234 systemd[1]: Reloading requested from client PID 1256 ('systemctl') (unit ensure-sysext.service)... Feb 13 15:35:37.157286 systemd[1]: Reloading... Feb 13 15:35:37.197175 systemd-tmpfiles[1257]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Feb 13 15:35:37.197536 systemd-tmpfiles[1257]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Feb 13 15:35:37.198226 systemd-tmpfiles[1257]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Feb 13 15:35:37.198459 systemd-tmpfiles[1257]: ACLs are not supported, ignoring. Feb 13 15:35:37.198508 systemd-tmpfiles[1257]: ACLs are not supported, ignoring. Feb 13 15:35:37.200981 systemd-tmpfiles[1257]: Detected autofs mount point /boot during canonicalization of boot. 
Feb 13 15:35:37.201152 systemd-tmpfiles[1257]: Skipping /boot Feb 13 15:35:37.215226 systemd-tmpfiles[1257]: Detected autofs mount point /boot during canonicalization of boot. Feb 13 15:35:37.215239 systemd-tmpfiles[1257]: Skipping /boot Feb 13 15:35:37.260343 zram_generator::config[1279]: No configuration found. Feb 13 15:35:37.368008 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 15:35:37.413455 systemd[1]: Reloading finished in 255 ms. Feb 13 15:35:37.430604 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Feb 13 15:35:37.436627 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 15:35:37.451549 systemd[1]: Starting audit-rules.service - Load Audit Rules... Feb 13 15:35:37.458490 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Feb 13 15:35:37.466329 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Feb 13 15:35:37.469457 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Feb 13 15:35:37.474879 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 15:35:37.479630 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Feb 13 15:35:37.483941 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 15:35:37.496530 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 15:35:37.501498 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 15:35:37.508489 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... 
Feb 13 15:35:37.509459 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 15:35:37.517700 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Feb 13 15:35:37.518732 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 15:35:37.520324 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 15:35:37.526936 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 15:35:37.531610 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 15:35:37.533314 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 15:35:37.538931 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Feb 13 15:35:37.544631 systemd-udevd[1332]: Using default interface naming scheme 'v255'. Feb 13 15:35:37.546594 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 15:35:37.561616 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Feb 13 15:35:37.563463 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 15:35:37.567580 systemd[1]: Finished ensure-sysext.service. Feb 13 15:35:37.580387 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Feb 13 15:35:37.582960 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Feb 13 15:35:37.584352 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 15:35:37.585504 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. 
Feb 13 15:35:37.588820 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 13 15:35:37.588887 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Feb 13 15:35:37.593232 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Feb 13 15:35:37.600758 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 13 15:35:37.601217 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Feb 13 15:35:37.614418 systemd[1]: Starting systemd-update-done.service - Update is Completed... Feb 13 15:35:37.616126 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 15:35:37.616330 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 15:35:37.617334 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 15:35:37.618525 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 15:35:37.620541 systemd[1]: Started systemd-userdbd.service - User Database Manager. Feb 13 15:35:37.623470 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 15:35:37.632416 systemd[1]: Starting systemd-networkd.service - Network Configuration... Feb 13 15:35:37.633067 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Feb 13 15:35:37.660042 augenrules[1378]: No rules Feb 13 15:35:37.662062 systemd[1]: audit-rules.service: Deactivated successfully. Feb 13 15:35:37.663041 systemd[1]: Finished audit-rules.service - Load Audit Rules. Feb 13 15:35:37.674716 systemd[1]: Finished systemd-update-done.service - Update is Completed. 
Feb 13 15:35:37.713683 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Feb 13 15:35:37.715190 systemd[1]: Reached target time-set.target - System Time Set. Feb 13 15:35:37.763465 systemd-resolved[1329]: Positive Trust Anchors: Feb 13 15:35:37.763544 systemd-resolved[1329]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 13 15:35:37.763577 systemd-resolved[1329]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Feb 13 15:35:37.774548 systemd-networkd[1369]: lo: Link UP Feb 13 15:35:37.774556 systemd-networkd[1369]: lo: Gained carrier Feb 13 15:35:37.777051 systemd-resolved[1329]: Using system hostname 'ci-4152-2-1-1-287b7b51cc'. Feb 13 15:35:37.777897 systemd-networkd[1369]: Enumeration completed Feb 13 15:35:37.778035 systemd[1]: Started systemd-networkd.service - Network Configuration. Feb 13 15:35:37.793509 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Feb 13 15:35:37.794977 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Feb 13 15:35:37.797449 systemd[1]: Reached target network.target - Network. Feb 13 15:35:37.798077 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Feb 13 15:35:37.812223 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. 
Feb 13 15:35:37.872169 systemd-networkd[1369]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 15:35:37.872185 systemd-networkd[1369]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 13 15:35:37.874046 systemd-networkd[1369]: eth1: Link UP Feb 13 15:35:37.874059 systemd-networkd[1369]: eth1: Gained carrier Feb 13 15:35:37.874082 systemd-networkd[1369]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 15:35:37.907315 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 41 scanned by (udev-worker) (1368) Feb 13 15:35:37.908310 systemd-networkd[1369]: eth1: DHCPv4 address 10.0.0.3/32, gateway 10.0.0.1 acquired from 10.0.0.1 Feb 13 15:35:37.909055 systemd-timesyncd[1350]: Network configuration changed, trying to establish connection. Feb 13 15:35:37.914605 systemd-networkd[1369]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 15:35:37.914618 systemd-networkd[1369]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 13 15:35:37.915540 systemd-timesyncd[1350]: Network configuration changed, trying to establish connection. Feb 13 15:35:37.917522 systemd-networkd[1369]: eth0: Link UP Feb 13 15:35:37.917534 systemd-networkd[1369]: eth0: Gained carrier Feb 13 15:35:37.917559 systemd-networkd[1369]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 15:35:37.924964 systemd-timesyncd[1350]: Network configuration changed, trying to establish connection. 
Feb 13 15:35:37.944159 kernel: mousedev: PS/2 mouse device common for all mice Feb 13 15:35:37.977348 systemd-networkd[1369]: eth0: DHCPv4 address 78.46.147.231/32, gateway 172.31.1.1 acquired from 172.31.1.1 Feb 13 15:35:37.977891 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Feb 13 15:35:37.977975 systemd-timesyncd[1350]: Network configuration changed, trying to establish connection. Feb 13 15:35:37.982518 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Feb 13 15:35:38.004057 kernel: [drm] pci: virtio-gpu-pci detected at 0000:00:01.0 Feb 13 15:35:38.004229 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Feb 13 15:35:38.004292 kernel: [drm] features: -context_init Feb 13 15:35:38.008778 kernel: [drm] number of scanouts: 1 Feb 13 15:35:38.008869 kernel: [drm] number of cap sets: 0 Feb 13 15:35:38.007239 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Feb 13 15:35:38.012238 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:01.0 on minor 0 Feb 13 15:35:38.014688 systemd[1]: Condition check resulted in dev-virtio\x2dports-org.qemu.guest_agent.0.device - /dev/virtio-ports/org.qemu.guest_agent.0 being skipped. Feb 13 15:35:38.014820 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 15:35:38.021516 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 15:35:38.029436 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 15:35:38.032851 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 15:35:38.034214 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
Feb 13 15:35:38.034308 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Feb 13 15:35:38.041167 kernel: Console: switching to colour frame buffer device 160x50 Feb 13 15:35:38.042507 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 15:35:38.043405 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 15:35:38.056513 kernel: virtio-pci 0000:00:01.0: [drm] fb0: virtio_gpudrmfb frame buffer device Feb 13 15:35:38.061785 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 15:35:38.061952 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 15:35:38.066595 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 15:35:38.066787 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 15:35:38.075482 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 13 15:35:38.075537 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Feb 13 15:35:38.084867 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 15:35:38.159415 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 15:35:38.225968 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Feb 13 15:35:38.234681 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Feb 13 15:35:38.252582 lvm[1440]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 13 15:35:38.282627 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. 
Feb 13 15:35:38.284635 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Feb 13 15:35:38.285999 systemd[1]: Reached target sysinit.target - System Initialization. Feb 13 15:35:38.287715 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Feb 13 15:35:38.288661 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Feb 13 15:35:38.289680 systemd[1]: Started logrotate.timer - Daily rotation of log files. Feb 13 15:35:38.290443 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Feb 13 15:35:38.291114 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Feb 13 15:35:38.291813 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Feb 13 15:35:38.291850 systemd[1]: Reached target paths.target - Path Units. Feb 13 15:35:38.292375 systemd[1]: Reached target timers.target - Timer Units. Feb 13 15:35:38.293618 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Feb 13 15:35:38.295871 systemd[1]: Starting docker.socket - Docker Socket for the API... Feb 13 15:35:38.301355 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Feb 13 15:35:38.305483 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Feb 13 15:35:38.306942 systemd[1]: Listening on docker.socket - Docker Socket for the API. Feb 13 15:35:38.307806 systemd[1]: Reached target sockets.target - Socket Units. Feb 13 15:35:38.308491 systemd[1]: Reached target basic.target - Basic System. Feb 13 15:35:38.309022 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Feb 13 15:35:38.309057 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. 
Feb 13 15:35:38.312345 systemd[1]: Starting containerd.service - containerd container runtime... Feb 13 15:35:38.316415 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Feb 13 15:35:38.322510 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Feb 13 15:35:38.322627 lvm[1444]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 13 15:35:38.327985 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Feb 13 15:35:38.332374 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Feb 13 15:35:38.333075 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Feb 13 15:35:38.338473 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Feb 13 15:35:38.340762 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Feb 13 15:35:38.344607 systemd[1]: Started qemu-guest-agent.service - QEMU Guest Agent. Feb 13 15:35:38.351289 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Feb 13 15:35:38.356482 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Feb 13 15:35:38.360343 systemd[1]: Starting systemd-logind.service - User Login Management... Feb 13 15:35:38.365473 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Feb 13 15:35:38.366013 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Feb 13 15:35:38.367855 systemd[1]: Starting update-engine.service - Update Engine... Feb 13 15:35:38.372411 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... 
Feb 13 15:35:38.383444 jq[1448]: false Feb 13 15:35:38.390571 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Feb 13 15:35:38.396519 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Feb 13 15:35:38.399411 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Feb 13 15:35:38.411894 extend-filesystems[1449]: Found loop4 Feb 13 15:35:38.411894 extend-filesystems[1449]: Found loop5 Feb 13 15:35:38.411894 extend-filesystems[1449]: Found loop6 Feb 13 15:35:38.411894 extend-filesystems[1449]: Found loop7 Feb 13 15:35:38.411894 extend-filesystems[1449]: Found sda Feb 13 15:35:38.411894 extend-filesystems[1449]: Found sda1 Feb 13 15:35:38.411894 extend-filesystems[1449]: Found sda2 Feb 13 15:35:38.411894 extend-filesystems[1449]: Found sda3 Feb 13 15:35:38.411894 extend-filesystems[1449]: Found usr Feb 13 15:35:38.411894 extend-filesystems[1449]: Found sda4 Feb 13 15:35:38.411894 extend-filesystems[1449]: Found sda6 Feb 13 15:35:38.411894 extend-filesystems[1449]: Found sda7 Feb 13 15:35:38.411894 extend-filesystems[1449]: Found sda9 Feb 13 15:35:38.411894 extend-filesystems[1449]: Checking size of /dev/sda9 Feb 13 15:35:38.481179 coreos-metadata[1446]: Feb 13 15:35:38.435 INFO Fetching http://169.254.169.254/hetzner/v1/metadata: Attempt #1 Feb 13 15:35:38.481179 coreos-metadata[1446]: Feb 13 15:35:38.439 INFO Fetch successful Feb 13 15:35:38.481179 coreos-metadata[1446]: Feb 13 15:35:38.441 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/private-networks: Attempt #1 Feb 13 15:35:38.481179 coreos-metadata[1446]: Feb 13 15:35:38.442 INFO Fetch successful Feb 13 15:35:38.424592 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Feb 13 15:35:38.424812 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Feb 13 15:35:38.448570 systemd[1]: motdgen.service: Deactivated successfully. 
Feb 13 15:35:38.448798 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Feb 13 15:35:38.482097 jq[1460]: true Feb 13 15:35:38.490463 tar[1462]: linux-arm64/helm Feb 13 15:35:38.489601 (ntainerd)[1479]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Feb 13 15:35:38.490973 extend-filesystems[1449]: Resized partition /dev/sda9 Feb 13 15:35:38.494613 jq[1482]: true Feb 13 15:35:38.507617 extend-filesystems[1491]: resize2fs 1.47.1 (20-May-2024) Feb 13 15:35:38.502123 systemd[1]: Started dbus.service - D-Bus System Message Bus. Feb 13 15:35:38.501781 dbus-daemon[1447]: [system] SELinux support is enabled Feb 13 15:35:38.507488 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Feb 13 15:35:38.507516 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Feb 13 15:35:38.509656 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Feb 13 15:35:38.509680 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Feb 13 15:35:38.529165 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 9393147 blocks Feb 13 15:35:38.549051 update_engine[1456]: I20250213 15:35:38.548815 1456 main.cc:92] Flatcar Update Engine starting Feb 13 15:35:38.565359 systemd[1]: Started update-engine.service - Update Engine. Feb 13 15:35:38.570627 systemd[1]: Started locksmithd.service - Cluster reboot manager. Feb 13 15:35:38.573784 update_engine[1456]: I20250213 15:35:38.572731 1456 update_check_scheduler.cc:74] Next update check in 5m57s Feb 13 15:35:38.642641 systemd-logind[1455]: New seat seat0. 
Feb 13 15:35:38.647977 systemd-logind[1455]: Watching system buttons on /dev/input/event0 (Power Button) Feb 13 15:35:38.648045 systemd-logind[1455]: Watching system buttons on /dev/input/event2 (QEMU QEMU USB Keyboard) Feb 13 15:35:38.651572 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 41 scanned by (udev-worker) (1397) Feb 13 15:35:38.694000 systemd[1]: Started systemd-logind.service - User Login Management. Feb 13 15:35:38.714361 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Feb 13 15:35:38.717775 bash[1515]: Updated "/home/core/.ssh/authorized_keys" Feb 13 15:35:38.719045 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Feb 13 15:35:38.719704 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Feb 13 15:35:38.763278 systemd[1]: Starting sshkeys.service... Feb 13 15:35:38.771327 kernel: EXT4-fs (sda9): resized filesystem to 9393147 Feb 13 15:35:38.810607 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Feb 13 15:35:38.824885 containerd[1479]: time="2025-02-13T15:35:38.824785880Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Feb 13 15:35:38.837586 extend-filesystems[1491]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Feb 13 15:35:38.837586 extend-filesystems[1491]: old_desc_blocks = 1, new_desc_blocks = 5 Feb 13 15:35:38.837586 extend-filesystems[1491]: The filesystem on /dev/sda9 is now 9393147 (4k) blocks long. Feb 13 15:35:38.840218 extend-filesystems[1449]: Resized filesystem in /dev/sda9 Feb 13 15:35:38.840218 extend-filesystems[1449]: Found sr0 Feb 13 15:35:38.837670 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... 
Feb 13 15:35:38.841621 systemd[1]: extend-filesystems.service: Deactivated successfully. Feb 13 15:35:38.841838 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Feb 13 15:35:38.902397 containerd[1479]: time="2025-02-13T15:35:38.902344160Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Feb 13 15:35:38.907019 coreos-metadata[1525]: Feb 13 15:35:38.906 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/public-keys: Attempt #1 Feb 13 15:35:38.909178 containerd[1479]: time="2025-02-13T15:35:38.906972880Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.71-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Feb 13 15:35:38.909178 containerd[1479]: time="2025-02-13T15:35:38.908091080Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Feb 13 15:35:38.909178 containerd[1479]: time="2025-02-13T15:35:38.908124160Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Feb 13 15:35:38.909178 containerd[1479]: time="2025-02-13T15:35:38.908351800Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Feb 13 15:35:38.909178 containerd[1479]: time="2025-02-13T15:35:38.908375880Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Feb 13 15:35:38.909178 containerd[1479]: time="2025-02-13T15:35:38.908438560Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." 
error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 15:35:38.909178 containerd[1479]: time="2025-02-13T15:35:38.908453600Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Feb 13 15:35:38.909178 containerd[1479]: time="2025-02-13T15:35:38.908648960Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 15:35:38.909178 containerd[1479]: time="2025-02-13T15:35:38.908662800Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Feb 13 15:35:38.909178 containerd[1479]: time="2025-02-13T15:35:38.908675200Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 15:35:38.909178 containerd[1479]: time="2025-02-13T15:35:38.908684320Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Feb 13 15:35:38.909554 containerd[1479]: time="2025-02-13T15:35:38.908751720Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Feb 13 15:35:38.909554 containerd[1479]: time="2025-02-13T15:35:38.908944680Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Feb 13 15:35:38.909554 containerd[1479]: time="2025-02-13T15:35:38.909037280Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 15:35:38.909554 containerd[1479]: time="2025-02-13T15:35:38.909050440Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Feb 13 15:35:38.909554 containerd[1479]: time="2025-02-13T15:35:38.909120080Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Feb 13 15:35:38.909644 coreos-metadata[1525]: Feb 13 15:35:38.909 INFO Fetch successful Feb 13 15:35:38.911528 containerd[1479]: time="2025-02-13T15:35:38.911489600Z" level=info msg="metadata content store policy set" policy=shared Feb 13 15:35:38.911844 unknown[1525]: wrote ssh authorized keys file for user: core Feb 13 15:35:38.923559 locksmithd[1497]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Feb 13 15:35:38.925500 containerd[1479]: time="2025-02-13T15:35:38.924235240Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Feb 13 15:35:38.925500 containerd[1479]: time="2025-02-13T15:35:38.924318480Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Feb 13 15:35:38.925500 containerd[1479]: time="2025-02-13T15:35:38.924339280Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Feb 13 15:35:38.925500 containerd[1479]: time="2025-02-13T15:35:38.924359720Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Feb 13 15:35:38.925500 containerd[1479]: time="2025-02-13T15:35:38.924397160Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." 
type=io.containerd.runtime.v1 Feb 13 15:35:38.925500 containerd[1479]: time="2025-02-13T15:35:38.924574920Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Feb 13 15:35:38.928785 containerd[1479]: time="2025-02-13T15:35:38.928735080Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Feb 13 15:35:38.929769 containerd[1479]: time="2025-02-13T15:35:38.929735600Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Feb 13 15:35:38.929816 containerd[1479]: time="2025-02-13T15:35:38.929773480Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Feb 13 15:35:38.929816 containerd[1479]: time="2025-02-13T15:35:38.929791200Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Feb 13 15:35:38.929875 containerd[1479]: time="2025-02-13T15:35:38.929819160Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Feb 13 15:35:38.929875 containerd[1479]: time="2025-02-13T15:35:38.929834520Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Feb 13 15:35:38.929875 containerd[1479]: time="2025-02-13T15:35:38.929847400Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Feb 13 15:35:38.929875 containerd[1479]: time="2025-02-13T15:35:38.929862480Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Feb 13 15:35:38.929939 containerd[1479]: time="2025-02-13T15:35:38.929877680Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." 
type=io.containerd.service.v1 Feb 13 15:35:38.929939 containerd[1479]: time="2025-02-13T15:35:38.929898320Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Feb 13 15:35:38.929939 containerd[1479]: time="2025-02-13T15:35:38.929911280Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Feb 13 15:35:38.929939 containerd[1479]: time="2025-02-13T15:35:38.929922320Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Feb 13 15:35:38.930010 containerd[1479]: time="2025-02-13T15:35:38.929945160Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Feb 13 15:35:38.932160 containerd[1479]: time="2025-02-13T15:35:38.931245880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Feb 13 15:35:38.932160 containerd[1479]: time="2025-02-13T15:35:38.931308520Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Feb 13 15:35:38.932160 containerd[1479]: time="2025-02-13T15:35:38.931331160Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Feb 13 15:35:38.932160 containerd[1479]: time="2025-02-13T15:35:38.931345160Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Feb 13 15:35:38.932160 containerd[1479]: time="2025-02-13T15:35:38.931437080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Feb 13 15:35:38.932160 containerd[1479]: time="2025-02-13T15:35:38.931519160Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Feb 13 15:35:38.932160 containerd[1479]: time="2025-02-13T15:35:38.931542160Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." 
type=io.containerd.grpc.v1 Feb 13 15:35:38.932160 containerd[1479]: time="2025-02-13T15:35:38.931556560Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Feb 13 15:35:38.932160 containerd[1479]: time="2025-02-13T15:35:38.931574560Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Feb 13 15:35:38.932160 containerd[1479]: time="2025-02-13T15:35:38.931596120Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Feb 13 15:35:38.932160 containerd[1479]: time="2025-02-13T15:35:38.931612560Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Feb 13 15:35:38.932160 containerd[1479]: time="2025-02-13T15:35:38.931627240Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Feb 13 15:35:38.932160 containerd[1479]: time="2025-02-13T15:35:38.931642920Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Feb 13 15:35:38.932160 containerd[1479]: time="2025-02-13T15:35:38.931678600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Feb 13 15:35:38.932160 containerd[1479]: time="2025-02-13T15:35:38.931739360Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Feb 13 15:35:38.932479 containerd[1479]: time="2025-02-13T15:35:38.931773720Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Feb 13 15:35:38.932479 containerd[1479]: time="2025-02-13T15:35:38.932071480Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Feb 13 15:35:38.932479 containerd[1479]: time="2025-02-13T15:35:38.932106160Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Feb 13 15:35:38.932479 containerd[1479]: time="2025-02-13T15:35:38.932118680Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Feb 13 15:35:38.932732 containerd[1479]: time="2025-02-13T15:35:38.932131800Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Feb 13 15:35:38.932762 containerd[1479]: time="2025-02-13T15:35:38.932730840Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Feb 13 15:35:38.932785 containerd[1479]: time="2025-02-13T15:35:38.932752360Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Feb 13 15:35:38.932785 containerd[1479]: time="2025-02-13T15:35:38.932777920Z" level=info msg="NRI interface is disabled by configuration." Feb 13 15:35:38.932822 containerd[1479]: time="2025-02-13T15:35:38.932795480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Feb 13 15:35:38.933476 containerd[1479]: time="2025-02-13T15:35:38.933405880Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false 
UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Feb 13 15:35:38.933624 containerd[1479]: time="2025-02-13T15:35:38.933477320Z" level=info msg="Connect containerd service" Feb 13 15:35:38.933624 containerd[1479]: time="2025-02-13T15:35:38.933537720Z" level=info msg="using legacy CRI server" Feb 13 15:35:38.933624 containerd[1479]: time="2025-02-13T15:35:38.933546400Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Feb 13 15:35:38.933862 containerd[1479]: time="2025-02-13T15:35:38.933836000Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Feb 13 15:35:38.938152 containerd[1479]: time="2025-02-13T15:35:38.935435480Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 13 15:35:38.938152 containerd[1479]: time="2025-02-13T15:35:38.935655680Z" level=info msg="Start subscribing containerd event" Feb 13 15:35:38.938152 containerd[1479]: time="2025-02-13T15:35:38.935724040Z" level=info msg="Start recovering state" Feb 13 15:35:38.938152 containerd[1479]: time="2025-02-13T15:35:38.935806360Z" level=info msg="Start event monitor" Feb 13 15:35:38.938152 containerd[1479]: time="2025-02-13T15:35:38.935818840Z" level=info msg="Start 
snapshots syncer" Feb 13 15:35:38.938152 containerd[1479]: time="2025-02-13T15:35:38.935828480Z" level=info msg="Start cni network conf syncer for default" Feb 13 15:35:38.938152 containerd[1479]: time="2025-02-13T15:35:38.935838040Z" level=info msg="Start streaming server" Feb 13 15:35:38.938152 containerd[1479]: time="2025-02-13T15:35:38.936084040Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Feb 13 15:35:38.938152 containerd[1479]: time="2025-02-13T15:35:38.936163240Z" level=info msg=serving... address=/run/containerd/containerd.sock Feb 13 15:35:38.938152 containerd[1479]: time="2025-02-13T15:35:38.936267720Z" level=info msg="containerd successfully booted in 0.112977s" Feb 13 15:35:38.938835 systemd[1]: Started containerd.service - containerd container runtime. Feb 13 15:35:38.955759 update-ssh-keys[1536]: Updated "/home/core/.ssh/authorized_keys" Feb 13 15:35:38.958208 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Feb 13 15:35:38.964361 systemd[1]: Finished sshkeys.service. Feb 13 15:35:38.981918 sshd_keygen[1481]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Feb 13 15:35:39.004084 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Feb 13 15:35:39.014910 systemd[1]: Starting issuegen.service - Generate /run/issue... Feb 13 15:35:39.027110 systemd[1]: issuegen.service: Deactivated successfully. Feb 13 15:35:39.027529 systemd[1]: Finished issuegen.service - Generate /run/issue. Feb 13 15:35:39.038080 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Feb 13 15:35:39.049332 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Feb 13 15:35:39.059356 systemd[1]: Started getty@tty1.service - Getty on tty1. Feb 13 15:35:39.068655 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Feb 13 15:35:39.070486 systemd[1]: Reached target getty.target - Login Prompts. 
Feb 13 15:35:39.110369 systemd-networkd[1369]: eth0: Gained IPv6LL Feb 13 15:35:39.111015 systemd-timesyncd[1350]: Network configuration changed, trying to establish connection. Feb 13 15:35:39.117100 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Feb 13 15:35:39.119197 systemd[1]: Reached target network-online.target - Network is Online. Feb 13 15:35:39.131400 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:35:39.134546 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Feb 13 15:35:39.181333 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Feb 13 15:35:39.245756 tar[1462]: linux-arm64/LICENSE Feb 13 15:35:39.245847 tar[1462]: linux-arm64/README.md Feb 13 15:35:39.259204 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Feb 13 15:35:39.750672 systemd-networkd[1369]: eth1: Gained IPv6LL Feb 13 15:35:39.751597 systemd-timesyncd[1350]: Network configuration changed, trying to establish connection. Feb 13 15:35:39.890502 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:35:39.892898 systemd[1]: Reached target multi-user.target - Multi-User System. Feb 13 15:35:39.897621 systemd[1]: Startup finished in 821ms (kernel) + 5.905s (initrd) + 4.331s (userspace) = 11.058s. 
Feb 13 15:35:39.902784 (kubelet)[1577]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 15:35:40.524101 kubelet[1577]: E0213 15:35:40.524010 1577 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 15:35:40.526599 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 15:35:40.526760 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 15:35:50.662214 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Feb 13 15:35:50.673653 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:35:50.788375 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:35:50.793928 (kubelet)[1597]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 15:35:50.846768 kubelet[1597]: E0213 15:35:50.846719 1597 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 15:35:50.852045 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 15:35:50.852331 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 15:36:00.911734 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Feb 13 15:36:00.921525 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Feb 13 15:36:01.040531 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:36:01.042526 (kubelet)[1613]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 15:36:01.092084 kubelet[1613]: E0213 15:36:01.092009 1613 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 15:36:01.094411 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 15:36:01.094562 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 15:36:09.487399 systemd-resolved[1329]: Clock change detected. Flushing caches. Feb 13 15:36:09.487674 systemd-timesyncd[1350]: Contacted time server 217.14.146.53:123 (2.flatcar.pool.ntp.org). Feb 13 15:36:09.487777 systemd-timesyncd[1350]: Initial clock synchronization to Thu 2025-02-13 15:36:09.487295 UTC. Feb 13 15:36:10.724243 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Feb 13 15:36:10.732838 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:36:10.852253 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Feb 13 15:36:10.863947 (kubelet)[1629]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 15:36:10.923738 kubelet[1629]: E0213 15:36:10.923668 1629 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 15:36:10.927068 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 15:36:10.927928 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 15:36:20.049538 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Feb 13 15:36:20.055785 systemd[1]: Started sshd@0-78.46.147.231:22-183.63.103.84:34957.service - OpenSSH per-connection server daemon (183.63.103.84:34957). Feb 13 15:36:20.974634 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Feb 13 15:36:20.988386 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:36:21.135332 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Feb 13 15:36:21.141175 (kubelet)[1647]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 15:36:21.188948 kubelet[1647]: E0213 15:36:21.188846 1647 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 15:36:21.192332 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 15:36:21.192518 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 15:36:23.448456 update_engine[1456]: I20250213 15:36:23.448318 1456 update_attempter.cc:509] Updating boot flags... Feb 13 15:36:23.497503 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 41 scanned by (udev-worker) (1664) Feb 13 15:36:23.558493 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 41 scanned by (udev-worker) (1660) Feb 13 15:36:23.616515 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 41 scanned by (udev-worker) (1660) Feb 13 15:36:31.225396 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Feb 13 15:36:31.231980 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:36:31.379712 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Feb 13 15:36:31.381186 (kubelet)[1684]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 15:36:31.429549 kubelet[1684]: E0213 15:36:31.429486 1684 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 15:36:31.432718 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 15:36:31.432856 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 15:36:41.475208 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6. Feb 13 15:36:41.488965 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:36:41.624769 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:36:41.626236 (kubelet)[1700]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 15:36:41.683143 kubelet[1700]: E0213 15:36:41.683087 1700 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 15:36:41.686659 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 15:36:41.686926 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 15:36:41.990828 systemd[1]: Started sshd@1-78.46.147.231:22-39.99.212.219:42420.service - OpenSSH per-connection server daemon (39.99.212.219:42420). 
Feb 13 15:36:51.724609 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7. Feb 13 15:36:51.732908 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:36:51.864370 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:36:51.877169 (kubelet)[1719]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 15:36:51.932641 kubelet[1719]: E0213 15:36:51.932544 1719 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 15:36:51.934902 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 15:36:51.935045 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 15:37:01.975024 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 8. Feb 13 15:37:01.982148 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:37:02.109276 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Feb 13 15:37:02.115225 (kubelet)[1735]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Feb 13 15:37:02.167895 kubelet[1735]: E0213 15:37:02.167728 1735 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Feb 13 15:37:02.170147 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Feb 13 15:37:02.170289 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Feb 13 15:37:12.224642 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 9.
Feb 13 15:37:12.230912 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 15:37:12.362747 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 15:37:12.367314 (kubelet)[1751]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Feb 13 15:37:12.416813 kubelet[1751]: E0213 15:37:12.416661 1751 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Feb 13 15:37:12.418844 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Feb 13 15:37:12.418969 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Feb 13 15:37:13.350803 systemd[1]: Started sshd@2-78.46.147.231:22-101.126.78.108:28018.service - OpenSSH per-connection server daemon (101.126.78.108:28018).
Feb 13 15:37:19.655066 sshd[1760]: kex_exchange_identification: read: Connection reset by peer
Feb 13 15:37:19.655066 sshd[1760]: Connection reset by 101.126.78.108 port 28018
Feb 13 15:37:19.656334 systemd[1]: sshd@2-78.46.147.231:22-101.126.78.108:28018.service: Deactivated successfully.
Feb 13 15:37:22.475069 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 10.
Feb 13 15:37:22.487741 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 15:37:22.622668 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 15:37:22.639528 (kubelet)[1771]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Feb 13 15:37:22.693085 kubelet[1771]: E0213 15:37:22.692896 1771 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Feb 13 15:37:22.695357 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Feb 13 15:37:22.695603 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Feb 13 15:37:25.444054 systemd[1]: Started sshd@3-78.46.147.231:22-39.99.212.219:57614.service - OpenSSH per-connection server daemon (39.99.212.219:57614).
Feb 13 15:37:28.753668 systemd[1]: Started sshd@4-78.46.147.231:22-183.63.103.84:14697.service - OpenSSH per-connection server daemon (183.63.103.84:14697).
Feb 13 15:37:32.724231 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 11.
Feb 13 15:37:32.734862 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 15:37:32.860555 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 15:37:32.875091 (kubelet)[1791]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Feb 13 15:37:32.927092 kubelet[1791]: E0213 15:37:32.927030 1791 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Feb 13 15:37:32.930030 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Feb 13 15:37:32.930191 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Feb 13 15:37:33.622945 systemd[1]: Started sshd@5-78.46.147.231:22-139.178.89.65:43710.service - OpenSSH per-connection server daemon (139.178.89.65:43710).
Feb 13 15:37:34.615377 sshd[1800]: Accepted publickey for core from 139.178.89.65 port 43710 ssh2: RSA SHA256:gOK0CXwz1tCyASFR4SaHsklqtVIpFWLnYpEIAs0TKRk
Feb 13 15:37:34.619888 sshd-session[1800]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:37:34.633298 systemd-logind[1455]: New session 1 of user core.
Feb 13 15:37:34.633807 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Feb 13 15:37:34.640615 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Feb 13 15:37:34.655200 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Feb 13 15:37:34.663960 systemd[1]: Starting user@500.service - User Manager for UID 500...
Feb 13 15:37:34.668420 (systemd)[1804]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Feb 13 15:37:34.785819 systemd[1804]: Queued start job for default target default.target.
Feb 13 15:37:34.799886 systemd[1804]: Created slice app.slice - User Application Slice.
Feb 13 15:37:34.799977 systemd[1804]: Reached target paths.target - Paths.
Feb 13 15:37:34.800010 systemd[1804]: Reached target timers.target - Timers.
Feb 13 15:37:34.802850 systemd[1804]: Starting dbus.socket - D-Bus User Message Bus Socket...
Feb 13 15:37:34.820407 systemd[1804]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Feb 13 15:37:34.820499 systemd[1804]: Reached target sockets.target - Sockets.
Feb 13 15:37:34.820511 systemd[1804]: Reached target basic.target - Basic System.
Feb 13 15:37:34.820557 systemd[1804]: Reached target default.target - Main User Target.
Feb 13 15:37:34.820585 systemd[1804]: Startup finished in 144ms.
Feb 13 15:37:34.820702 systemd[1]: Started user@500.service - User Manager for UID 500.
Feb 13 15:37:34.832821 systemd[1]: Started session-1.scope - Session 1 of User core.
Feb 13 15:37:35.537830 systemd[1]: Started sshd@6-78.46.147.231:22-139.178.89.65:39222.service - OpenSSH per-connection server daemon (139.178.89.65:39222).
Feb 13 15:37:36.529663 sshd[1815]: Accepted publickey for core from 139.178.89.65 port 39222 ssh2: RSA SHA256:gOK0CXwz1tCyASFR4SaHsklqtVIpFWLnYpEIAs0TKRk
Feb 13 15:37:36.531970 sshd-session[1815]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:37:36.539164 systemd-logind[1455]: New session 2 of user core.
Feb 13 15:37:36.545803 systemd[1]: Started session-2.scope - Session 2 of User core.
Feb 13 15:37:37.218355 sshd[1817]: Connection closed by 139.178.89.65 port 39222
Feb 13 15:37:37.220244 sshd-session[1815]: pam_unix(sshd:session): session closed for user core
Feb 13 15:37:37.224809 systemd-logind[1455]: Session 2 logged out. Waiting for processes to exit.
Feb 13 15:37:37.225603 systemd[1]: sshd@6-78.46.147.231:22-139.178.89.65:39222.service: Deactivated successfully.
Feb 13 15:37:37.227686 systemd[1]: session-2.scope: Deactivated successfully.
Feb 13 15:37:37.229059 systemd-logind[1455]: Removed session 2.
Feb 13 15:37:37.393478 systemd[1]: Started sshd@7-78.46.147.231:22-139.178.89.65:39234.service - OpenSSH per-connection server daemon (139.178.89.65:39234).
Feb 13 15:37:38.382834 sshd[1822]: Accepted publickey for core from 139.178.89.65 port 39234 ssh2: RSA SHA256:gOK0CXwz1tCyASFR4SaHsklqtVIpFWLnYpEIAs0TKRk
Feb 13 15:37:38.385498 sshd-session[1822]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:37:38.393980 systemd-logind[1455]: New session 3 of user core.
Feb 13 15:37:38.400792 systemd[1]: Started session-3.scope - Session 3 of User core.
Feb 13 15:37:39.061854 sshd[1824]: Connection closed by 139.178.89.65 port 39234
Feb 13 15:37:39.063189 sshd-session[1822]: pam_unix(sshd:session): session closed for user core
Feb 13 15:37:39.069127 systemd-logind[1455]: Session 3 logged out. Waiting for processes to exit.
Feb 13 15:37:39.070064 systemd[1]: sshd@7-78.46.147.231:22-139.178.89.65:39234.service: Deactivated successfully.
Feb 13 15:37:39.072380 systemd[1]: session-3.scope: Deactivated successfully.
Feb 13 15:37:39.073801 systemd-logind[1455]: Removed session 3.
Feb 13 15:37:39.236860 systemd[1]: Started sshd@8-78.46.147.231:22-139.178.89.65:39244.service - OpenSSH per-connection server daemon (139.178.89.65:39244).
Feb 13 15:37:40.249330 sshd[1829]: Accepted publickey for core from 139.178.89.65 port 39244 ssh2: RSA SHA256:gOK0CXwz1tCyASFR4SaHsklqtVIpFWLnYpEIAs0TKRk
Feb 13 15:37:40.252978 sshd-session[1829]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:37:40.259602 systemd-logind[1455]: New session 4 of user core.
Feb 13 15:37:40.266919 systemd[1]: Started session-4.scope - Session 4 of User core.
Feb 13 15:37:40.941989 sshd[1831]: Connection closed by 139.178.89.65 port 39244
Feb 13 15:37:40.941341 sshd-session[1829]: pam_unix(sshd:session): session closed for user core
Feb 13 15:37:40.945173 systemd[1]: sshd@8-78.46.147.231:22-139.178.89.65:39244.service: Deactivated successfully.
Feb 13 15:37:40.947610 systemd[1]: session-4.scope: Deactivated successfully.
Feb 13 15:37:40.950611 systemd-logind[1455]: Session 4 logged out. Waiting for processes to exit.
Feb 13 15:37:40.952188 systemd-logind[1455]: Removed session 4.
Feb 13 15:37:41.110853 systemd[1]: Started sshd@9-78.46.147.231:22-139.178.89.65:39260.service - OpenSSH per-connection server daemon (139.178.89.65:39260).
Feb 13 15:37:42.100029 sshd[1836]: Accepted publickey for core from 139.178.89.65 port 39260 ssh2: RSA SHA256:gOK0CXwz1tCyASFR4SaHsklqtVIpFWLnYpEIAs0TKRk
Feb 13 15:37:42.102082 sshd-session[1836]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:37:42.109944 systemd-logind[1455]: New session 5 of user core.
Feb 13 15:37:42.117785 systemd[1]: Started session-5.scope - Session 5 of User core.
Feb 13 15:37:42.636142 sudo[1839]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Feb 13 15:37:42.637047 sudo[1839]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Feb 13 15:37:42.656065 sudo[1839]: pam_unix(sudo:session): session closed for user root
Feb 13 15:37:42.816462 sshd[1838]: Connection closed by 139.178.89.65 port 39260
Feb 13 15:37:42.817962 sshd-session[1836]: pam_unix(sshd:session): session closed for user core
Feb 13 15:37:42.824332 systemd[1]: sshd@9-78.46.147.231:22-139.178.89.65:39260.service: Deactivated successfully.
Feb 13 15:37:42.828403 systemd[1]: session-5.scope: Deactivated successfully.
Feb 13 15:37:42.832239 systemd-logind[1455]: Session 5 logged out. Waiting for processes to exit.
Feb 13 15:37:42.833574 systemd-logind[1455]: Removed session 5.
Feb 13 15:37:42.974908 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 12.
Feb 13 15:37:42.981907 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 15:37:42.989335 systemd[1]: Started sshd@10-78.46.147.231:22-139.178.89.65:39262.service - OpenSSH per-connection server daemon (139.178.89.65:39262).
Feb 13 15:37:43.117694 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 15:37:43.136431 (kubelet)[1854]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Feb 13 15:37:43.186107 kubelet[1854]: E0213 15:37:43.186043 1854 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Feb 13 15:37:43.189300 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Feb 13 15:37:43.189524 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Feb 13 15:37:43.992576 sshd[1845]: Accepted publickey for core from 139.178.89.65 port 39262 ssh2: RSA SHA256:gOK0CXwz1tCyASFR4SaHsklqtVIpFWLnYpEIAs0TKRk
Feb 13 15:37:43.994962 sshd-session[1845]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:37:44.002211 systemd-logind[1455]: New session 6 of user core.
Feb 13 15:37:44.011863 systemd[1]: Started session-6.scope - Session 6 of User core.
Feb 13 15:37:44.525022 sudo[1864]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Feb 13 15:37:44.525374 sudo[1864]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Feb 13 15:37:44.530254 sudo[1864]: pam_unix(sudo:session): session closed for user root
Feb 13 15:37:44.537115 sudo[1863]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Feb 13 15:37:44.537502 sudo[1863]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Feb 13 15:37:44.553795 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Feb 13 15:37:44.596871 augenrules[1886]: No rules
Feb 13 15:37:44.598183 systemd[1]: audit-rules.service: Deactivated successfully.
Feb 13 15:37:44.598368 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Feb 13 15:37:44.600039 sudo[1863]: pam_unix(sudo:session): session closed for user root
Feb 13 15:37:44.761253 sshd[1862]: Connection closed by 139.178.89.65 port 39262
Feb 13 15:37:44.762340 sshd-session[1845]: pam_unix(sshd:session): session closed for user core
Feb 13 15:37:44.769547 systemd-logind[1455]: Session 6 logged out. Waiting for processes to exit.
Feb 13 15:37:44.769821 systemd[1]: sshd@10-78.46.147.231:22-139.178.89.65:39262.service: Deactivated successfully.
Feb 13 15:37:44.772349 systemd[1]: session-6.scope: Deactivated successfully.
Feb 13 15:37:44.773361 systemd-logind[1455]: Removed session 6.
Feb 13 15:37:44.932346 systemd[1]: Started sshd@11-78.46.147.231:22-139.178.89.65:46664.service - OpenSSH per-connection server daemon (139.178.89.65:46664).
Feb 13 15:37:45.916104 sshd[1894]: Accepted publickey for core from 139.178.89.65 port 46664 ssh2: RSA SHA256:gOK0CXwz1tCyASFR4SaHsklqtVIpFWLnYpEIAs0TKRk
Feb 13 15:37:45.918056 sshd-session[1894]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:37:45.924294 systemd-logind[1455]: New session 7 of user core.
Feb 13 15:37:45.934867 systemd[1]: Started session-7.scope - Session 7 of User core.
Feb 13 15:37:46.437334 sudo[1897]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Feb 13 15:37:46.437672 sudo[1897]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Feb 13 15:37:46.787924 systemd[1]: Starting docker.service - Docker Application Container Engine...
Feb 13 15:37:46.789405 (dockerd)[1915]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Feb 13 15:37:47.091783 dockerd[1915]: time="2025-02-13T15:37:47.091211671Z" level=info msg="Starting up"
Feb 13 15:37:47.241500 dockerd[1915]: time="2025-02-13T15:37:47.240680902Z" level=info msg="Loading containers: start."
Feb 13 15:37:47.475613 kernel: Initializing XFRM netlink socket
Feb 13 15:37:47.589570 systemd-networkd[1369]: docker0: Link UP
Feb 13 15:37:47.642738 dockerd[1915]: time="2025-02-13T15:37:47.642672006Z" level=info msg="Loading containers: done."
Feb 13 15:37:47.666275 dockerd[1915]: time="2025-02-13T15:37:47.665794379Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Feb 13 15:37:47.666275 dockerd[1915]: time="2025-02-13T15:37:47.665972542Z" level=info msg="Docker daemon" commit=8b539b8df24032dabeaaa099cf1d0535ef0286a3 containerd-snapshotter=false storage-driver=overlay2 version=27.2.1
Feb 13 15:37:47.667324 dockerd[1915]: time="2025-02-13T15:37:47.666855678Z" level=info msg="Daemon has completed initialization"
Feb 13 15:37:47.726531 dockerd[1915]: time="2025-02-13T15:37:47.726339741Z" level=info msg="API listen on /run/docker.sock"
Feb 13 15:37:47.727140 systemd[1]: Started docker.service - Docker Application Container Engine.
Feb 13 15:37:48.953358 containerd[1479]: time="2025-02-13T15:37:48.953210875Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.10\""
Feb 13 15:37:49.626324 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount322893556.mount: Deactivated successfully.
Feb 13 15:37:50.581474 containerd[1479]: time="2025-02-13T15:37:50.581224898Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:37:50.582568 containerd[1479]: time="2025-02-13T15:37:50.582514479Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.10: active requests=0, bytes read=29865299"
Feb 13 15:37:50.583778 containerd[1479]: time="2025-02-13T15:37:50.583694138Z" level=info msg="ImageCreate event name:\"sha256:deaeae5e8513d8c5921aee5b515f0fc2ac63b71dfe965318f71eb49468e74a4f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:37:50.591077 containerd[1479]: time="2025-02-13T15:37:50.590365728Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:63b2b4b4e9b5dcb5b1b6cec9d5f5f538791a40cd8cb273ef530e6d6535aa0b43\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:37:50.592319 containerd[1479]: time="2025-02-13T15:37:50.592249959Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.10\" with image id \"sha256:deaeae5e8513d8c5921aee5b515f0fc2ac63b71dfe965318f71eb49468e74a4f\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.10\", repo digest \"registry.k8s.io/kube-apiserver@sha256:63b2b4b4e9b5dcb5b1b6cec9d5f5f538791a40cd8cb273ef530e6d6535aa0b43\", size \"29862007\" in 1.63886756s"
Feb 13 15:37:50.592319 containerd[1479]: time="2025-02-13T15:37:50.592315560Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.10\" returns image reference \"sha256:deaeae5e8513d8c5921aee5b515f0fc2ac63b71dfe965318f71eb49468e74a4f\""
Feb 13 15:37:50.621388 containerd[1479]: time="2025-02-13T15:37:50.621340117Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.10\""
Feb 13 15:37:51.936482 containerd[1479]: time="2025-02-13T15:37:51.936383596Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:37:51.939559 containerd[1479]: time="2025-02-13T15:37:51.939484765Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.10: active requests=0, bytes read=26898614"
Feb 13 15:37:51.942652 containerd[1479]: time="2025-02-13T15:37:51.942592375Z" level=info msg="ImageCreate event name:\"sha256:e31753dd49b05da8fcb7deb26f2a5942a6747a0e6d4492f3dc8544123b97a3a2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:37:51.950499 containerd[1479]: time="2025-02-13T15:37:51.950260657Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:99b3336343ea48be24f1e64774825e9f8d5170bd2ed482ff336548eb824f5f58\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:37:51.951926 containerd[1479]: time="2025-02-13T15:37:51.951867683Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.10\" with image id \"sha256:e31753dd49b05da8fcb7deb26f2a5942a6747a0e6d4492f3dc8544123b97a3a2\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.10\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:99b3336343ea48be24f1e64774825e9f8d5170bd2ed482ff336548eb824f5f58\", size \"28302323\" in 1.330169321s"
Feb 13 15:37:51.952185 containerd[1479]: time="2025-02-13T15:37:51.952066446Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.10\" returns image reference \"sha256:e31753dd49b05da8fcb7deb26f2a5942a6747a0e6d4492f3dc8544123b97a3a2\""
Feb 13 15:37:51.981711 containerd[1479]: time="2025-02-13T15:37:51.981342394Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.10\""
Feb 13 15:37:52.952663 containerd[1479]: time="2025-02-13T15:37:52.951868534Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:37:52.958652 containerd[1479]: time="2025-02-13T15:37:52.958380075Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.10: active requests=0, bytes read=16164954"
Feb 13 15:37:52.962480 containerd[1479]: time="2025-02-13T15:37:52.962359697Z" level=info msg="ImageCreate event name:\"sha256:ea60c047fad7c01bf50f1f0259a4aeea2cc4401850d5a95802cc1d07d9021eb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:37:52.973098 containerd[1479]: time="2025-02-13T15:37:52.972858340Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:cf7eb256192f1f51093fe278c209a9368f0675eb61ed01b148af47d2f21c002d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:37:52.975250 containerd[1479]: time="2025-02-13T15:37:52.975073135Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.10\" with image id \"sha256:ea60c047fad7c01bf50f1f0259a4aeea2cc4401850d5a95802cc1d07d9021eb4\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.10\", repo digest \"registry.k8s.io/kube-scheduler@sha256:cf7eb256192f1f51093fe278c209a9368f0675eb61ed01b148af47d2f21c002d\", size \"17568681\" in 993.680379ms"
Feb 13 15:37:52.975250 containerd[1479]: time="2025-02-13T15:37:52.975129495Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.10\" returns image reference \"sha256:ea60c047fad7c01bf50f1f0259a4aeea2cc4401850d5a95802cc1d07d9021eb4\""
Feb 13 15:37:53.002748 containerd[1479]: time="2025-02-13T15:37:53.002704124Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.10\""
Feb 13 15:37:53.224516 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 13.
Feb 13 15:37:53.230879 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 15:37:53.381410 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 15:37:53.393365 (kubelet)[2192]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Feb 13 15:37:53.451927 kubelet[2192]: E0213 15:37:53.451805 2192 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Feb 13 15:37:53.455846 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Feb 13 15:37:53.456220 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Feb 13 15:37:54.006533 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount279236202.mount: Deactivated successfully.
Feb 13 15:37:54.396832 containerd[1479]: time="2025-02-13T15:37:54.396594685Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:37:54.398728 containerd[1479]: time="2025-02-13T15:37:54.398463072Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.10: active requests=0, bytes read=25663396"
Feb 13 15:37:54.399975 containerd[1479]: time="2025-02-13T15:37:54.399847573Z" level=info msg="ImageCreate event name:\"sha256:fa8af75a6512774cc93242474a9841ace82a7d0646001149fc65d92a8bb0c00a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:37:54.403549 containerd[1479]: time="2025-02-13T15:37:54.403487026Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:d112e804e548fce28d9f1e3282c9ce54e374451e6a2c41b1ca9d7fca5d1fcc48\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:37:54.405158 containerd[1479]: time="2025-02-13T15:37:54.404883607Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.10\" with image id \"sha256:fa8af75a6512774cc93242474a9841ace82a7d0646001149fc65d92a8bb0c00a\", repo tag \"registry.k8s.io/kube-proxy:v1.30.10\", repo digest \"registry.k8s.io/kube-proxy@sha256:d112e804e548fce28d9f1e3282c9ce54e374451e6a2c41b1ca9d7fca5d1fcc48\", size \"25662389\" in 1.402130363s"
Feb 13 15:37:54.405158 containerd[1479]: time="2025-02-13T15:37:54.405030929Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.10\" returns image reference \"sha256:fa8af75a6512774cc93242474a9841ace82a7d0646001149fc65d92a8bb0c00a\""
Feb 13 15:37:54.432771 containerd[1479]: time="2025-02-13T15:37:54.432714616Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\""
Feb 13 15:37:54.785711 systemd[1]: Started sshd@12-78.46.147.231:22-39.99.212.219:38460.service - OpenSSH per-connection server daemon (39.99.212.219:38460).
Feb 13 15:37:55.040431 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount952550742.mount: Deactivated successfully.
Feb 13 15:37:55.730652 containerd[1479]: time="2025-02-13T15:37:55.730563787Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:37:55.732859 containerd[1479]: time="2025-02-13T15:37:55.732786499Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485461"
Feb 13 15:37:55.734486 containerd[1479]: time="2025-02-13T15:37:55.734400362Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:37:55.739403 containerd[1479]: time="2025-02-13T15:37:55.739300593Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:37:55.741391 containerd[1479]: time="2025-02-13T15:37:55.741200820Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.308414963s"
Feb 13 15:37:55.741391 containerd[1479]: time="2025-02-13T15:37:55.741257701Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\""
Feb 13 15:37:55.766839 containerd[1479]: time="2025-02-13T15:37:55.766781706Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\""
Feb 13 15:37:56.315705 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1374696227.mount: Deactivated successfully.
Feb 13 15:37:56.325010 containerd[1479]: time="2025-02-13T15:37:56.324047285Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:37:56.325686 containerd[1479]: time="2025-02-13T15:37:56.325634787Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=268841"
Feb 13 15:37:56.328049 containerd[1479]: time="2025-02-13T15:37:56.327979340Z" level=info msg="ImageCreate event name:\"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:37:56.331609 containerd[1479]: time="2025-02-13T15:37:56.331524509Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:37:56.333377 containerd[1479]: time="2025-02-13T15:37:56.332770487Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"268051\" in 565.864979ms"
Feb 13 15:37:56.333377 containerd[1479]: time="2025-02-13T15:37:56.332831008Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\""
Feb 13 15:37:56.356352 containerd[1479]: time="2025-02-13T15:37:56.356313935Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\""
Feb 13 15:37:56.944073 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3581821269.mount: Deactivated successfully.
Feb 13 15:37:58.331327 containerd[1479]: time="2025-02-13T15:37:58.329573584Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:37:58.331327 containerd[1479]: time="2025-02-13T15:37:58.331086484Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=66191552"
Feb 13 15:37:58.331327 containerd[1479]: time="2025-02-13T15:37:58.331268686Z" level=info msg="ImageCreate event name:\"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:37:58.334700 containerd[1479]: time="2025-02-13T15:37:58.334652531Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:37:58.336329 containerd[1479]: time="2025-02-13T15:37:58.336289552Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"66189079\" in 1.979935457s"
Feb 13 15:37:58.336517 containerd[1479]: time="2025-02-13T15:37:58.336493595Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\""
Feb 13 15:38:03.475856 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 14.
Feb 13 15:38:03.484803 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 15:38:03.732842 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 15:38:03.734341 (kubelet)[2383]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Feb 13 15:38:03.785269 kubelet[2383]: E0213 15:38:03.785226 2383 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Feb 13 15:38:03.788496 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Feb 13 15:38:03.788785 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Feb 13 15:38:03.975991 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 15:38:03.982982 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 15:38:04.010888 systemd[1]: Reloading requested from client PID 2397 ('systemctl') (unit session-7.scope)...
Feb 13 15:38:04.011075 systemd[1]: Reloading...
Feb 13 15:38:04.146586 zram_generator::config[2448]: No configuration found.
Feb 13 15:38:04.259548 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 13 15:38:04.331690 systemd[1]: Reloading finished in 320 ms.
Feb 13 15:38:04.388710 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Feb 13 15:38:04.388845 systemd[1]: kubelet.service: Failed with result 'signal'.
Feb 13 15:38:04.389243 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 15:38:04.395912 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 15:38:04.517744 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 15:38:04.521727 (kubelet)[2495]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Feb 13 15:38:04.573505 kubelet[2495]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 13 15:38:04.573505 kubelet[2495]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Feb 13 15:38:04.573505 kubelet[2495]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 13 15:38:04.573505 kubelet[2495]: I0213 15:38:04.572582 2495 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Feb 13 15:38:05.361046 kubelet[2495]: I0213 15:38:05.360932 2495 server.go:484] "Kubelet version" kubeletVersion="v1.30.1"
Feb 13 15:38:05.361046 kubelet[2495]: I0213 15:38:05.361031 2495 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Feb 13 15:38:05.361502 kubelet[2495]: I0213 15:38:05.361400 2495 server.go:927] "Client rotation is on, will bootstrap in background"
Feb 13 15:38:05.381502 kubelet[2495]: I0213 15:38:05.381257 2495 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Feb 13 15:38:05.383464 kubelet[2495]: E0213 15:38:05.381731 2495 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://78.46.147.231:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 78.46.147.231:6443: connect: connection refused
Feb 13 15:38:05.392672 kubelet[2495]: I0213 15:38:05.392639 2495 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Feb 13 15:38:05.393314 kubelet[2495]: I0213 15:38:05.393283 2495 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Feb 13 15:38:05.393650 kubelet[2495]: I0213 15:38:05.393385 2495 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4152-2-1-1-287b7b51cc","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Feb 13 15:38:05.393865 kubelet[2495]: I0213 15:38:05.393850 2495 topology_manager.go:138] "Creating topology manager with none policy"
Feb 13 15:38:05.393919 kubelet[2495]: I0213 15:38:05.393910 2495 container_manager_linux.go:301] "Creating device plugin manager"
Feb 13 15:38:05.394284 kubelet[2495]: I0213 15:38:05.394269 2495 state_mem.go:36] "Initialized new in-memory state store"
Feb 13 15:38:05.395651 kubelet[2495]: I0213 15:38:05.395628 2495 kubelet.go:400] "Attempting to sync node with API server"
Feb 13 15:38:05.395742 kubelet[2495]: I0213 15:38:05.395732 2495 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Feb 13 15:38:05.396008 kubelet[2495]: I0213 15:38:05.395995 2495 kubelet.go:312] "Adding apiserver pod source"
Feb 13 15:38:05.396078 kubelet[2495]: I0213 15:38:05.396068 2495 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Feb 13 15:38:05.396756 kubelet[2495]: W0213 15:38:05.396673 2495 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://78.46.147.231:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4152-2-1-1-287b7b51cc&limit=500&resourceVersion=0": dial tcp 78.46.147.231:6443: connect: connection refused
Feb 13 15:38:05.396814 kubelet[2495]: E0213 15:38:05.396780 2495 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://78.46.147.231:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4152-2-1-1-287b7b51cc&limit=500&resourceVersion=0": dial tcp 78.46.147.231:6443: connect: connection refused
Feb 13 15:38:05.397318 kubelet[2495]: W0213 15:38:05.397280 2495 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://78.46.147.231:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 78.46.147.231:6443: connect: connection refused
Feb 13 15:38:05.397479 kubelet[2495]: E0213 15:38:05.397402 2495 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://78.46.147.231:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 78.46.147.231:6443: connect: connection refused
Feb 13 15:38:05.398513 kubelet[2495]: I0213 15:38:05.397696 2495 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1"
Feb 13 15:38:05.398513 kubelet[2495]: I0213 15:38:05.398084 2495 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Feb 13 15:38:05.398513 kubelet[2495]: W0213 15:38:05.398124 2495 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Feb 13 15:38:05.399566 kubelet[2495]: I0213 15:38:05.399545 2495 server.go:1264] "Started kubelet"
Feb 13 15:38:05.404313 kubelet[2495]: I0213 15:38:05.404281 2495 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Feb 13 15:38:05.407595 kubelet[2495]: E0213 15:38:05.407340 2495 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://78.46.147.231:6443/api/v1/namespaces/default/events\": dial tcp 78.46.147.231:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4152-2-1-1-287b7b51cc.1823cea41fda9098 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4152-2-1-1-287b7b51cc,UID:ci-4152-2-1-1-287b7b51cc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4152-2-1-1-287b7b51cc,},FirstTimestamp:2025-02-13 15:38:05.399519384 +0000 UTC m=+0.873201166,LastTimestamp:2025-02-13 15:38:05.399519384 +0000 UTC m=+0.873201166,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4152-2-1-1-287b7b51cc,}"
Feb 13 15:38:05.411670 kubelet[2495]: I0213 15:38:05.411610 2495 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Feb 13 15:38:05.412690 kubelet[2495]: I0213 15:38:05.412654 2495 server.go:455] "Adding debug handlers to kubelet server"
Feb 13 15:38:05.413515 kubelet[2495]: I0213 15:38:05.412927 2495 volume_manager.go:291] "Starting Kubelet Volume Manager"
Feb 13 15:38:05.413635 kubelet[2495]: I0213 15:38:05.413568 2495 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Feb 13 15:38:05.413837 kubelet[2495]: I0213 15:38:05.413811 2495 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Feb 13 15:38:05.414146 kubelet[2495]: E0213 15:38:05.414098 2495 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://78.46.147.231:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4152-2-1-1-287b7b51cc?timeout=10s\": dial tcp 78.46.147.231:6443: connect: connection refused" interval="200ms"
Feb 13 15:38:05.414371 kubelet[2495]: I0213 15:38:05.414347 2495 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
Feb 13 15:38:05.415376 kubelet[2495]: W0213 15:38:05.415328 2495 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://78.46.147.231:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 78.46.147.231:6443: connect: connection refused
Feb 13 15:38:05.415525 kubelet[2495]: E0213 15:38:05.415511 2495 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://78.46.147.231:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 78.46.147.231:6443: connect: connection refused
Feb 13 15:38:05.415899 kubelet[2495]: I0213 15:38:05.415853 2495 factory.go:221] Registration of the systemd container factory successfully
Feb 13 15:38:05.416508 kubelet[2495]: I0213 15:38:05.416107 2495 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Feb 13 15:38:05.416508 kubelet[2495]: E0213 15:38:05.416377 2495 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Feb 13 15:38:05.417502 kubelet[2495]: I0213 15:38:05.417488 2495 reconciler.go:26] "Reconciler: start to sync state"
Feb 13 15:38:05.417760 kubelet[2495]: I0213 15:38:05.417740 2495 factory.go:221] Registration of the containerd container factory successfully
Feb 13 15:38:05.432855 kubelet[2495]: I0213 15:38:05.432811 2495 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Feb 13 15:38:05.434489 kubelet[2495]: I0213 15:38:05.434210 2495 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Feb 13 15:38:05.434489 kubelet[2495]: I0213 15:38:05.434368 2495 status_manager.go:217] "Starting to sync pod status with apiserver"
Feb 13 15:38:05.434489 kubelet[2495]: I0213 15:38:05.434388 2495 kubelet.go:2337] "Starting kubelet main sync loop"
Feb 13 15:38:05.434489 kubelet[2495]: E0213 15:38:05.434448 2495 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Feb 13 15:38:05.442548 kubelet[2495]: W0213 15:38:05.442479 2495 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://78.46.147.231:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 78.46.147.231:6443: connect: connection refused
Feb 13 15:38:05.442548 kubelet[2495]: E0213 15:38:05.442551 2495 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://78.46.147.231:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 78.46.147.231:6443: connect: connection refused
Feb 13 15:38:05.458010 kubelet[2495]: I0213 15:38:05.457947 2495 cpu_manager.go:214] "Starting CPU manager" policy="none"
Feb 13 15:38:05.458010 kubelet[2495]: I0213 15:38:05.458006 2495 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Feb 13 15:38:05.458156 kubelet[2495]: I0213 15:38:05.458043 2495 state_mem.go:36] "Initialized new in-memory state store"
Feb 13 15:38:05.460828 kubelet[2495]: I0213 15:38:05.460786 2495 policy_none.go:49] "None policy: Start"
Feb 13 15:38:05.461950 kubelet[2495]: I0213 15:38:05.461921 2495 memory_manager.go:170] "Starting memorymanager" policy="None"
Feb 13 15:38:05.462084 kubelet[2495]: I0213 15:38:05.462074 2495 state_mem.go:35] "Initializing new in-memory state store"
Feb 13 15:38:05.469421 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Feb 13 15:38:05.485129 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Feb 13 15:38:05.490127 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Feb 13 15:38:05.499123 kubelet[2495]: I0213 15:38:05.498292 2495 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Feb 13 15:38:05.499123 kubelet[2495]: I0213 15:38:05.498600 2495 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Feb 13 15:38:05.499123 kubelet[2495]: I0213 15:38:05.498740 2495 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Feb 13 15:38:05.501869 kubelet[2495]: E0213 15:38:05.501557 2495 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4152-2-1-1-287b7b51cc\" not found"
Feb 13 15:38:05.517194 kubelet[2495]: I0213 15:38:05.517101 2495 kubelet_node_status.go:73] "Attempting to register node" node="ci-4152-2-1-1-287b7b51cc"
Feb 13 15:38:05.517796 kubelet[2495]: E0213 15:38:05.517756 2495 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://78.46.147.231:6443/api/v1/nodes\": dial tcp 78.46.147.231:6443: connect: connection refused" node="ci-4152-2-1-1-287b7b51cc"
Feb 13 15:38:05.535532 kubelet[2495]: I0213 15:38:05.534938 2495 topology_manager.go:215] "Topology Admit Handler" podUID="2b02b9626d3d1ee3180a539c076a773d" podNamespace="kube-system" podName="kube-apiserver-ci-4152-2-1-1-287b7b51cc"
Feb 13 15:38:05.538418 kubelet[2495]: I0213 15:38:05.538369 2495 topology_manager.go:215] "Topology Admit Handler" podUID="a8f96248904a51aeb3af1271a672bda7" podNamespace="kube-system" podName="kube-controller-manager-ci-4152-2-1-1-287b7b51cc"
Feb 13 15:38:05.540752 kubelet[2495]: I0213 15:38:05.540409 2495 topology_manager.go:215] "Topology Admit Handler" podUID="2b172336914cdc5ffce6308f5fbc5890" podNamespace="kube-system" podName="kube-scheduler-ci-4152-2-1-1-287b7b51cc"
Feb 13 15:38:05.548983 systemd[1]: Created slice kubepods-burstable-pod2b02b9626d3d1ee3180a539c076a773d.slice - libcontainer container kubepods-burstable-pod2b02b9626d3d1ee3180a539c076a773d.slice.
Feb 13 15:38:05.563654 systemd[1]: Created slice kubepods-burstable-poda8f96248904a51aeb3af1271a672bda7.slice - libcontainer container kubepods-burstable-poda8f96248904a51aeb3af1271a672bda7.slice.
Feb 13 15:38:05.586185 systemd[1]: Created slice kubepods-burstable-pod2b172336914cdc5ffce6308f5fbc5890.slice - libcontainer container kubepods-burstable-pod2b172336914cdc5ffce6308f5fbc5890.slice.
Feb 13 15:38:05.616231 kubelet[2495]: E0213 15:38:05.615936 2495 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://78.46.147.231:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4152-2-1-1-287b7b51cc?timeout=10s\": dial tcp 78.46.147.231:6443: connect: connection refused" interval="400ms"
Feb 13 15:38:05.618313 kubelet[2495]: I0213 15:38:05.618262 2495 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2b02b9626d3d1ee3180a539c076a773d-ca-certs\") pod \"kube-apiserver-ci-4152-2-1-1-287b7b51cc\" (UID: \"2b02b9626d3d1ee3180a539c076a773d\") " pod="kube-system/kube-apiserver-ci-4152-2-1-1-287b7b51cc"
Feb 13 15:38:05.618313 kubelet[2495]: I0213 15:38:05.618306 2495 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2b02b9626d3d1ee3180a539c076a773d-k8s-certs\") pod \"kube-apiserver-ci-4152-2-1-1-287b7b51cc\" (UID: \"2b02b9626d3d1ee3180a539c076a773d\") " pod="kube-system/kube-apiserver-ci-4152-2-1-1-287b7b51cc"
Feb 13 15:38:05.618489 kubelet[2495]: I0213 15:38:05.618327 2495 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2b02b9626d3d1ee3180a539c076a773d-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4152-2-1-1-287b7b51cc\" (UID: \"2b02b9626d3d1ee3180a539c076a773d\") " pod="kube-system/kube-apiserver-ci-4152-2-1-1-287b7b51cc"
Feb 13 15:38:05.618489 kubelet[2495]: I0213 15:38:05.618353 2495 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a8f96248904a51aeb3af1271a672bda7-k8s-certs\") pod \"kube-controller-manager-ci-4152-2-1-1-287b7b51cc\" (UID: \"a8f96248904a51aeb3af1271a672bda7\") " pod="kube-system/kube-controller-manager-ci-4152-2-1-1-287b7b51cc"
Feb 13 15:38:05.618489 kubelet[2495]: I0213 15:38:05.618373 2495 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a8f96248904a51aeb3af1271a672bda7-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4152-2-1-1-287b7b51cc\" (UID: \"a8f96248904a51aeb3af1271a672bda7\") " pod="kube-system/kube-controller-manager-ci-4152-2-1-1-287b7b51cc"
Feb 13 15:38:05.618489 kubelet[2495]: I0213 15:38:05.618393 2495 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2b172336914cdc5ffce6308f5fbc5890-kubeconfig\") pod \"kube-scheduler-ci-4152-2-1-1-287b7b51cc\" (UID: \"2b172336914cdc5ffce6308f5fbc5890\") " pod="kube-system/kube-scheduler-ci-4152-2-1-1-287b7b51cc"
Feb 13 15:38:05.618489 kubelet[2495]: I0213 15:38:05.618411 2495 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a8f96248904a51aeb3af1271a672bda7-ca-certs\") pod \"kube-controller-manager-ci-4152-2-1-1-287b7b51cc\" (UID: \"a8f96248904a51aeb3af1271a672bda7\") " pod="kube-system/kube-controller-manager-ci-4152-2-1-1-287b7b51cc"
Feb 13 15:38:05.618631 kubelet[2495]: I0213 15:38:05.618430 2495 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/a8f96248904a51aeb3af1271a672bda7-flexvolume-dir\") pod \"kube-controller-manager-ci-4152-2-1-1-287b7b51cc\" (UID: \"a8f96248904a51aeb3af1271a672bda7\") " pod="kube-system/kube-controller-manager-ci-4152-2-1-1-287b7b51cc"
Feb 13 15:38:05.618631 kubelet[2495]: I0213 15:38:05.618466 2495 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a8f96248904a51aeb3af1271a672bda7-kubeconfig\") pod \"kube-controller-manager-ci-4152-2-1-1-287b7b51cc\" (UID: \"a8f96248904a51aeb3af1271a672bda7\") " pod="kube-system/kube-controller-manager-ci-4152-2-1-1-287b7b51cc"
Feb 13 15:38:05.721847 kubelet[2495]: I0213 15:38:05.721318 2495 kubelet_node_status.go:73] "Attempting to register node" node="ci-4152-2-1-1-287b7b51cc"
Feb 13 15:38:05.721847 kubelet[2495]: E0213 15:38:05.721742 2495 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://78.46.147.231:6443/api/v1/nodes\": dial tcp 78.46.147.231:6443: connect: connection refused" node="ci-4152-2-1-1-287b7b51cc"
Feb 13 15:38:05.863731 containerd[1479]: time="2025-02-13T15:38:05.863661361Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4152-2-1-1-287b7b51cc,Uid:2b02b9626d3d1ee3180a539c076a773d,Namespace:kube-system,Attempt:0,}"
Feb 13 15:38:05.881852 containerd[1479]: time="2025-02-13T15:38:05.881699919Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4152-2-1-1-287b7b51cc,Uid:a8f96248904a51aeb3af1271a672bda7,Namespace:kube-system,Attempt:0,}"
Feb 13 15:38:05.892497 containerd[1479]: time="2025-02-13T15:38:05.892348756Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4152-2-1-1-287b7b51cc,Uid:2b172336914cdc5ffce6308f5fbc5890,Namespace:kube-system,Attempt:0,}"
Feb 13 15:38:06.017117 kubelet[2495]: E0213 15:38:06.017066 2495 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://78.46.147.231:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4152-2-1-1-287b7b51cc?timeout=10s\": dial tcp 78.46.147.231:6443: connect: connection refused" interval="800ms"
Feb 13 15:38:06.124368 kubelet[2495]: I0213 15:38:06.124337 2495 kubelet_node_status.go:73] "Attempting to register node" node="ci-4152-2-1-1-287b7b51cc"
Feb 13 15:38:06.124746 kubelet[2495]: E0213 15:38:06.124721 2495 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://78.46.147.231:6443/api/v1/nodes\": dial tcp 78.46.147.231:6443: connect: connection refused" node="ci-4152-2-1-1-287b7b51cc"
Feb 13 15:38:06.244467 kubelet[2495]: W0213 15:38:06.244329 2495 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://78.46.147.231:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 78.46.147.231:6443: connect: connection refused
Feb 13 15:38:06.244467 kubelet[2495]: E0213 15:38:06.244412 2495 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://78.46.147.231:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 78.46.147.231:6443: connect: connection refused
Feb 13 15:38:06.330521 systemd[1]: Started sshd@13-78.46.147.231:22-101.126.78.108:36858.service - OpenSSH per-connection server daemon (101.126.78.108:36858).
Feb 13 15:38:06.399842 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount263997780.mount: Deactivated successfully.
Feb 13 15:38:06.408572 containerd[1479]: time="2025-02-13T15:38:06.407730302Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Feb 13 15:38:06.411172 containerd[1479]: time="2025-02-13T15:38:06.411121098Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269193"
Feb 13 15:38:06.411804 containerd[1479]: time="2025-02-13T15:38:06.411778065Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Feb 13 15:38:06.414172 containerd[1479]: time="2025-02-13T15:38:06.414133290Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Feb 13 15:38:06.416125 containerd[1479]: time="2025-02-13T15:38:06.416077871Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Feb 13 15:38:06.416930 containerd[1479]: time="2025-02-13T15:38:06.416341394Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Feb 13 15:38:06.416930 containerd[1479]: time="2025-02-13T15:38:06.416359154Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Feb 13 15:38:06.420056 containerd[1479]: time="2025-02-13T15:38:06.419996473Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Feb 13 15:38:06.421306 containerd[1479]: time="2025-02-13T15:38:06.421033884Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 528.601687ms"
Feb 13 15:38:06.422809 containerd[1479]: time="2025-02-13T15:38:06.422747822Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 540.176574ms"
Feb 13 15:38:06.425429 containerd[1479]: time="2025-02-13T15:38:06.425372050Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 561.576368ms"
Feb 13 15:38:06.490096 kubelet[2495]: W0213 15:38:06.489808 2495 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://78.46.147.231:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 78.46.147.231:6443: connect: connection refused
Feb 13 15:38:06.490096 kubelet[2495]: E0213 15:38:06.490043 2495 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://78.46.147.231:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 78.46.147.231:6443: connect: connection refused
Feb 13 15:38:06.558186 containerd[1479]: time="2025-02-13T15:38:06.557394743Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 15:38:06.558415 containerd[1479]: time="2025-02-13T15:38:06.557602665Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 15:38:06.558415 containerd[1479]: time="2025-02-13T15:38:06.557630186Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:38:06.559745 containerd[1479]: time="2025-02-13T15:38:06.559628647Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 15:38:06.559900 containerd[1479]: time="2025-02-13T15:38:06.559704648Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 15:38:06.559900 containerd[1479]: time="2025-02-13T15:38:06.559720168Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:38:06.560085 containerd[1479]: time="2025-02-13T15:38:06.559833809Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:38:06.561037 containerd[1479]: time="2025-02-13T15:38:06.560867540Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:38:06.565585 containerd[1479]: time="2025-02-13T15:38:06.565414589Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 15:38:06.565926 containerd[1479]: time="2025-02-13T15:38:06.565660072Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 15:38:06.565926 containerd[1479]: time="2025-02-13T15:38:06.565676072Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:38:06.566366 containerd[1479]: time="2025-02-13T15:38:06.566323959Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:38:06.591522 systemd[1]: Started cri-containerd-6b999b8c8a25463674d8aed0b5e1882e48ac1d44e9c726c7629c8cbfe0f3c733.scope - libcontainer container 6b999b8c8a25463674d8aed0b5e1882e48ac1d44e9c726c7629c8cbfe0f3c733.
Feb 13 15:38:06.603066 systemd[1]: Started cri-containerd-45e225e8f5689936a08578b526764d801fde64b39d46055234842e04d4959916.scope - libcontainer container 45e225e8f5689936a08578b526764d801fde64b39d46055234842e04d4959916.
Feb 13 15:38:06.606669 systemd[1]: Started cri-containerd-e4353177d61ad0f168a62e04a8dcfbf24b3219981aec80cecacb0e9b777f212b.scope - libcontainer container e4353177d61ad0f168a62e04a8dcfbf24b3219981aec80cecacb0e9b777f212b.
Feb 13 15:38:06.648865 kubelet[2495]: W0213 15:38:06.648760 2495 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://78.46.147.231:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4152-2-1-1-287b7b51cc&limit=500&resourceVersion=0": dial tcp 78.46.147.231:6443: connect: connection refused
Feb 13 15:38:06.649489 kubelet[2495]: E0213 15:38:06.649108 2495 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://78.46.147.231:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4152-2-1-1-287b7b51cc&limit=500&resourceVersion=0": dial tcp 78.46.147.231:6443: connect: connection refused
Feb 13 15:38:06.671480 containerd[1479]: time="2025-02-13T15:38:06.671111040Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4152-2-1-1-287b7b51cc,Uid:2b02b9626d3d1ee3180a539c076a773d,Namespace:kube-system,Attempt:0,} returns sandbox id \"e4353177d61ad0f168a62e04a8dcfbf24b3219981aec80cecacb0e9b777f212b\""
Feb 13 15:38:06.678643 containerd[1479]: time="2025-02-13T15:38:06.678591680Z" level=info msg="CreateContainer within sandbox \"e4353177d61ad0f168a62e04a8dcfbf24b3219981aec80cecacb0e9b777f212b\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Feb 13 15:38:06.679003 containerd[1479]: time="2025-02-13T15:38:06.678688481Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4152-2-1-1-287b7b51cc,Uid:2b172336914cdc5ffce6308f5fbc5890,Namespace:kube-system,Attempt:0,} returns sandbox id \"45e225e8f5689936a08578b526764d801fde64b39d46055234842e04d4959916\""
Feb 13 15:38:06.684487 containerd[1479]: time="2025-02-13T15:38:06.684285221Z" level=info msg="CreateContainer within sandbox \"45e225e8f5689936a08578b526764d801fde64b39d46055234842e04d4959916\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Feb 13 15:38:06.689088 containerd[1479]: time="2025-02-13T15:38:06.688650388Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4152-2-1-1-287b7b51cc,Uid:a8f96248904a51aeb3af1271a672bda7,Namespace:kube-system,Attempt:0,} returns sandbox id \"6b999b8c8a25463674d8aed0b5e1882e48ac1d44e9c726c7629c8cbfe0f3c733\""
Feb 13 15:38:06.693760 containerd[1479]: time="2025-02-13T15:38:06.693711202Z" level=info msg="CreateContainer within sandbox \"6b999b8c8a25463674d8aed0b5e1882e48ac1d44e9c726c7629c8cbfe0f3c733\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Feb 13 15:38:06.705888 containerd[1479]: time="2025-02-13T15:38:06.705833892Z" level=info msg="CreateContainer within sandbox \"45e225e8f5689936a08578b526764d801fde64b39d46055234842e04d4959916\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"632f915e4fefc48f1cc9c6d6710a0d613e6dc5c6e15fd5aa150a776b377979da\""
Feb 13 15:38:06.707485 containerd[1479]: time="2025-02-13T15:38:06.707228147Z" level=info msg="StartContainer for \"632f915e4fefc48f1cc9c6d6710a0d613e6dc5c6e15fd5aa150a776b377979da\""
Feb 13 15:38:06.709501 containerd[1479]: time="2025-02-13T15:38:06.709345769Z" level=info msg="CreateContainer within sandbox \"e4353177d61ad0f168a62e04a8dcfbf24b3219981aec80cecacb0e9b777f212b\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"cbc5c05d37e72132e849a00c26d759f206db25a71ea08a31d31356efb357a8dd\""
Feb 13 15:38:06.713984 containerd[1479]: time="2025-02-13T15:38:06.712760286Z" level=info msg="StartContainer for \"cbc5c05d37e72132e849a00c26d759f206db25a71ea08a31d31356efb357a8dd\""
Feb 13 15:38:06.719263 containerd[1479]: time="2025-02-13T15:38:06.719213795Z" level=info msg="CreateContainer within sandbox \"6b999b8c8a25463674d8aed0b5e1882e48ac1d44e9c726c7629c8cbfe0f3c733\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"d1547955717edf23003ba369f883449f9f09a5becdae7859a5a4d4c43e23ae57\""
Feb 13 15:38:06.719806 containerd[1479]: time="2025-02-13T15:38:06.719777841Z" level=info msg="StartContainer for \"d1547955717edf23003ba369f883449f9f09a5becdae7859a5a4d4c43e23ae57\""
Feb 13 15:38:06.746623 systemd[1]: Started cri-containerd-632f915e4fefc48f1cc9c6d6710a0d613e6dc5c6e15fd5aa150a776b377979da.scope - libcontainer container 632f915e4fefc48f1cc9c6d6710a0d613e6dc5c6e15fd5aa150a776b377979da.
Feb 13 15:38:06.766062 systemd[1]: Started cri-containerd-cbc5c05d37e72132e849a00c26d759f206db25a71ea08a31d31356efb357a8dd.scope - libcontainer container cbc5c05d37e72132e849a00c26d759f206db25a71ea08a31d31356efb357a8dd.
Feb 13 15:38:06.771353 systemd[1]: Started cri-containerd-d1547955717edf23003ba369f883449f9f09a5becdae7859a5a4d4c43e23ae57.scope - libcontainer container d1547955717edf23003ba369f883449f9f09a5becdae7859a5a4d4c43e23ae57.
Feb 13 15:38:06.820693 kubelet[2495]: E0213 15:38:06.819489 2495 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://78.46.147.231:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4152-2-1-1-287b7b51cc?timeout=10s\": dial tcp 78.46.147.231:6443: connect: connection refused" interval="1.6s"
Feb 13 15:38:06.827541 containerd[1479]: time="2025-02-13T15:38:06.826178579Z" level=info msg="StartContainer for \"632f915e4fefc48f1cc9c6d6710a0d613e6dc5c6e15fd5aa150a776b377979da\" returns successfully"
Feb 13 15:38:06.843518 containerd[1479]: time="2025-02-13T15:38:06.842773637Z" level=info msg="StartContainer for \"cbc5c05d37e72132e849a00c26d759f206db25a71ea08a31d31356efb357a8dd\" returns successfully"
Feb 13 15:38:06.851787 containerd[1479]: time="2025-02-13T15:38:06.851617612Z" level=info msg="StartContainer for \"d1547955717edf23003ba369f883449f9f09a5becdae7859a5a4d4c43e23ae57\" returns successfully"
Feb 13 15:38:06.929199 kubelet[2495]: I0213 15:38:06.929135 2495 kubelet_node_status.go:73] "Attempting to register node" node="ci-4152-2-1-1-287b7b51cc"
Feb 13 15:38:06.929607 kubelet[2495]: E0213 15:38:06.929568 2495
kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://78.46.147.231:6443/api/v1/nodes\": dial tcp 78.46.147.231:6443: connect: connection refused" node="ci-4152-2-1-1-287b7b51cc" Feb 13 15:38:06.940136 kubelet[2495]: W0213 15:38:06.940020 2495 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://78.46.147.231:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 78.46.147.231:6443: connect: connection refused Feb 13 15:38:06.940136 kubelet[2495]: E0213 15:38:06.940096 2495 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://78.46.147.231:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 78.46.147.231:6443: connect: connection refused Feb 13 15:38:08.533451 kubelet[2495]: I0213 15:38:08.532402 2495 kubelet_node_status.go:73] "Attempting to register node" node="ci-4152-2-1-1-287b7b51cc" Feb 13 15:38:09.225034 kubelet[2495]: E0213 15:38:09.224925 2495 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4152-2-1-1-287b7b51cc\" not found" node="ci-4152-2-1-1-287b7b51cc" Feb 13 15:38:09.280253 kubelet[2495]: I0213 15:38:09.280207 2495 kubelet_node_status.go:76] "Successfully registered node" node="ci-4152-2-1-1-287b7b51cc" Feb 13 15:38:09.410342 kubelet[2495]: I0213 15:38:09.410302 2495 apiserver.go:52] "Watching apiserver" Feb 13 15:38:09.515228 kubelet[2495]: I0213 15:38:09.515107 2495 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Feb 13 15:38:11.473854 systemd[1]: Reloading requested from client PID 2777 ('systemctl') (unit session-7.scope)... Feb 13 15:38:11.474307 systemd[1]: Reloading... Feb 13 15:38:11.598483 zram_generator::config[2832]: No configuration found. 
Feb 13 15:38:11.711525 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 15:38:11.795725 systemd[1]: Reloading finished in 321 ms. Feb 13 15:38:11.838680 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:38:11.839573 kubelet[2495]: I0213 15:38:11.839135 2495 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 15:38:11.853119 systemd[1]: kubelet.service: Deactivated successfully. Feb 13 15:38:11.853420 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:38:11.853522 systemd[1]: kubelet.service: Consumed 1.306s CPU time, 112.7M memory peak, 0B memory swap peak. Feb 13 15:38:11.863926 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:38:11.999799 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:38:12.005054 (kubelet)[2874]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 13 15:38:12.063513 kubelet[2874]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 15:38:12.063513 kubelet[2874]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 13 15:38:12.063513 kubelet[2874]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Feb 13 15:38:12.063513 kubelet[2874]: I0213 15:38:12.062492 2874 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 13 15:38:12.067890 kubelet[2874]: I0213 15:38:12.067853 2874 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Feb 13 15:38:12.067890 kubelet[2874]: I0213 15:38:12.067882 2874 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 15:38:12.068259 kubelet[2874]: I0213 15:38:12.068219 2874 server.go:927] "Client rotation is on, will bootstrap in background" Feb 13 15:38:12.070102 kubelet[2874]: I0213 15:38:12.070066 2874 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Feb 13 15:38:12.071714 kubelet[2874]: I0213 15:38:12.071480 2874 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 15:38:12.082124 kubelet[2874]: I0213 15:38:12.082088 2874 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 13 15:38:12.083366 kubelet[2874]: I0213 15:38:12.083328 2874 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 15:38:12.084017 kubelet[2874]: I0213 15:38:12.083507 2874 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4152-2-1-1-287b7b51cc","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Feb 13 15:38:12.084532 kubelet[2874]: I0213 15:38:12.084342 2874 topology_manager.go:138] "Creating topology manager with none policy" Feb 
13 15:38:12.084776 kubelet[2874]: I0213 15:38:12.084751 2874 container_manager_linux.go:301] "Creating device plugin manager" Feb 13 15:38:12.085658 kubelet[2874]: I0213 15:38:12.085020 2874 state_mem.go:36] "Initialized new in-memory state store" Feb 13 15:38:12.085658 kubelet[2874]: I0213 15:38:12.085159 2874 kubelet.go:400] "Attempting to sync node with API server" Feb 13 15:38:12.085658 kubelet[2874]: I0213 15:38:12.085173 2874 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 13 15:38:12.085658 kubelet[2874]: I0213 15:38:12.085205 2874 kubelet.go:312] "Adding apiserver pod source" Feb 13 15:38:12.085658 kubelet[2874]: I0213 15:38:12.085229 2874 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 13 15:38:12.091186 kubelet[2874]: I0213 15:38:12.090994 2874 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Feb 13 15:38:12.091186 kubelet[2874]: I0213 15:38:12.091198 2874 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 13 15:38:12.095538 kubelet[2874]: I0213 15:38:12.094626 2874 server.go:1264] "Started kubelet" Feb 13 15:38:12.103662 kubelet[2874]: I0213 15:38:12.101579 2874 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 13 15:38:12.103662 kubelet[2874]: I0213 15:38:12.101915 2874 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 13 15:38:12.103662 kubelet[2874]: I0213 15:38:12.101964 2874 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Feb 13 15:38:12.103662 kubelet[2874]: I0213 15:38:12.103337 2874 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 15:38:12.105743 kubelet[2874]: I0213 15:38:12.105234 2874 server.go:455] "Adding debug handlers to kubelet server" Feb 13 15:38:12.112477 kubelet[2874]: I0213 15:38:12.111411 2874 
volume_manager.go:291] "Starting Kubelet Volume Manager" Feb 13 15:38:12.112689 kubelet[2874]: I0213 15:38:12.112663 2874 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Feb 13 15:38:12.112854 kubelet[2874]: I0213 15:38:12.112839 2874 reconciler.go:26] "Reconciler: start to sync state" Feb 13 15:38:12.120677 kubelet[2874]: I0213 15:38:12.120627 2874 factory.go:221] Registration of the systemd container factory successfully Feb 13 15:38:12.130482 kubelet[2874]: I0213 15:38:12.130422 2874 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Feb 13 15:38:12.159538 kubelet[2874]: I0213 15:38:12.159507 2874 factory.go:221] Registration of the containerd container factory successfully Feb 13 15:38:12.163040 kubelet[2874]: I0213 15:38:12.162988 2874 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 13 15:38:12.165477 kubelet[2874]: I0213 15:38:12.164396 2874 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Feb 13 15:38:12.165477 kubelet[2874]: I0213 15:38:12.164477 2874 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 13 15:38:12.165477 kubelet[2874]: I0213 15:38:12.164501 2874 kubelet.go:2337] "Starting kubelet main sync loop" Feb 13 15:38:12.165477 kubelet[2874]: E0213 15:38:12.164547 2874 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 13 15:38:12.221157 kubelet[2874]: I0213 15:38:12.221116 2874 kubelet_node_status.go:73] "Attempting to register node" node="ci-4152-2-1-1-287b7b51cc" Feb 13 15:38:12.233241 kubelet[2874]: I0213 15:38:12.232021 2874 kubelet_node_status.go:112] "Node was previously registered" node="ci-4152-2-1-1-287b7b51cc" Feb 13 15:38:12.233241 kubelet[2874]: I0213 15:38:12.232136 2874 kubelet_node_status.go:76] "Successfully registered node" node="ci-4152-2-1-1-287b7b51cc" Feb 13 15:38:12.264758 kubelet[2874]: E0213 15:38:12.264598 2874 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Feb 13 15:38:12.265081 kubelet[2874]: I0213 15:38:12.265057 2874 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 13 15:38:12.265081 kubelet[2874]: I0213 15:38:12.265075 2874 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 13 15:38:12.265149 kubelet[2874]: I0213 15:38:12.265099 2874 state_mem.go:36] "Initialized new in-memory state store" Feb 13 15:38:12.265266 kubelet[2874]: I0213 15:38:12.265246 2874 state_mem.go:88] "Updated default CPUSet" cpuSet="" Feb 13 15:38:12.265291 kubelet[2874]: I0213 15:38:12.265263 2874 state_mem.go:96] "Updated CPUSet assignments" assignments={} Feb 13 15:38:12.265291 kubelet[2874]: I0213 15:38:12.265285 2874 policy_none.go:49] "None policy: Start" Feb 13 15:38:12.266097 kubelet[2874]: I0213 15:38:12.266054 2874 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 13 
15:38:12.267851 kubelet[2874]: I0213 15:38:12.266087 2874 state_mem.go:35] "Initializing new in-memory state store" Feb 13 15:38:12.267851 kubelet[2874]: I0213 15:38:12.267753 2874 state_mem.go:75] "Updated machine memory state" Feb 13 15:38:12.274424 kubelet[2874]: I0213 15:38:12.274393 2874 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 15:38:12.275088 kubelet[2874]: I0213 15:38:12.274930 2874 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 13 15:38:12.275520 kubelet[2874]: I0213 15:38:12.275500 2874 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 13 15:38:12.465575 kubelet[2874]: I0213 15:38:12.465171 2874 topology_manager.go:215] "Topology Admit Handler" podUID="2b02b9626d3d1ee3180a539c076a773d" podNamespace="kube-system" podName="kube-apiserver-ci-4152-2-1-1-287b7b51cc" Feb 13 15:38:12.465575 kubelet[2874]: I0213 15:38:12.465356 2874 topology_manager.go:215] "Topology Admit Handler" podUID="a8f96248904a51aeb3af1271a672bda7" podNamespace="kube-system" podName="kube-controller-manager-ci-4152-2-1-1-287b7b51cc" Feb 13 15:38:12.466795 kubelet[2874]: I0213 15:38:12.465424 2874 topology_manager.go:215] "Topology Admit Handler" podUID="2b172336914cdc5ffce6308f5fbc5890" podNamespace="kube-system" podName="kube-scheduler-ci-4152-2-1-1-287b7b51cc" Feb 13 15:38:12.477868 kubelet[2874]: E0213 15:38:12.477817 2874 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4152-2-1-1-287b7b51cc\" already exists" pod="kube-system/kube-apiserver-ci-4152-2-1-1-287b7b51cc" Feb 13 15:38:12.480038 sudo[2906]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Feb 13 15:38:12.480342 sudo[2906]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Feb 13 15:38:12.515333 kubelet[2874]: I0213 15:38:12.515022 2874 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a8f96248904a51aeb3af1271a672bda7-ca-certs\") pod \"kube-controller-manager-ci-4152-2-1-1-287b7b51cc\" (UID: \"a8f96248904a51aeb3af1271a672bda7\") " pod="kube-system/kube-controller-manager-ci-4152-2-1-1-287b7b51cc" Feb 13 15:38:12.515333 kubelet[2874]: I0213 15:38:12.515069 2874 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/a8f96248904a51aeb3af1271a672bda7-flexvolume-dir\") pod \"kube-controller-manager-ci-4152-2-1-1-287b7b51cc\" (UID: \"a8f96248904a51aeb3af1271a672bda7\") " pod="kube-system/kube-controller-manager-ci-4152-2-1-1-287b7b51cc" Feb 13 15:38:12.515333 kubelet[2874]: I0213 15:38:12.515091 2874 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a8f96248904a51aeb3af1271a672bda7-k8s-certs\") pod \"kube-controller-manager-ci-4152-2-1-1-287b7b51cc\" (UID: \"a8f96248904a51aeb3af1271a672bda7\") " pod="kube-system/kube-controller-manager-ci-4152-2-1-1-287b7b51cc" Feb 13 15:38:12.515333 kubelet[2874]: I0213 15:38:12.515109 2874 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a8f96248904a51aeb3af1271a672bda7-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4152-2-1-1-287b7b51cc\" (UID: \"a8f96248904a51aeb3af1271a672bda7\") " pod="kube-system/kube-controller-manager-ci-4152-2-1-1-287b7b51cc" Feb 13 15:38:12.515333 kubelet[2874]: I0213 15:38:12.515139 2874 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2b172336914cdc5ffce6308f5fbc5890-kubeconfig\") pod \"kube-scheduler-ci-4152-2-1-1-287b7b51cc\" (UID: 
\"2b172336914cdc5ffce6308f5fbc5890\") " pod="kube-system/kube-scheduler-ci-4152-2-1-1-287b7b51cc" Feb 13 15:38:12.515669 kubelet[2874]: I0213 15:38:12.515155 2874 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2b02b9626d3d1ee3180a539c076a773d-ca-certs\") pod \"kube-apiserver-ci-4152-2-1-1-287b7b51cc\" (UID: \"2b02b9626d3d1ee3180a539c076a773d\") " pod="kube-system/kube-apiserver-ci-4152-2-1-1-287b7b51cc" Feb 13 15:38:12.515669 kubelet[2874]: I0213 15:38:12.515169 2874 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2b02b9626d3d1ee3180a539c076a773d-k8s-certs\") pod \"kube-apiserver-ci-4152-2-1-1-287b7b51cc\" (UID: \"2b02b9626d3d1ee3180a539c076a773d\") " pod="kube-system/kube-apiserver-ci-4152-2-1-1-287b7b51cc" Feb 13 15:38:12.515669 kubelet[2874]: I0213 15:38:12.515188 2874 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2b02b9626d3d1ee3180a539c076a773d-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4152-2-1-1-287b7b51cc\" (UID: \"2b02b9626d3d1ee3180a539c076a773d\") " pod="kube-system/kube-apiserver-ci-4152-2-1-1-287b7b51cc" Feb 13 15:38:12.515669 kubelet[2874]: I0213 15:38:12.515205 2874 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a8f96248904a51aeb3af1271a672bda7-kubeconfig\") pod \"kube-controller-manager-ci-4152-2-1-1-287b7b51cc\" (UID: \"a8f96248904a51aeb3af1271a672bda7\") " pod="kube-system/kube-controller-manager-ci-4152-2-1-1-287b7b51cc" Feb 13 15:38:12.968168 sudo[2906]: pam_unix(sudo:session): session closed for user root Feb 13 15:38:13.101270 kubelet[2874]: I0213 15:38:13.099783 2874 apiserver.go:52] "Watching apiserver" Feb 13 
15:38:13.113157 kubelet[2874]: I0213 15:38:13.113116 2874 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Feb 13 15:38:13.222565 kubelet[2874]: E0213 15:38:13.222357 2874 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4152-2-1-1-287b7b51cc\" already exists" pod="kube-system/kube-apiserver-ci-4152-2-1-1-287b7b51cc" Feb 13 15:38:13.225313 kubelet[2874]: E0213 15:38:13.225186 2874 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-4152-2-1-1-287b7b51cc\" already exists" pod="kube-system/kube-controller-manager-ci-4152-2-1-1-287b7b51cc" Feb 13 15:38:13.244334 kubelet[2874]: I0213 15:38:13.244277 2874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4152-2-1-1-287b7b51cc" podStartSLOduration=2.244240286 podStartE2EDuration="2.244240286s" podCreationTimestamp="2025-02-13 15:38:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:38:13.240615693 +0000 UTC m=+1.231558997" watchObservedRunningTime="2025-02-13 15:38:13.244240286 +0000 UTC m=+1.235183630" Feb 13 15:38:13.275519 kubelet[2874]: I0213 15:38:13.274923 2874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4152-2-1-1-287b7b51cc" podStartSLOduration=1.274893361 podStartE2EDuration="1.274893361s" podCreationTimestamp="2025-02-13 15:38:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:38:13.260416631 +0000 UTC m=+1.251360015" watchObservedRunningTime="2025-02-13 15:38:13.274893361 +0000 UTC m=+1.265836705" Feb 13 15:38:13.276053 kubelet[2874]: I0213 15:38:13.275355 2874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="kube-system/kube-scheduler-ci-4152-2-1-1-287b7b51cc" podStartSLOduration=1.275345285 podStartE2EDuration="1.275345285s" podCreationTimestamp="2025-02-13 15:38:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:38:13.275279804 +0000 UTC m=+1.266223148" watchObservedRunningTime="2025-02-13 15:38:13.275345285 +0000 UTC m=+1.266288669" Feb 13 15:38:14.912003 sudo[1897]: pam_unix(sudo:session): session closed for user root Feb 13 15:38:15.070620 sshd[1896]: Connection closed by 139.178.89.65 port 46664 Feb 13 15:38:15.072265 sshd-session[1894]: pam_unix(sshd:session): session closed for user core Feb 13 15:38:15.077734 systemd[1]: sshd@11-78.46.147.231:22-139.178.89.65:46664.service: Deactivated successfully. Feb 13 15:38:15.081093 systemd[1]: session-7.scope: Deactivated successfully. Feb 13 15:38:15.081508 systemd[1]: session-7.scope: Consumed 8.059s CPU time, 189.4M memory peak, 0B memory swap peak. Feb 13 15:38:15.084280 systemd-logind[1455]: Session 7 logged out. Waiting for processes to exit. Feb 13 15:38:15.085828 systemd-logind[1455]: Removed session 7. Feb 13 15:38:20.086763 systemd[1]: sshd@0-78.46.147.231:22-183.63.103.84:34957.service: Deactivated successfully. Feb 13 15:38:25.337086 kubelet[2874]: I0213 15:38:25.337018 2874 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Feb 13 15:38:25.338543 kubelet[2874]: I0213 15:38:25.338359 2874 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Feb 13 15:38:25.338644 containerd[1479]: time="2025-02-13T15:38:25.337858626Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Feb 13 15:38:26.071753 kubelet[2874]: I0213 15:38:26.071687 2874 topology_manager.go:215] "Topology Admit Handler" podUID="db49ed7c-5dc0-454f-b9e5-299015b63dd0" podNamespace="kube-system" podName="kube-proxy-tqgmv" Feb 13 15:38:26.084652 systemd[1]: Created slice kubepods-besteffort-poddb49ed7c_5dc0_454f_b9e5_299015b63dd0.slice - libcontainer container kubepods-besteffort-poddb49ed7c_5dc0_454f_b9e5_299015b63dd0.slice. Feb 13 15:38:26.109547 kubelet[2874]: I0213 15:38:26.109333 2874 topology_manager.go:215] "Topology Admit Handler" podUID="d5811e8e-9422-48f1-9fb5-b8967311d069" podNamespace="kube-system" podName="cilium-mvkn4" Feb 13 15:38:26.122652 systemd[1]: Created slice kubepods-burstable-podd5811e8e_9422_48f1_9fb5_b8967311d069.slice - libcontainer container kubepods-burstable-podd5811e8e_9422_48f1_9fb5_b8967311d069.slice. Feb 13 15:38:26.202899 kubelet[2874]: I0213 15:38:26.202668 2874 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/db49ed7c-5dc0-454f-b9e5-299015b63dd0-kube-proxy\") pod \"kube-proxy-tqgmv\" (UID: \"db49ed7c-5dc0-454f-b9e5-299015b63dd0\") " pod="kube-system/kube-proxy-tqgmv" Feb 13 15:38:26.202899 kubelet[2874]: I0213 15:38:26.202723 2874 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/db49ed7c-5dc0-454f-b9e5-299015b63dd0-lib-modules\") pod \"kube-proxy-tqgmv\" (UID: \"db49ed7c-5dc0-454f-b9e5-299015b63dd0\") " pod="kube-system/kube-proxy-tqgmv" Feb 13 15:38:26.202899 kubelet[2874]: I0213 15:38:26.202767 2874 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2kwsx\" (UniqueName: \"kubernetes.io/projected/db49ed7c-5dc0-454f-b9e5-299015b63dd0-kube-api-access-2kwsx\") pod \"kube-proxy-tqgmv\" (UID: \"db49ed7c-5dc0-454f-b9e5-299015b63dd0\") " pod="kube-system/kube-proxy-tqgmv" Feb 13 
15:38:26.202899 kubelet[2874]: I0213 15:38:26.202798 2874 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/db49ed7c-5dc0-454f-b9e5-299015b63dd0-xtables-lock\") pod \"kube-proxy-tqgmv\" (UID: \"db49ed7c-5dc0-454f-b9e5-299015b63dd0\") " pod="kube-system/kube-proxy-tqgmv" Feb 13 15:38:26.304296 kubelet[2874]: I0213 15:38:26.303900 2874 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/d5811e8e-9422-48f1-9fb5-b8967311d069-host-proc-sys-net\") pod \"cilium-mvkn4\" (UID: \"d5811e8e-9422-48f1-9fb5-b8967311d069\") " pod="kube-system/cilium-mvkn4" Feb 13 15:38:26.305597 kubelet[2874]: I0213 15:38:26.304779 2874 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/d5811e8e-9422-48f1-9fb5-b8967311d069-hubble-tls\") pod \"cilium-mvkn4\" (UID: \"d5811e8e-9422-48f1-9fb5-b8967311d069\") " pod="kube-system/cilium-mvkn4" Feb 13 15:38:26.305597 kubelet[2874]: I0213 15:38:26.305423 2874 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/d5811e8e-9422-48f1-9fb5-b8967311d069-clustermesh-secrets\") pod \"cilium-mvkn4\" (UID: \"d5811e8e-9422-48f1-9fb5-b8967311d069\") " pod="kube-system/cilium-mvkn4" Feb 13 15:38:26.305597 kubelet[2874]: I0213 15:38:26.305550 2874 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/d5811e8e-9422-48f1-9fb5-b8967311d069-hostproc\") pod \"cilium-mvkn4\" (UID: \"d5811e8e-9422-48f1-9fb5-b8967311d069\") " pod="kube-system/cilium-mvkn4" Feb 13 15:38:26.307448 kubelet[2874]: I0213 15:38:26.306247 2874 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d5811e8e-9422-48f1-9fb5-b8967311d069-etc-cni-netd\") pod \"cilium-mvkn4\" (UID: \"d5811e8e-9422-48f1-9fb5-b8967311d069\") " pod="kube-system/cilium-mvkn4" Feb 13 15:38:26.307448 kubelet[2874]: I0213 15:38:26.306352 2874 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d5811e8e-9422-48f1-9fb5-b8967311d069-xtables-lock\") pod \"cilium-mvkn4\" (UID: \"d5811e8e-9422-48f1-9fb5-b8967311d069\") " pod="kube-system/cilium-mvkn4" Feb 13 15:38:26.307448 kubelet[2874]: I0213 15:38:26.306429 2874 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jrbxc\" (UniqueName: \"kubernetes.io/projected/d5811e8e-9422-48f1-9fb5-b8967311d069-kube-api-access-jrbxc\") pod \"cilium-mvkn4\" (UID: \"d5811e8e-9422-48f1-9fb5-b8967311d069\") " pod="kube-system/cilium-mvkn4" Feb 13 15:38:26.307448 kubelet[2874]: I0213 15:38:26.306577 2874 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/d5811e8e-9422-48f1-9fb5-b8967311d069-host-proc-sys-kernel\") pod \"cilium-mvkn4\" (UID: \"d5811e8e-9422-48f1-9fb5-b8967311d069\") " pod="kube-system/cilium-mvkn4" Feb 13 15:38:26.307448 kubelet[2874]: I0213 15:38:26.306659 2874 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/d5811e8e-9422-48f1-9fb5-b8967311d069-cilium-cgroup\") pod \"cilium-mvkn4\" (UID: \"d5811e8e-9422-48f1-9fb5-b8967311d069\") " pod="kube-system/cilium-mvkn4" Feb 13 15:38:26.307448 kubelet[2874]: I0213 15:38:26.306703 2874 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: 
\"kubernetes.io/host-path/d5811e8e-9422-48f1-9fb5-b8967311d069-bpf-maps\") pod \"cilium-mvkn4\" (UID: \"d5811e8e-9422-48f1-9fb5-b8967311d069\") " pod="kube-system/cilium-mvkn4" Feb 13 15:38:26.308385 kubelet[2874]: I0213 15:38:26.306784 2874 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d5811e8e-9422-48f1-9fb5-b8967311d069-lib-modules\") pod \"cilium-mvkn4\" (UID: \"d5811e8e-9422-48f1-9fb5-b8967311d069\") " pod="kube-system/cilium-mvkn4" Feb 13 15:38:26.308385 kubelet[2874]: I0213 15:38:26.306838 2874 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d5811e8e-9422-48f1-9fb5-b8967311d069-cilium-config-path\") pod \"cilium-mvkn4\" (UID: \"d5811e8e-9422-48f1-9fb5-b8967311d069\") " pod="kube-system/cilium-mvkn4" Feb 13 15:38:26.308385 kubelet[2874]: I0213 15:38:26.306885 2874 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/d5811e8e-9422-48f1-9fb5-b8967311d069-cilium-run\") pod \"cilium-mvkn4\" (UID: \"d5811e8e-9422-48f1-9fb5-b8967311d069\") " pod="kube-system/cilium-mvkn4" Feb 13 15:38:26.308385 kubelet[2874]: I0213 15:38:26.306929 2874 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/d5811e8e-9422-48f1-9fb5-b8967311d069-cni-path\") pod \"cilium-mvkn4\" (UID: \"d5811e8e-9422-48f1-9fb5-b8967311d069\") " pod="kube-system/cilium-mvkn4" Feb 13 15:38:26.404300 containerd[1479]: time="2025-02-13T15:38:26.403875193Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-tqgmv,Uid:db49ed7c-5dc0-454f-b9e5-299015b63dd0,Namespace:kube-system,Attempt:0,}" Feb 13 15:38:26.437653 kubelet[2874]: I0213 15:38:26.436243 2874 topology_manager.go:215] "Topology Admit 
Handler" podUID="5c424093-db8f-4e53-8928-ed9369b8ba7f" podNamespace="kube-system" podName="cilium-operator-599987898-7pq5m" Feb 13 15:38:26.454320 systemd[1]: Created slice kubepods-besteffort-pod5c424093_db8f_4e53_8928_ed9369b8ba7f.slice - libcontainer container kubepods-besteffort-pod5c424093_db8f_4e53_8928_ed9369b8ba7f.slice. Feb 13 15:38:26.467653 containerd[1479]: time="2025-02-13T15:38:26.467506095Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:38:26.467653 containerd[1479]: time="2025-02-13T15:38:26.467591135Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:38:26.467653 containerd[1479]: time="2025-02-13T15:38:26.467601856Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:38:26.468367 containerd[1479]: time="2025-02-13T15:38:26.468321460Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:38:26.507456 systemd[1]: Started cri-containerd-7c5e09c90f5563b243659188adcc2761c148470c9810715a742f55c7e538400c.scope - libcontainer container 7c5e09c90f5563b243659188adcc2761c148470c9810715a742f55c7e538400c. 
Feb 13 15:38:26.547764 containerd[1479]: time="2025-02-13T15:38:26.547657546Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-tqgmv,Uid:db49ed7c-5dc0-454f-b9e5-299015b63dd0,Namespace:kube-system,Attempt:0,} returns sandbox id \"7c5e09c90f5563b243659188adcc2761c148470c9810715a742f55c7e538400c\"" Feb 13 15:38:26.553467 containerd[1479]: time="2025-02-13T15:38:26.553292544Z" level=info msg="CreateContainer within sandbox \"7c5e09c90f5563b243659188adcc2761c148470c9810715a742f55c7e538400c\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Feb 13 15:38:26.572906 containerd[1479]: time="2025-02-13T15:38:26.572761553Z" level=info msg="CreateContainer within sandbox \"7c5e09c90f5563b243659188adcc2761c148470c9810715a742f55c7e538400c\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"60baedf9d266c21b914511e274cadbd9c3921d5410fe34e69ce020e67b50c87f\"" Feb 13 15:38:26.576481 containerd[1479]: time="2025-02-13T15:38:26.574816926Z" level=info msg="StartContainer for \"60baedf9d266c21b914511e274cadbd9c3921d5410fe34e69ce020e67b50c87f\"" Feb 13 15:38:26.604023 systemd[1]: Started cri-containerd-60baedf9d266c21b914511e274cadbd9c3921d5410fe34e69ce020e67b50c87f.scope - libcontainer container 60baedf9d266c21b914511e274cadbd9c3921d5410fe34e69ce020e67b50c87f. 
Feb 13 15:38:26.610645 kubelet[2874]: I0213 15:38:26.610593 2874 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-98jbh\" (UniqueName: \"kubernetes.io/projected/5c424093-db8f-4e53-8928-ed9369b8ba7f-kube-api-access-98jbh\") pod \"cilium-operator-599987898-7pq5m\" (UID: \"5c424093-db8f-4e53-8928-ed9369b8ba7f\") " pod="kube-system/cilium-operator-599987898-7pq5m" Feb 13 15:38:26.610645 kubelet[2874]: I0213 15:38:26.610647 2874 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5c424093-db8f-4e53-8928-ed9369b8ba7f-cilium-config-path\") pod \"cilium-operator-599987898-7pq5m\" (UID: \"5c424093-db8f-4e53-8928-ed9369b8ba7f\") " pod="kube-system/cilium-operator-599987898-7pq5m" Feb 13 15:38:26.643054 containerd[1479]: time="2025-02-13T15:38:26.643006339Z" level=info msg="StartContainer for \"60baedf9d266c21b914511e274cadbd9c3921d5410fe34e69ce020e67b50c87f\" returns successfully" Feb 13 15:38:26.729986 containerd[1479]: time="2025-02-13T15:38:26.729929515Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-mvkn4,Uid:d5811e8e-9422-48f1-9fb5-b8967311d069,Namespace:kube-system,Attempt:0,}" Feb 13 15:38:26.762524 containerd[1479]: time="2025-02-13T15:38:26.760005074Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-7pq5m,Uid:5c424093-db8f-4e53-8928-ed9369b8ba7f,Namespace:kube-system,Attempt:0,}" Feb 13 15:38:26.772886 containerd[1479]: time="2025-02-13T15:38:26.770123422Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:38:26.772886 containerd[1479]: time="2025-02-13T15:38:26.770926307Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:38:26.772886 containerd[1479]: time="2025-02-13T15:38:26.770942427Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:38:26.772886 containerd[1479]: time="2025-02-13T15:38:26.771062748Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:38:26.798697 systemd[1]: Started cri-containerd-71d8afcc944e4b6ad39d4c544b860689270cd436dc1d3fb6a253fe9c4dee6207.scope - libcontainer container 71d8afcc944e4b6ad39d4c544b860689270cd436dc1d3fb6a253fe9c4dee6207. Feb 13 15:38:26.820428 containerd[1479]: time="2025-02-13T15:38:26.820258154Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:38:26.821299 containerd[1479]: time="2025-02-13T15:38:26.820466275Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:38:26.821632 containerd[1479]: time="2025-02-13T15:38:26.821474362Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:38:26.821821 containerd[1479]: time="2025-02-13T15:38:26.821739124Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:38:26.850398 systemd[1]: Started cri-containerd-070cc58a5be8c29c0cc688ff06da315696fa97f0574891472dd048d7fa0e1c2f.scope - libcontainer container 070cc58a5be8c29c0cc688ff06da315696fa97f0574891472dd048d7fa0e1c2f. 
Feb 13 15:38:26.854313 containerd[1479]: time="2025-02-13T15:38:26.853879297Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-mvkn4,Uid:d5811e8e-9422-48f1-9fb5-b8967311d069,Namespace:kube-system,Attempt:0,} returns sandbox id \"71d8afcc944e4b6ad39d4c544b860689270cd436dc1d3fb6a253fe9c4dee6207\"" Feb 13 15:38:26.860999 containerd[1479]: time="2025-02-13T15:38:26.859555375Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Feb 13 15:38:26.890194 containerd[1479]: time="2025-02-13T15:38:26.890078897Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-7pq5m,Uid:5c424093-db8f-4e53-8928-ed9369b8ba7f,Namespace:kube-system,Attempt:0,} returns sandbox id \"070cc58a5be8c29c0cc688ff06da315696fa97f0574891472dd048d7fa0e1c2f\"" Feb 13 15:38:33.823004 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2043027515.mount: Deactivated successfully. Feb 13 15:38:35.338007 containerd[1479]: time="2025-02-13T15:38:35.337901568Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:38:35.339573 containerd[1479]: time="2025-02-13T15:38:35.339400096Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710" Feb 13 15:38:35.340552 containerd[1479]: time="2025-02-13T15:38:35.340488462Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:38:35.344792 containerd[1479]: time="2025-02-13T15:38:35.344252882Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id 
\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 8.483140697s" Feb 13 15:38:35.344792 containerd[1479]: time="2025-02-13T15:38:35.344303723Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Feb 13 15:38:35.348598 containerd[1479]: time="2025-02-13T15:38:35.346697416Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Feb 13 15:38:35.350204 containerd[1479]: time="2025-02-13T15:38:35.350041914Z" level=info msg="CreateContainer within sandbox \"71d8afcc944e4b6ad39d4c544b860689270cd436dc1d3fb6a253fe9c4dee6207\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 13 15:38:35.367222 containerd[1479]: time="2025-02-13T15:38:35.367161928Z" level=info msg="CreateContainer within sandbox \"71d8afcc944e4b6ad39d4c544b860689270cd436dc1d3fb6a253fe9c4dee6207\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"934f1008aa5fdec67dcdfa633631ae54a6c6bba839bf497e2ce75db95fbda8a4\"" Feb 13 15:38:35.370431 containerd[1479]: time="2025-02-13T15:38:35.370347026Z" level=info msg="StartContainer for \"934f1008aa5fdec67dcdfa633631ae54a6c6bba839bf497e2ce75db95fbda8a4\"" Feb 13 15:38:35.431679 systemd[1]: Started cri-containerd-934f1008aa5fdec67dcdfa633631ae54a6c6bba839bf497e2ce75db95fbda8a4.scope - libcontainer container 934f1008aa5fdec67dcdfa633631ae54a6c6bba839bf497e2ce75db95fbda8a4. Feb 13 15:38:35.434903 systemd[1]: Started sshd@14-78.46.147.231:22-183.63.103.84:58206.service - OpenSSH per-connection server daemon (183.63.103.84:58206). 
Feb 13 15:38:35.483786 containerd[1479]: time="2025-02-13T15:38:35.483724648Z" level=info msg="StartContainer for \"934f1008aa5fdec67dcdfa633631ae54a6c6bba839bf497e2ce75db95fbda8a4\" returns successfully" Feb 13 15:38:35.498664 systemd[1]: cri-containerd-934f1008aa5fdec67dcdfa633631ae54a6c6bba839bf497e2ce75db95fbda8a4.scope: Deactivated successfully. Feb 13 15:38:35.688403 containerd[1479]: time="2025-02-13T15:38:35.688165251Z" level=info msg="shim disconnected" id=934f1008aa5fdec67dcdfa633631ae54a6c6bba839bf497e2ce75db95fbda8a4 namespace=k8s.io Feb 13 15:38:35.688403 containerd[1479]: time="2025-02-13T15:38:35.688232171Z" level=warning msg="cleaning up after shim disconnected" id=934f1008aa5fdec67dcdfa633631ae54a6c6bba839bf497e2ce75db95fbda8a4 namespace=k8s.io Feb 13 15:38:35.688403 containerd[1479]: time="2025-02-13T15:38:35.688243651Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 15:38:36.275888 containerd[1479]: time="2025-02-13T15:38:36.275242965Z" level=info msg="CreateContainer within sandbox \"71d8afcc944e4b6ad39d4c544b860689270cd436dc1d3fb6a253fe9c4dee6207\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Feb 13 15:38:36.299208 containerd[1479]: time="2025-02-13T15:38:36.299126653Z" level=info msg="CreateContainer within sandbox \"71d8afcc944e4b6ad39d4c544b860689270cd436dc1d3fb6a253fe9c4dee6207\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"ef0f5a8b355bf13a9b470e5868597b62509d45b58fea027865ed68679d1eea51\"" Feb 13 15:38:36.303192 containerd[1479]: time="2025-02-13T15:38:36.300654661Z" level=info msg="StartContainer for \"ef0f5a8b355bf13a9b470e5868597b62509d45b58fea027865ed68679d1eea51\"" Feb 13 15:38:36.306890 kubelet[2874]: I0213 15:38:36.306808 2874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-tqgmv" podStartSLOduration=10.306791054 podStartE2EDuration="10.306791054s" podCreationTimestamp="2025-02-13 15:38:26 +0000 
UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:38:27.26112164 +0000 UTC m=+15.252064984" watchObservedRunningTime="2025-02-13 15:38:36.306791054 +0000 UTC m=+24.297734398" Feb 13 15:38:36.332677 systemd[1]: Started cri-containerd-ef0f5a8b355bf13a9b470e5868597b62509d45b58fea027865ed68679d1eea51.scope - libcontainer container ef0f5a8b355bf13a9b470e5868597b62509d45b58fea027865ed68679d1eea51. Feb 13 15:38:36.362098 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-934f1008aa5fdec67dcdfa633631ae54a6c6bba839bf497e2ce75db95fbda8a4-rootfs.mount: Deactivated successfully. Feb 13 15:38:36.372759 containerd[1479]: time="2025-02-13T15:38:36.372527728Z" level=info msg="StartContainer for \"ef0f5a8b355bf13a9b470e5868597b62509d45b58fea027865ed68679d1eea51\" returns successfully" Feb 13 15:38:36.387139 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 13 15:38:36.387362 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Feb 13 15:38:36.387419 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Feb 13 15:38:36.396752 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Feb 13 15:38:36.397017 systemd[1]: cri-containerd-ef0f5a8b355bf13a9b470e5868597b62509d45b58fea027865ed68679d1eea51.scope: Deactivated successfully. Feb 13 15:38:36.422245 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Feb 13 15:38:36.427041 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ef0f5a8b355bf13a9b470e5868597b62509d45b58fea027865ed68679d1eea51-rootfs.mount: Deactivated successfully. 
Feb 13 15:38:36.437109 containerd[1479]: time="2025-02-13T15:38:36.436862875Z" level=info msg="shim disconnected" id=ef0f5a8b355bf13a9b470e5868597b62509d45b58fea027865ed68679d1eea51 namespace=k8s.io Feb 13 15:38:36.437109 containerd[1479]: time="2025-02-13T15:38:36.436952195Z" level=warning msg="cleaning up after shim disconnected" id=ef0f5a8b355bf13a9b470e5868597b62509d45b58fea027865ed68679d1eea51 namespace=k8s.io Feb 13 15:38:36.437109 containerd[1479]: time="2025-02-13T15:38:36.436990395Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 15:38:37.279313 containerd[1479]: time="2025-02-13T15:38:37.279252220Z" level=info msg="CreateContainer within sandbox \"71d8afcc944e4b6ad39d4c544b860689270cd436dc1d3fb6a253fe9c4dee6207\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Feb 13 15:38:37.311640 containerd[1479]: time="2025-02-13T15:38:37.311589671Z" level=info msg="CreateContainer within sandbox \"71d8afcc944e4b6ad39d4c544b860689270cd436dc1d3fb6a253fe9c4dee6207\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"f6ec2dab3571ccf4dcbfbf199398050b24ceed9c5f091f7d4fa2d808edf8f035\"" Feb 13 15:38:37.312833 containerd[1479]: time="2025-02-13T15:38:37.312778077Z" level=info msg="StartContainer for \"f6ec2dab3571ccf4dcbfbf199398050b24ceed9c5f091f7d4fa2d808edf8f035\"" Feb 13 15:38:37.345700 systemd[1]: Started cri-containerd-f6ec2dab3571ccf4dcbfbf199398050b24ceed9c5f091f7d4fa2d808edf8f035.scope - libcontainer container f6ec2dab3571ccf4dcbfbf199398050b24ceed9c5f091f7d4fa2d808edf8f035. Feb 13 15:38:37.393228 containerd[1479]: time="2025-02-13T15:38:37.392980020Z" level=info msg="StartContainer for \"f6ec2dab3571ccf4dcbfbf199398050b24ceed9c5f091f7d4fa2d808edf8f035\" returns successfully" Feb 13 15:38:37.397030 systemd[1]: cri-containerd-f6ec2dab3571ccf4dcbfbf199398050b24ceed9c5f091f7d4fa2d808edf8f035.scope: Deactivated successfully. 
Feb 13 15:38:37.427251 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f6ec2dab3571ccf4dcbfbf199398050b24ceed9c5f091f7d4fa2d808edf8f035-rootfs.mount: Deactivated successfully. Feb 13 15:38:37.432209 containerd[1479]: time="2025-02-13T15:38:37.432139547Z" level=info msg="shim disconnected" id=f6ec2dab3571ccf4dcbfbf199398050b24ceed9c5f091f7d4fa2d808edf8f035 namespace=k8s.io Feb 13 15:38:37.432209 containerd[1479]: time="2025-02-13T15:38:37.432202547Z" level=warning msg="cleaning up after shim disconnected" id=f6ec2dab3571ccf4dcbfbf199398050b24ceed9c5f091f7d4fa2d808edf8f035 namespace=k8s.io Feb 13 15:38:37.432209 containerd[1479]: time="2025-02-13T15:38:37.432211227Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 15:38:38.290542 containerd[1479]: time="2025-02-13T15:38:38.290077766Z" level=info msg="CreateContainer within sandbox \"71d8afcc944e4b6ad39d4c544b860689270cd436dc1d3fb6a253fe9c4dee6207\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Feb 13 15:38:38.310525 containerd[1479]: time="2025-02-13T15:38:38.310407872Z" level=info msg="CreateContainer within sandbox \"71d8afcc944e4b6ad39d4c544b860689270cd436dc1d3fb6a253fe9c4dee6207\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"5e663235b90f6897c94af72c7613a163784a058cba0d3ff2649c7ac512c8c03c\"" Feb 13 15:38:38.311921 containerd[1479]: time="2025-02-13T15:38:38.311630198Z" level=info msg="StartContainer for \"5e663235b90f6897c94af72c7613a163784a058cba0d3ff2649c7ac512c8c03c\"" Feb 13 15:38:38.341661 systemd[1]: Started cri-containerd-5e663235b90f6897c94af72c7613a163784a058cba0d3ff2649c7ac512c8c03c.scope - libcontainer container 5e663235b90f6897c94af72c7613a163784a058cba0d3ff2649c7ac512c8c03c. Feb 13 15:38:38.371128 systemd[1]: cri-containerd-5e663235b90f6897c94af72c7613a163784a058cba0d3ff2649c7ac512c8c03c.scope: Deactivated successfully. 
Feb 13 15:38:38.376976 containerd[1479]: time="2025-02-13T15:38:38.376887416Z" level=info msg="StartContainer for \"5e663235b90f6897c94af72c7613a163784a058cba0d3ff2649c7ac512c8c03c\" returns successfully" Feb 13 15:38:38.398420 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5e663235b90f6897c94af72c7613a163784a058cba0d3ff2649c7ac512c8c03c-rootfs.mount: Deactivated successfully. Feb 13 15:38:38.406502 containerd[1479]: time="2025-02-13T15:38:38.406145647Z" level=info msg="shim disconnected" id=5e663235b90f6897c94af72c7613a163784a058cba0d3ff2649c7ac512c8c03c namespace=k8s.io Feb 13 15:38:38.406502 containerd[1479]: time="2025-02-13T15:38:38.406270528Z" level=warning msg="cleaning up after shim disconnected" id=5e663235b90f6897c94af72c7613a163784a058cba0d3ff2649c7ac512c8c03c namespace=k8s.io Feb 13 15:38:38.406502 containerd[1479]: time="2025-02-13T15:38:38.406280968Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 15:38:39.298570 containerd[1479]: time="2025-02-13T15:38:39.298384637Z" level=info msg="CreateContainer within sandbox \"71d8afcc944e4b6ad39d4c544b860689270cd436dc1d3fb6a253fe9c4dee6207\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Feb 13 15:38:39.326908 containerd[1479]: time="2025-02-13T15:38:39.326843902Z" level=info msg="CreateContainer within sandbox \"71d8afcc944e4b6ad39d4c544b860689270cd436dc1d3fb6a253fe9c4dee6207\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"ef079e16be47a6b42ea1e6b221e89f766489148b61261a5b698138df714fe23c\"" Feb 13 15:38:39.331130 containerd[1479]: time="2025-02-13T15:38:39.328203909Z" level=info msg="StartContainer for \"ef079e16be47a6b42ea1e6b221e89f766489148b61261a5b698138df714fe23c\"" Feb 13 15:38:39.381303 systemd[1]: run-containerd-runc-k8s.io-ef079e16be47a6b42ea1e6b221e89f766489148b61261a5b698138df714fe23c-runc.Hq6zqy.mount: Deactivated successfully. 
Feb 13 15:38:39.398673 systemd[1]: Started cri-containerd-ef079e16be47a6b42ea1e6b221e89f766489148b61261a5b698138df714fe23c.scope - libcontainer container ef079e16be47a6b42ea1e6b221e89f766489148b61261a5b698138df714fe23c. Feb 13 15:38:39.436400 containerd[1479]: time="2025-02-13T15:38:39.436335898Z" level=info msg="StartContainer for \"ef079e16be47a6b42ea1e6b221e89f766489148b61261a5b698138df714fe23c\" returns successfully" Feb 13 15:38:39.459244 systemd[1]: run-containerd-runc-k8s.io-ef079e16be47a6b42ea1e6b221e89f766489148b61261a5b698138df714fe23c-runc.W5Un2j.mount: Deactivated successfully. Feb 13 15:38:39.600703 kubelet[2874]: I0213 15:38:39.600402 2874 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Feb 13 15:38:39.634541 kubelet[2874]: I0213 15:38:39.633827 2874 topology_manager.go:215] "Topology Admit Handler" podUID="46d8298c-33c1-4db7-866e-801d44a71f1d" podNamespace="kube-system" podName="coredns-7db6d8ff4d-jmx9r" Feb 13 15:38:39.635427 kubelet[2874]: I0213 15:38:39.635363 2874 topology_manager.go:215] "Topology Admit Handler" podUID="879fd5f1-a4a7-484b-9b3f-a2a99115c50f" podNamespace="kube-system" podName="coredns-7db6d8ff4d-twtjp" Feb 13 15:38:39.647991 systemd[1]: Created slice kubepods-burstable-pod46d8298c_33c1_4db7_866e_801d44a71f1d.slice - libcontainer container kubepods-burstable-pod46d8298c_33c1_4db7_866e_801d44a71f1d.slice. Feb 13 15:38:39.662622 systemd[1]: Created slice kubepods-burstable-pod879fd5f1_a4a7_484b_9b3f_a2a99115c50f.slice - libcontainer container kubepods-burstable-pod879fd5f1_a4a7_484b_9b3f_a2a99115c50f.slice. 
Feb 13 15:38:39.809272 kubelet[2874]: I0213 15:38:39.809022 2874 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/879fd5f1-a4a7-484b-9b3f-a2a99115c50f-config-volume\") pod \"coredns-7db6d8ff4d-twtjp\" (UID: \"879fd5f1-a4a7-484b-9b3f-a2a99115c50f\") " pod="kube-system/coredns-7db6d8ff4d-twtjp" Feb 13 15:38:39.809272 kubelet[2874]: I0213 15:38:39.809076 2874 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4cgzw\" (UniqueName: \"kubernetes.io/projected/46d8298c-33c1-4db7-866e-801d44a71f1d-kube-api-access-4cgzw\") pod \"coredns-7db6d8ff4d-jmx9r\" (UID: \"46d8298c-33c1-4db7-866e-801d44a71f1d\") " pod="kube-system/coredns-7db6d8ff4d-jmx9r" Feb 13 15:38:39.809272 kubelet[2874]: I0213 15:38:39.809102 2874 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xc85n\" (UniqueName: \"kubernetes.io/projected/879fd5f1-a4a7-484b-9b3f-a2a99115c50f-kube-api-access-xc85n\") pod \"coredns-7db6d8ff4d-twtjp\" (UID: \"879fd5f1-a4a7-484b-9b3f-a2a99115c50f\") " pod="kube-system/coredns-7db6d8ff4d-twtjp" Feb 13 15:38:39.809272 kubelet[2874]: I0213 15:38:39.809126 2874 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/46d8298c-33c1-4db7-866e-801d44a71f1d-config-volume\") pod \"coredns-7db6d8ff4d-jmx9r\" (UID: \"46d8298c-33c1-4db7-866e-801d44a71f1d\") " pod="kube-system/coredns-7db6d8ff4d-jmx9r" Feb 13 15:38:39.954685 containerd[1479]: time="2025-02-13T15:38:39.953941047Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-jmx9r,Uid:46d8298c-33c1-4db7-866e-801d44a71f1d,Namespace:kube-system,Attempt:0,}" Feb 13 15:38:39.971007 containerd[1479]: time="2025-02-13T15:38:39.969165045Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-7db6d8ff4d-twtjp,Uid:879fd5f1-a4a7-484b-9b3f-a2a99115c50f,Namespace:kube-system,Attempt:0,}" Feb 13 15:38:40.867620 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3818023489.mount: Deactivated successfully. Feb 13 15:38:41.234587 containerd[1479]: time="2025-02-13T15:38:41.234501213Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:38:41.236253 containerd[1479]: time="2025-02-13T15:38:41.236184941Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306" Feb 13 15:38:41.237319 containerd[1479]: time="2025-02-13T15:38:41.237256467Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:38:41.238552 containerd[1479]: time="2025-02-13T15:38:41.238493113Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 5.891756417s" Feb 13 15:38:41.238552 containerd[1479]: time="2025-02-13T15:38:41.238538473Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Feb 13 15:38:41.243400 containerd[1479]: time="2025-02-13T15:38:41.243270616Z" level=info msg="CreateContainer within sandbox 
\"070cc58a5be8c29c0cc688ff06da315696fa97f0574891472dd048d7fa0e1c2f\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Feb 13 15:38:41.267163 containerd[1479]: time="2025-02-13T15:38:41.267116853Z" level=info msg="CreateContainer within sandbox \"070cc58a5be8c29c0cc688ff06da315696fa97f0574891472dd048d7fa0e1c2f\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"bd079279c0fb99ac880b59915127e79a8cf2de3b6bbb662722872f5a6e17b56b\"" Feb 13 15:38:41.268888 containerd[1479]: time="2025-02-13T15:38:41.268837381Z" level=info msg="StartContainer for \"bd079279c0fb99ac880b59915127e79a8cf2de3b6bbb662722872f5a6e17b56b\"" Feb 13 15:38:41.299109 systemd[1]: Started cri-containerd-bd079279c0fb99ac880b59915127e79a8cf2de3b6bbb662722872f5a6e17b56b.scope - libcontainer container bd079279c0fb99ac880b59915127e79a8cf2de3b6bbb662722872f5a6e17b56b. Feb 13 15:38:41.331105 containerd[1479]: time="2025-02-13T15:38:41.330973085Z" level=info msg="StartContainer for \"bd079279c0fb99ac880b59915127e79a8cf2de3b6bbb662722872f5a6e17b56b\" returns successfully" Feb 13 15:38:42.011902 systemd[1]: sshd@1-78.46.147.231:22-39.99.212.219:42420.service: Deactivated successfully. 
Feb 13 15:38:42.323313 kubelet[2874]: I0213 15:38:42.322839 2874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-mvkn4" podStartSLOduration=7.835689907 podStartE2EDuration="16.322815109s" podCreationTimestamp="2025-02-13 15:38:26 +0000 UTC" firstStartedPulling="2025-02-13 15:38:26.859020731 +0000 UTC m=+14.849964035" lastFinishedPulling="2025-02-13 15:38:35.346145893 +0000 UTC m=+23.337089237" observedRunningTime="2025-02-13 15:38:40.320003676 +0000 UTC m=+28.310947180" watchObservedRunningTime="2025-02-13 15:38:42.322815109 +0000 UTC m=+30.313758493" Feb 13 15:38:42.323313 kubelet[2874]: I0213 15:38:42.323170 2874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-599987898-7pq5m" podStartSLOduration=1.974988419 podStartE2EDuration="16.323159631s" podCreationTimestamp="2025-02-13 15:38:26 +0000 UTC" firstStartedPulling="2025-02-13 15:38:26.891770988 +0000 UTC m=+14.882714332" lastFinishedPulling="2025-02-13 15:38:41.23994224 +0000 UTC m=+29.230885544" observedRunningTime="2025-02-13 15:38:42.322600428 +0000 UTC m=+30.313543772" watchObservedRunningTime="2025-02-13 15:38:42.323159631 +0000 UTC m=+30.314102975" Feb 13 15:38:44.672090 systemd-networkd[1369]: cilium_host: Link UP Feb 13 15:38:44.672338 systemd-networkd[1369]: cilium_net: Link UP Feb 13 15:38:44.672343 systemd-networkd[1369]: cilium_net: Gained carrier Feb 13 15:38:44.675301 systemd-networkd[1369]: cilium_host: Gained carrier Feb 13 15:38:44.676333 systemd-networkd[1369]: cilium_host: Gained IPv6LL Feb 13 15:38:44.822974 systemd-networkd[1369]: cilium_vxlan: Link UP Feb 13 15:38:44.822994 systemd-networkd[1369]: cilium_vxlan: Gained carrier Feb 13 15:38:45.066255 systemd-networkd[1369]: cilium_net: Gained IPv6LL Feb 13 15:38:45.137645 kernel: NET: Registered PF_ALG protocol family Feb 13 15:38:45.870369 systemd-networkd[1369]: lxc_health: Link UP Feb 13 15:38:45.887068 systemd-networkd[1369]: lxc_health: 
Gained carrier Feb 13 15:38:46.031655 systemd-networkd[1369]: lxc2e2f47bb49d0: Link UP Feb 13 15:38:46.035649 kernel: eth0: renamed from tmpad59a Feb 13 15:38:46.041174 systemd-networkd[1369]: lxc2e2f47bb49d0: Gained carrier Feb 13 15:38:46.049803 systemd-networkd[1369]: lxca7a717dc7907: Link UP Feb 13 15:38:46.057045 kernel: eth0: renamed from tmpee0a7 Feb 13 15:38:46.065383 systemd-networkd[1369]: lxca7a717dc7907: Gained carrier Feb 13 15:38:46.642117 systemd-networkd[1369]: cilium_vxlan: Gained IPv6LL Feb 13 15:38:47.154132 systemd-networkd[1369]: lxca7a717dc7907: Gained IPv6LL Feb 13 15:38:47.282126 systemd-networkd[1369]: lxc_health: Gained IPv6LL Feb 13 15:38:47.602960 systemd-networkd[1369]: lxc2e2f47bb49d0: Gained IPv6LL Feb 13 15:38:50.562961 containerd[1479]: time="2025-02-13T15:38:50.562779191Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:38:50.562961 containerd[1479]: time="2025-02-13T15:38:50.562865071Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:38:50.562961 containerd[1479]: time="2025-02-13T15:38:50.562883231Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:38:50.564770 containerd[1479]: time="2025-02-13T15:38:50.563039072Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:38:50.600747 systemd[1]: Started cri-containerd-ee0a7c7094cb6d1c66ef6ed4f647587009ef55613fb9f8c6150c4e2be7631966.scope - libcontainer container ee0a7c7094cb6d1c66ef6ed4f647587009ef55613fb9f8c6150c4e2be7631966. Feb 13 15:38:50.645191 containerd[1479]: time="2025-02-13T15:38:50.644977654Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:38:50.645191 containerd[1479]: time="2025-02-13T15:38:50.645065015Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:38:50.645521 containerd[1479]: time="2025-02-13T15:38:50.645418976Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:38:50.646130 containerd[1479]: time="2025-02-13T15:38:50.645648937Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:38:50.681762 systemd[1]: Started cri-containerd-ad59a36803e330c30b629fff25d46337a9bd2cbebee5e00b919b585bc2925eeb.scope - libcontainer container ad59a36803e330c30b629fff25d46337a9bd2cbebee5e00b919b585bc2925eeb. Feb 13 15:38:50.689879 containerd[1479]: time="2025-02-13T15:38:50.689828442Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-twtjp,Uid:879fd5f1-a4a7-484b-9b3f-a2a99115c50f,Namespace:kube-system,Attempt:0,} returns sandbox id \"ee0a7c7094cb6d1c66ef6ed4f647587009ef55613fb9f8c6150c4e2be7631966\"" Feb 13 15:38:50.699782 containerd[1479]: time="2025-02-13T15:38:50.699454002Z" level=info msg="CreateContainer within sandbox \"ee0a7c7094cb6d1c66ef6ed4f647587009ef55613fb9f8c6150c4e2be7631966\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 13 15:38:50.735007 containerd[1479]: time="2025-02-13T15:38:50.734914071Z" level=info msg="CreateContainer within sandbox \"ee0a7c7094cb6d1c66ef6ed4f647587009ef55613fb9f8c6150c4e2be7631966\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"2c062ad8ea700da6d19b91a3102083882c47e23253c93e23300ac98422d31395\"" Feb 13 15:38:50.739246 containerd[1479]: time="2025-02-13T15:38:50.738615766Z" level=info msg="StartContainer for \"2c062ad8ea700da6d19b91a3102083882c47e23253c93e23300ac98422d31395\"" 
Feb 13 15:38:50.771217 containerd[1479]: time="2025-02-13T15:38:50.770408299Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-jmx9r,Uid:46d8298c-33c1-4db7-866e-801d44a71f1d,Namespace:kube-system,Attempt:0,} returns sandbox id \"ad59a36803e330c30b629fff25d46337a9bd2cbebee5e00b919b585bc2925eeb\""
Feb 13 15:38:50.782335 containerd[1479]: time="2025-02-13T15:38:50.782012948Z" level=info msg="CreateContainer within sandbox \"ad59a36803e330c30b629fff25d46337a9bd2cbebee5e00b919b585bc2925eeb\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Feb 13 15:38:50.812739 systemd[1]: Started cri-containerd-2c062ad8ea700da6d19b91a3102083882c47e23253c93e23300ac98422d31395.scope - libcontainer container 2c062ad8ea700da6d19b91a3102083882c47e23253c93e23300ac98422d31395.
Feb 13 15:38:50.816518 containerd[1479]: time="2025-02-13T15:38:50.816049050Z" level=info msg="CreateContainer within sandbox \"ad59a36803e330c30b629fff25d46337a9bd2cbebee5e00b919b585bc2925eeb\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"1d07a7a3b0b95c65c843f9e894972f9d0efa050d108062d1326ce4ec2ca5855f\""
Feb 13 15:38:50.820210 containerd[1479]: time="2025-02-13T15:38:50.817567697Z" level=info msg="StartContainer for \"1d07a7a3b0b95c65c843f9e894972f9d0efa050d108062d1326ce4ec2ca5855f\""
Feb 13 15:38:50.867697 systemd[1]: Started cri-containerd-1d07a7a3b0b95c65c843f9e894972f9d0efa050d108062d1326ce4ec2ca5855f.scope - libcontainer container 1d07a7a3b0b95c65c843f9e894972f9d0efa050d108062d1326ce4ec2ca5855f.
Feb 13 15:38:50.885979 containerd[1479]: time="2025-02-13T15:38:50.885899703Z" level=info msg="StartContainer for \"2c062ad8ea700da6d19b91a3102083882c47e23253c93e23300ac98422d31395\" returns successfully"
Feb 13 15:38:50.929058 containerd[1479]: time="2025-02-13T15:38:50.928889042Z" level=info msg="StartContainer for \"1d07a7a3b0b95c65c843f9e894972f9d0efa050d108062d1326ce4ec2ca5855f\" returns successfully"
Feb 13 15:38:51.357353 kubelet[2874]: I0213 15:38:51.357257 2874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-twtjp" podStartSLOduration=25.357224891 podStartE2EDuration="25.357224891s" podCreationTimestamp="2025-02-13 15:38:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:38:51.353993477 +0000 UTC m=+39.344936821" watchObservedRunningTime="2025-02-13 15:38:51.357224891 +0000 UTC m=+39.348168195"
Feb 13 15:38:51.405828 kubelet[2874]: I0213 15:38:51.404869 2874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-jmx9r" podStartSLOduration=25.404847807 podStartE2EDuration="25.404847807s" podCreationTimestamp="2025-02-13 15:38:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:38:51.379089301 +0000 UTC m=+39.370032645" watchObservedRunningTime="2025-02-13 15:38:51.404847807 +0000 UTC m=+39.395791151"
Feb 13 15:38:57.373753 systemd[1]: Started sshd@15-78.46.147.231:22-101.126.78.108:45760.service - OpenSSH per-connection server daemon (101.126.78.108:45760).
Feb 13 15:39:25.476594 systemd[1]: sshd@3-78.46.147.231:22-39.99.212.219:57614.service: Deactivated successfully.
Feb 13 15:39:28.779075 systemd[1]: sshd@4-78.46.147.231:22-183.63.103.84:14697.service: Deactivated successfully.
Feb 13 15:39:44.287204 systemd[1]: Started sshd@16-78.46.147.231:22-183.63.103.84:64900.service - OpenSSH per-connection server daemon (183.63.103.84:64900).
Feb 13 15:39:51.180879 systemd[1]: Started sshd@17-78.46.147.231:22-101.126.78.108:54488.service - OpenSSH per-connection server daemon (101.126.78.108:54488).
Feb 13 15:39:54.816944 systemd[1]: sshd@12-78.46.147.231:22-39.99.212.219:38460.service: Deactivated successfully.
Feb 13 15:40:06.356013 systemd[1]: sshd@13-78.46.147.231:22-101.126.78.108:36858.service: Deactivated successfully.
Feb 13 15:40:35.480209 systemd[1]: sshd@14-78.46.147.231:22-183.63.103.84:58206.service: Deactivated successfully.
Feb 13 15:40:44.736892 systemd[1]: Started sshd@18-78.46.147.231:22-101.126.78.108:63374.service - OpenSSH per-connection server daemon (101.126.78.108:63374).
Feb 13 15:40:53.269008 systemd[1]: Started sshd@19-78.46.147.231:22-183.63.103.84:47795.service - OpenSSH per-connection server daemon (183.63.103.84:47795).
Feb 13 15:40:55.988849 sshd[4290]: Invalid user zookeeper from 183.63.103.84 port 47795
Feb 13 15:40:56.267982 sshd[4290]: Received disconnect from 183.63.103.84 port 47795:11: Bye Bye [preauth]
Feb 13 15:40:56.267982 sshd[4290]: Disconnected from invalid user zookeeper 183.63.103.84 port 47795 [preauth]
Feb 13 15:40:56.270503 systemd[1]: sshd@19-78.46.147.231:22-183.63.103.84:47795.service: Deactivated successfully.
Feb 13 15:40:57.395708 systemd[1]: sshd@15-78.46.147.231:22-101.126.78.108:45760.service: Deactivated successfully.
Feb 13 15:41:35.502594 update_engine[1456]: I20250213 15:41:35.501633 1456 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs
Feb 13 15:41:35.502594 update_engine[1456]: I20250213 15:41:35.501708 1456 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs
Feb 13 15:41:35.502594 update_engine[1456]: I20250213 15:41:35.502146 1456 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs
Feb 13 15:41:35.503195 update_engine[1456]: I20250213 15:41:35.503100 1456 omaha_request_params.cc:62] Current group set to stable
Feb 13 15:41:35.504071 update_engine[1456]: I20250213 15:41:35.503381 1456 update_attempter.cc:499] Already updated boot flags. Skipping.
Feb 13 15:41:35.504071 update_engine[1456]: I20250213 15:41:35.503412 1456 update_attempter.cc:643] Scheduling an action processor start.
Feb 13 15:41:35.504071 update_engine[1456]: I20250213 15:41:35.503460 1456 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Feb 13 15:41:35.504071 update_engine[1456]: I20250213 15:41:35.503521 1456 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs
Feb 13 15:41:35.504071 update_engine[1456]: I20250213 15:41:35.503679 1456 omaha_request_action.cc:271] Posting an Omaha request to disabled
Feb 13 15:41:35.504071 update_engine[1456]: I20250213 15:41:35.503698 1456 omaha_request_action.cc:272] Request:
Feb 13 15:41:35.504071 update_engine[1456]:
Feb 13 15:41:35.504071 update_engine[1456]:
Feb 13 15:41:35.504071 update_engine[1456]:
Feb 13 15:41:35.504071 update_engine[1456]:
Feb 13 15:41:35.504071 update_engine[1456]:
Feb 13 15:41:35.504071 update_engine[1456]:
Feb 13 15:41:35.504071 update_engine[1456]:
Feb 13 15:41:35.504071 update_engine[1456]:
Feb 13 15:41:35.504071 update_engine[1456]: I20250213 15:41:35.503715 1456 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Feb 13 15:41:35.504760 locksmithd[1497]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0
Feb 13 15:41:35.505807 update_engine[1456]: I20250213 15:41:35.505743 1456 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Feb 13 15:41:35.506266 update_engine[1456]: I20250213 15:41:35.506197 1456 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Feb 13 15:41:35.507073 update_engine[1456]: E20250213 15:41:35.507001 1456 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Feb 13 15:41:35.507209 update_engine[1456]: I20250213 15:41:35.507081 1456 libcurl_http_fetcher.cc:283] No HTTP response, retry 1
Feb 13 15:41:38.117947 systemd[1]: Started sshd@20-78.46.147.231:22-101.126.78.108:18550.service - OpenSSH per-connection server daemon (101.126.78.108:18550).
Feb 13 15:41:44.308899 systemd[1]: sshd@16-78.46.147.231:22-183.63.103.84:64900.service: Deactivated successfully.
Feb 13 15:41:45.412836 update_engine[1456]: I20250213 15:41:45.412479 1456 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Feb 13 15:41:45.412836 update_engine[1456]: I20250213 15:41:45.412799 1456 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Feb 13 15:41:45.413290 update_engine[1456]: I20250213 15:41:45.413074 1456 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Feb 13 15:41:45.414811 update_engine[1456]: E20250213 15:41:45.414603 1456 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Feb 13 15:41:45.415108 update_engine[1456]: I20250213 15:41:45.414979 1456 libcurl_http_fetcher.cc:283] No HTTP response, retry 2
Feb 13 15:41:51.213159 systemd[1]: sshd@17-78.46.147.231:22-101.126.78.108:54488.service: Deactivated successfully.
Feb 13 15:41:55.412511 update_engine[1456]: I20250213 15:41:55.412333 1456 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Feb 13 15:41:55.413046 update_engine[1456]: I20250213 15:41:55.412685 1456 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Feb 13 15:41:55.413098 update_engine[1456]: I20250213 15:41:55.413038 1456 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Feb 13 15:41:55.413634 update_engine[1456]: E20250213 15:41:55.413517 1456 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Feb 13 15:41:55.413713 update_engine[1456]: I20250213 15:41:55.413667 1456 libcurl_http_fetcher.cc:283] No HTTP response, retry 3
Feb 13 15:42:02.476046 systemd[1]: Started sshd@21-78.46.147.231:22-183.63.103.84:7949.service - OpenSSH per-connection server daemon (183.63.103.84:7949).
Feb 13 15:42:05.411710 update_engine[1456]: I20250213 15:42:05.410859 1456 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Feb 13 15:42:05.411710 update_engine[1456]: I20250213 15:42:05.411242 1456 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Feb 13 15:42:05.411710 update_engine[1456]: I20250213 15:42:05.411619 1456 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Feb 13 15:42:05.412678 update_engine[1456]: E20250213 15:42:05.412597 1456 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Feb 13 15:42:05.412678 update_engine[1456]: I20250213 15:42:05.412670 1456 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Feb 13 15:42:05.412678 update_engine[1456]: I20250213 15:42:05.412683 1456 omaha_request_action.cc:617] Omaha request response:
Feb 13 15:42:05.412900 update_engine[1456]: E20250213 15:42:05.412809 1456 omaha_request_action.cc:636] Omaha request network transfer failed.
Feb 13 15:42:05.412900 update_engine[1456]: I20250213 15:42:05.412832 1456 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing.
Feb 13 15:42:05.412900 update_engine[1456]: I20250213 15:42:05.412838 1456 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Feb 13 15:42:05.412900 update_engine[1456]: I20250213 15:42:05.412843 1456 update_attempter.cc:306] Processing Done.
Feb 13 15:42:05.412900 update_engine[1456]: E20250213 15:42:05.412861 1456 update_attempter.cc:619] Update failed.
Feb 13 15:42:05.412900 update_engine[1456]: I20250213 15:42:05.412868 1456 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse
Feb 13 15:42:05.412900 update_engine[1456]: I20250213 15:42:05.412873 1456 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse)
Feb 13 15:42:05.412900 update_engine[1456]: I20250213 15:42:05.412880 1456 payload_state.cc:103] Ignoring failures until we get a valid Omaha response.
Feb 13 15:42:05.413269 update_engine[1456]: I20250213 15:42:05.412958 1456 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Feb 13 15:42:05.413269 update_engine[1456]: I20250213 15:42:05.413041 1456 omaha_request_action.cc:271] Posting an Omaha request to disabled
Feb 13 15:42:05.413269 update_engine[1456]: I20250213 15:42:05.413056 1456 omaha_request_action.cc:272] Request:
Feb 13 15:42:05.413269 update_engine[1456]:
Feb 13 15:42:05.413269 update_engine[1456]:
Feb 13 15:42:05.413269 update_engine[1456]:
Feb 13 15:42:05.413269 update_engine[1456]:
Feb 13 15:42:05.413269 update_engine[1456]:
Feb 13 15:42:05.413269 update_engine[1456]:
Feb 13 15:42:05.413269 update_engine[1456]: I20250213 15:42:05.413064 1456 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Feb 13 15:42:05.413685 update_engine[1456]: I20250213 15:42:05.413375 1456 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Feb 13 15:42:05.413746 update_engine[1456]: I20250213 15:42:05.413673 1456 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Feb 13 15:42:05.414269 update_engine[1456]: E20250213 15:42:05.414001 1456 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Feb 13 15:42:05.414269 update_engine[1456]: I20250213 15:42:05.414184 1456 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Feb 13 15:42:05.414269 update_engine[1456]: I20250213 15:42:05.414202 1456 omaha_request_action.cc:617] Omaha request response:
Feb 13 15:42:05.414269 update_engine[1456]: I20250213 15:42:05.414211 1456 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Feb 13 15:42:05.414269 update_engine[1456]: I20250213 15:42:05.414215 1456 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Feb 13 15:42:05.414269 update_engine[1456]: I20250213 15:42:05.414220 1456 update_attempter.cc:306] Processing Done.
Feb 13 15:42:05.414269 update_engine[1456]: I20250213 15:42:05.414227 1456 update_attempter.cc:310] Error event sent.
Feb 13 15:42:05.414269 update_engine[1456]: I20250213 15:42:05.414237 1456 update_check_scheduler.cc:74] Next update check in 44m8s
Feb 13 15:42:05.414536 locksmithd[1497]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0
Feb 13 15:42:05.415108 locksmithd[1497]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0
Feb 13 15:42:31.502902 systemd[1]: Started sshd@22-78.46.147.231:22-101.126.78.108:27618.service - OpenSSH per-connection server daemon (101.126.78.108:27618).
Feb 13 15:42:44.763370 systemd[1]: sshd@18-78.46.147.231:22-101.126.78.108:63374.service: Deactivated successfully.
Feb 13 15:43:00.603864 systemd[1]: Started sshd@23-78.46.147.231:22-139.178.89.65:58642.service - OpenSSH per-connection server daemon (139.178.89.65:58642).
Feb 13 15:43:01.602481 sshd[4326]: Accepted publickey for core from 139.178.89.65 port 58642 ssh2: RSA SHA256:gOK0CXwz1tCyASFR4SaHsklqtVIpFWLnYpEIAs0TKRk
Feb 13 15:43:01.609177 sshd-session[4326]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:43:01.627920 systemd-logind[1455]: New session 8 of user core.
Feb 13 15:43:01.632789 systemd[1]: Started session-8.scope - Session 8 of User core.
Feb 13 15:43:02.397359 sshd[4328]: Connection closed by 139.178.89.65 port 58642
Feb 13 15:43:02.398882 sshd-session[4326]: pam_unix(sshd:session): session closed for user core
Feb 13 15:43:02.406797 systemd[1]: sshd@23-78.46.147.231:22-139.178.89.65:58642.service: Deactivated successfully.
Feb 13 15:43:02.410627 systemd[1]: session-8.scope: Deactivated successfully.
Feb 13 15:43:02.411701 systemd-logind[1455]: Session 8 logged out. Waiting for processes to exit.
Feb 13 15:43:02.412975 systemd-logind[1455]: Removed session 8.
Feb 13 15:43:07.572896 systemd[1]: Started sshd@24-78.46.147.231:22-139.178.89.65:58540.service - OpenSSH per-connection server daemon (139.178.89.65:58540).
Feb 13 15:43:08.563669 sshd[4340]: Accepted publickey for core from 139.178.89.65 port 58540 ssh2: RSA SHA256:gOK0CXwz1tCyASFR4SaHsklqtVIpFWLnYpEIAs0TKRk
Feb 13 15:43:08.565332 sshd-session[4340]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:43:08.573878 systemd-logind[1455]: New session 9 of user core.
Feb 13 15:43:08.583746 systemd[1]: Started session-9.scope - Session 9 of User core.
Feb 13 15:43:09.318805 sshd[4342]: Connection closed by 139.178.89.65 port 58540
Feb 13 15:43:09.320673 sshd-session[4340]: pam_unix(sshd:session): session closed for user core
Feb 13 15:43:09.326095 systemd-logind[1455]: Session 9 logged out. Waiting for processes to exit.
Feb 13 15:43:09.327847 systemd[1]: sshd@24-78.46.147.231:22-139.178.89.65:58540.service: Deactivated successfully.
Feb 13 15:43:09.330891 systemd[1]: session-9.scope: Deactivated successfully.
Feb 13 15:43:09.336206 systemd-logind[1455]: Removed session 9.
Feb 13 15:43:11.448923 systemd[1]: Started sshd@25-78.46.147.231:22-183.63.103.84:14917.service - OpenSSH per-connection server daemon (183.63.103.84:14917).
Feb 13 15:43:14.499107 systemd[1]: Started sshd@26-78.46.147.231:22-139.178.89.65:58544.service - OpenSSH per-connection server daemon (139.178.89.65:58544).
Feb 13 15:43:15.506228 sshd[4360]: Accepted publickey for core from 139.178.89.65 port 58544 ssh2: RSA SHA256:gOK0CXwz1tCyASFR4SaHsklqtVIpFWLnYpEIAs0TKRk
Feb 13 15:43:15.509036 sshd-session[4360]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:43:15.515575 systemd-logind[1455]: New session 10 of user core.
Feb 13 15:43:15.528727 systemd[1]: Started session-10.scope - Session 10 of User core.
Feb 13 15:43:16.273968 sshd[4362]: Connection closed by 139.178.89.65 port 58544
Feb 13 15:43:16.274710 sshd-session[4360]: pam_unix(sshd:session): session closed for user core
Feb 13 15:43:16.280197 systemd[1]: sshd@26-78.46.147.231:22-139.178.89.65:58544.service: Deactivated successfully.
Feb 13 15:43:16.285975 systemd[1]: session-10.scope: Deactivated successfully.
Feb 13 15:43:16.288179 systemd-logind[1455]: Session 10 logged out. Waiting for processes to exit.
Feb 13 15:43:16.289629 systemd-logind[1455]: Removed session 10.
Feb 13 15:43:16.453048 systemd[1]: Started sshd@27-78.46.147.231:22-139.178.89.65:52482.service - OpenSSH per-connection server daemon (139.178.89.65:52482).
Feb 13 15:43:17.440282 sshd[4374]: Accepted publickey for core from 139.178.89.65 port 52482 ssh2: RSA SHA256:gOK0CXwz1tCyASFR4SaHsklqtVIpFWLnYpEIAs0TKRk
Feb 13 15:43:17.443673 sshd-session[4374]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:43:17.453006 systemd-logind[1455]: New session 11 of user core.
Feb 13 15:43:17.458856 systemd[1]: Started session-11.scope - Session 11 of User core.
Feb 13 15:43:18.245867 sshd[4376]: Connection closed by 139.178.89.65 port 52482
Feb 13 15:43:18.246704 sshd-session[4374]: pam_unix(sshd:session): session closed for user core
Feb 13 15:43:18.251791 systemd[1]: sshd@27-78.46.147.231:22-139.178.89.65:52482.service: Deactivated successfully.
Feb 13 15:43:18.254532 systemd[1]: session-11.scope: Deactivated successfully.
Feb 13 15:43:18.255536 systemd-logind[1455]: Session 11 logged out. Waiting for processes to exit.
Feb 13 15:43:18.257567 systemd-logind[1455]: Removed session 11.
Feb 13 15:43:18.426982 systemd[1]: Started sshd@28-78.46.147.231:22-139.178.89.65:52484.service - OpenSSH per-connection server daemon (139.178.89.65:52484).
Feb 13 15:43:19.406604 sshd[4385]: Accepted publickey for core from 139.178.89.65 port 52484 ssh2: RSA SHA256:gOK0CXwz1tCyASFR4SaHsklqtVIpFWLnYpEIAs0TKRk
Feb 13 15:43:19.408725 sshd-session[4385]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:43:19.415672 systemd-logind[1455]: New session 12 of user core.
Feb 13 15:43:19.422797 systemd[1]: Started session-12.scope - Session 12 of User core.
Feb 13 15:43:20.156421 sshd[4387]: Connection closed by 139.178.89.65 port 52484
Feb 13 15:43:20.157982 sshd-session[4385]: pam_unix(sshd:session): session closed for user core
Feb 13 15:43:20.166876 systemd[1]: sshd@28-78.46.147.231:22-139.178.89.65:52484.service: Deactivated successfully.
Feb 13 15:43:20.167279 systemd-logind[1455]: Session 12 logged out. Waiting for processes to exit.
Feb 13 15:43:20.171164 systemd[1]: session-12.scope: Deactivated successfully.
Feb 13 15:43:20.175141 systemd-logind[1455]: Removed session 12.
Feb 13 15:43:24.171888 systemd[1]: Started sshd@29-78.46.147.231:22-101.126.78.108:36140.service - OpenSSH per-connection server daemon (101.126.78.108:36140).
Feb 13 15:43:25.332213 systemd[1]: Started sshd@30-78.46.147.231:22-139.178.89.65:53790.service - OpenSSH per-connection server daemon (139.178.89.65:53790).
Feb 13 15:43:26.308025 sshd[4400]: Accepted publickey for core from 139.178.89.65 port 53790 ssh2: RSA SHA256:gOK0CXwz1tCyASFR4SaHsklqtVIpFWLnYpEIAs0TKRk
Feb 13 15:43:26.310172 sshd-session[4400]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:43:26.316158 systemd-logind[1455]: New session 13 of user core.
Feb 13 15:43:26.322735 systemd[1]: Started session-13.scope - Session 13 of User core.
Feb 13 15:43:27.061270 sshd[4402]: Connection closed by 139.178.89.65 port 53790
Feb 13 15:43:27.062848 sshd-session[4400]: pam_unix(sshd:session): session closed for user core
Feb 13 15:43:27.070902 systemd[1]: sshd@30-78.46.147.231:22-139.178.89.65:53790.service: Deactivated successfully.
Feb 13 15:43:27.074894 systemd[1]: session-13.scope: Deactivated successfully.
Feb 13 15:43:27.076349 systemd-logind[1455]: Session 13 logged out. Waiting for processes to exit.
Feb 13 15:43:27.077956 systemd-logind[1455]: Removed session 13.
Feb 13 15:43:27.240100 systemd[1]: Started sshd@31-78.46.147.231:22-139.178.89.65:53796.service - OpenSSH per-connection server daemon (139.178.89.65:53796).
Feb 13 15:43:28.226955 sshd[4416]: Accepted publickey for core from 139.178.89.65 port 53796 ssh2: RSA SHA256:gOK0CXwz1tCyASFR4SaHsklqtVIpFWLnYpEIAs0TKRk
Feb 13 15:43:28.229085 sshd-session[4416]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:43:28.234611 systemd-logind[1455]: New session 14 of user core.
Feb 13 15:43:28.238693 systemd[1]: Started session-14.scope - Session 14 of User core.
Feb 13 15:43:29.033555 sshd[4418]: Connection closed by 139.178.89.65 port 53796
Feb 13 15:43:29.034629 sshd-session[4416]: pam_unix(sshd:session): session closed for user core
Feb 13 15:43:29.040032 systemd[1]: sshd@31-78.46.147.231:22-139.178.89.65:53796.service: Deactivated successfully.
Feb 13 15:43:29.042285 systemd[1]: session-14.scope: Deactivated successfully.
Feb 13 15:43:29.043726 systemd-logind[1455]: Session 14 logged out. Waiting for processes to exit.
Feb 13 15:43:29.044928 systemd-logind[1455]: Removed session 14.
Feb 13 15:43:29.209908 systemd[1]: Started sshd@32-78.46.147.231:22-139.178.89.65:53802.service - OpenSSH per-connection server daemon (139.178.89.65:53802).
Feb 13 15:43:30.209254 sshd[4426]: Accepted publickey for core from 139.178.89.65 port 53802 ssh2: RSA SHA256:gOK0CXwz1tCyASFR4SaHsklqtVIpFWLnYpEIAs0TKRk
Feb 13 15:43:30.212178 sshd-session[4426]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:43:30.218778 systemd-logind[1455]: New session 15 of user core.
Feb 13 15:43:30.225590 systemd[1]: Started session-15.scope - Session 15 of User core.
Feb 13 15:43:32.618620 sshd[4428]: Connection closed by 139.178.89.65 port 53802
Feb 13 15:43:32.620917 sshd-session[4426]: pam_unix(sshd:session): session closed for user core
Feb 13 15:43:32.625276 systemd-logind[1455]: Session 15 logged out. Waiting for processes to exit.
Feb 13 15:43:32.626130 systemd[1]: sshd@32-78.46.147.231:22-139.178.89.65:53802.service: Deactivated successfully.
Feb 13 15:43:32.629643 systemd[1]: session-15.scope: Deactivated successfully.
Feb 13 15:43:32.632115 systemd-logind[1455]: Removed session 15.
Feb 13 15:43:32.798150 systemd[1]: Started sshd@33-78.46.147.231:22-139.178.89.65:53816.service - OpenSSH per-connection server daemon (139.178.89.65:53816).
Feb 13 15:43:33.794757 sshd[4444]: Accepted publickey for core from 139.178.89.65 port 53816 ssh2: RSA SHA256:gOK0CXwz1tCyASFR4SaHsklqtVIpFWLnYpEIAs0TKRk
Feb 13 15:43:33.797980 sshd-session[4444]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:43:33.804716 systemd-logind[1455]: New session 16 of user core.
Feb 13 15:43:33.808663 systemd[1]: Started session-16.scope - Session 16 of User core.
Feb 13 15:43:34.696639 sshd[4446]: Connection closed by 139.178.89.65 port 53816
Feb 13 15:43:34.698313 sshd-session[4444]: pam_unix(sshd:session): session closed for user core
Feb 13 15:43:34.708271 systemd[1]: sshd@33-78.46.147.231:22-139.178.89.65:53816.service: Deactivated successfully.
Feb 13 15:43:34.713088 systemd[1]: session-16.scope: Deactivated successfully.
Feb 13 15:43:34.721291 systemd-logind[1455]: Session 16 logged out. Waiting for processes to exit.
Feb 13 15:43:34.728979 systemd-logind[1455]: Removed session 16.
Feb 13 15:43:34.869202 systemd[1]: Started sshd@34-78.46.147.231:22-139.178.89.65:34666.service - OpenSSH per-connection server daemon (139.178.89.65:34666).
Feb 13 15:43:35.883054 sshd[4455]: Accepted publickey for core from 139.178.89.65 port 34666 ssh2: RSA SHA256:gOK0CXwz1tCyASFR4SaHsklqtVIpFWLnYpEIAs0TKRk
Feb 13 15:43:35.885559 sshd-session[4455]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:43:35.891536 systemd-logind[1455]: New session 17 of user core.
Feb 13 15:43:35.898696 systemd[1]: Started session-17.scope - Session 17 of User core.
Feb 13 15:43:36.653852 sshd[4457]: Connection closed by 139.178.89.65 port 34666
Feb 13 15:43:36.654999 sshd-session[4455]: pam_unix(sshd:session): session closed for user core
Feb 13 15:43:36.660799 systemd[1]: sshd@34-78.46.147.231:22-139.178.89.65:34666.service: Deactivated successfully.
Feb 13 15:43:36.663211 systemd[1]: session-17.scope: Deactivated successfully.
Feb 13 15:43:36.665283 systemd-logind[1455]: Session 17 logged out. Waiting for processes to exit.
Feb 13 15:43:36.666870 systemd-logind[1455]: Removed session 17.
Feb 13 15:43:38.144150 systemd[1]: sshd@20-78.46.147.231:22-101.126.78.108:18550.service: Deactivated successfully.
Feb 13 15:43:41.830150 systemd[1]: Started sshd@35-78.46.147.231:22-139.178.89.65:34668.service - OpenSSH per-connection server daemon (139.178.89.65:34668).
Feb 13 15:43:42.821829 sshd[4472]: Accepted publickey for core from 139.178.89.65 port 34668 ssh2: RSA SHA256:gOK0CXwz1tCyASFR4SaHsklqtVIpFWLnYpEIAs0TKRk
Feb 13 15:43:42.823766 sshd-session[4472]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:43:42.831762 systemd-logind[1455]: New session 18 of user core.
Feb 13 15:43:42.836711 systemd[1]: Started session-18.scope - Session 18 of User core.
Feb 13 15:43:43.580010 sshd[4474]: Connection closed by 139.178.89.65 port 34668
Feb 13 15:43:43.579833 sshd-session[4472]: pam_unix(sshd:session): session closed for user core
Feb 13 15:43:43.585810 systemd[1]: sshd@35-78.46.147.231:22-139.178.89.65:34668.service: Deactivated successfully.
Feb 13 15:43:43.589716 systemd[1]: session-18.scope: Deactivated successfully.
Feb 13 15:43:43.591773 systemd-logind[1455]: Session 18 logged out. Waiting for processes to exit.
Feb 13 15:43:43.593098 systemd-logind[1455]: Removed session 18.
Feb 13 15:43:48.750934 systemd[1]: Started sshd@36-78.46.147.231:22-139.178.89.65:48110.service - OpenSSH per-connection server daemon (139.178.89.65:48110).
Feb 13 15:43:49.728596 sshd[4485]: Accepted publickey for core from 139.178.89.65 port 48110 ssh2: RSA SHA256:gOK0CXwz1tCyASFR4SaHsklqtVIpFWLnYpEIAs0TKRk
Feb 13 15:43:49.733159 sshd-session[4485]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:43:49.747226 systemd-logind[1455]: New session 19 of user core.
Feb 13 15:43:49.755688 systemd[1]: Started session-19.scope - Session 19 of User core.
Feb 13 15:43:50.482134 sshd[4487]: Connection closed by 139.178.89.65 port 48110
Feb 13 15:43:50.483162 sshd-session[4485]: pam_unix(sshd:session): session closed for user core
Feb 13 15:43:50.488351 systemd[1]: sshd@36-78.46.147.231:22-139.178.89.65:48110.service: Deactivated successfully.
Feb 13 15:43:50.492049 systemd[1]: session-19.scope: Deactivated successfully.
Feb 13 15:43:50.493580 systemd-logind[1455]: Session 19 logged out. Waiting for processes to exit.
Feb 13 15:43:50.496175 systemd-logind[1455]: Removed session 19.
Feb 13 15:43:50.664966 systemd[1]: Started sshd@37-78.46.147.231:22-139.178.89.65:48126.service - OpenSSH per-connection server daemon (139.178.89.65:48126).
Feb 13 15:43:51.651060 sshd[4497]: Accepted publickey for core from 139.178.89.65 port 48126 ssh2: RSA SHA256:gOK0CXwz1tCyASFR4SaHsklqtVIpFWLnYpEIAs0TKRk
Feb 13 15:43:51.653879 sshd-session[4497]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:43:51.660827 systemd-logind[1455]: New session 20 of user core.
Feb 13 15:43:51.667721 systemd[1]: Started session-20.scope - Session 20 of User core.
Feb 13 15:43:54.298704 containerd[1479]: time="2025-02-13T15:43:54.298652933Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Feb 13 15:43:54.312048 containerd[1479]: time="2025-02-13T15:43:54.312009223Z" level=info msg="StopContainer for \"ef079e16be47a6b42ea1e6b221e89f766489148b61261a5b698138df714fe23c\" with timeout 2 (s)"
Feb 13 15:43:54.312744 containerd[1479]: time="2025-02-13T15:43:54.312655706Z" level=info msg="Stop container \"ef079e16be47a6b42ea1e6b221e89f766489148b61261a5b698138df714fe23c\" with signal terminated"
Feb 13 15:43:54.320914 containerd[1479]: time="2025-02-13T15:43:54.320867777Z" level=info msg="StopContainer for \"bd079279c0fb99ac880b59915127e79a8cf2de3b6bbb662722872f5a6e17b56b\" with timeout 30 (s)"
Feb 13 15:43:54.323114 containerd[1479]: time="2025-02-13T15:43:54.321791340Z" level=info msg="Stop container \"bd079279c0fb99ac880b59915127e79a8cf2de3b6bbb662722872f5a6e17b56b\" with signal terminated"
Feb 13 15:43:54.333568 systemd-networkd[1369]: lxc_health: Link DOWN
Feb 13 15:43:54.333575 systemd-networkd[1369]: lxc_health: Lost carrier
Feb 13 15:43:54.365153 systemd[1]: cri-containerd-ef079e16be47a6b42ea1e6b221e89f766489148b61261a5b698138df714fe23c.scope: Deactivated successfully.
Feb 13 15:43:54.365430 systemd[1]: cri-containerd-ef079e16be47a6b42ea1e6b221e89f766489148b61261a5b698138df714fe23c.scope: Consumed 8.607s CPU time.
Feb 13 15:43:54.396607 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ef079e16be47a6b42ea1e6b221e89f766489148b61261a5b698138df714fe23c-rootfs.mount: Deactivated successfully.
Feb 13 15:43:54.405180 systemd[1]: cri-containerd-bd079279c0fb99ac880b59915127e79a8cf2de3b6bbb662722872f5a6e17b56b.scope: Deactivated successfully.
Feb 13 15:43:54.411512 containerd[1479]: time="2025-02-13T15:43:54.410995475Z" level=info msg="shim disconnected" id=ef079e16be47a6b42ea1e6b221e89f766489148b61261a5b698138df714fe23c namespace=k8s.io
Feb 13 15:43:54.411512 containerd[1479]: time="2025-02-13T15:43:54.411055195Z" level=warning msg="cleaning up after shim disconnected" id=ef079e16be47a6b42ea1e6b221e89f766489148b61261a5b698138df714fe23c namespace=k8s.io
Feb 13 15:43:54.411512 containerd[1479]: time="2025-02-13T15:43:54.411063235Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 15:43:54.444604 containerd[1479]: time="2025-02-13T15:43:54.444547321Z" level=info msg="StopContainer for \"ef079e16be47a6b42ea1e6b221e89f766489148b61261a5b698138df714fe23c\" returns successfully"
Feb 13 15:43:54.445927 containerd[1479]: time="2025-02-13T15:43:54.445738085Z" level=info msg="StopPodSandbox for \"71d8afcc944e4b6ad39d4c544b860689270cd436dc1d3fb6a253fe9c4dee6207\""
Feb 13 15:43:54.445927 containerd[1479]: time="2025-02-13T15:43:54.445824606Z" level=info msg="Container to stop \"f6ec2dab3571ccf4dcbfbf199398050b24ceed9c5f091f7d4fa2d808edf8f035\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 13 15:43:54.445927 containerd[1479]: time="2025-02-13T15:43:54.445845486Z" level=info msg="Container to stop \"934f1008aa5fdec67dcdfa633631ae54a6c6bba839bf497e2ce75db95fbda8a4\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 13 15:43:54.445927 containerd[1479]: time="2025-02-13T15:43:54.445859366Z" level=info msg="Container to stop \"ef0f5a8b355bf13a9b470e5868597b62509d45b58fea027865ed68679d1eea51\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 13 15:43:54.445927 containerd[1479]: time="2025-02-13T15:43:54.445872966Z" level=info msg="Container to stop \"5e663235b90f6897c94af72c7613a163784a058cba0d3ff2649c7ac512c8c03c\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 13 15:43:54.445927 containerd[1479]: time="2025-02-13T15:43:54.445885966Z" level=info msg="Container to stop \"ef079e16be47a6b42ea1e6b221e89f766489148b61261a5b698138df714fe23c\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 13 15:43:54.449278 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-71d8afcc944e4b6ad39d4c544b860689270cd436dc1d3fb6a253fe9c4dee6207-shm.mount: Deactivated successfully.
Feb 13 15:43:54.458972 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bd079279c0fb99ac880b59915127e79a8cf2de3b6bbb662722872f5a6e17b56b-rootfs.mount: Deactivated successfully.
Feb 13 15:43:54.463107 systemd[1]: cri-containerd-71d8afcc944e4b6ad39d4c544b860689270cd436dc1d3fb6a253fe9c4dee6207.scope: Deactivated successfully.
Feb 13 15:43:54.466834 containerd[1479]: time="2025-02-13T15:43:54.466298403Z" level=info msg="shim disconnected" id=bd079279c0fb99ac880b59915127e79a8cf2de3b6bbb662722872f5a6e17b56b namespace=k8s.io
Feb 13 15:43:54.466834 containerd[1479]: time="2025-02-13T15:43:54.466628364Z" level=warning msg="cleaning up after shim disconnected" id=bd079279c0fb99ac880b59915127e79a8cf2de3b6bbb662722872f5a6e17b56b namespace=k8s.io
Feb 13 15:43:54.466834 containerd[1479]: time="2025-02-13T15:43:54.466641484Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 15:43:54.484351 containerd[1479]: time="2025-02-13T15:43:54.484279030Z" level=info msg="StopContainer for \"bd079279c0fb99ac880b59915127e79a8cf2de3b6bbb662722872f5a6e17b56b\" returns successfully"
Feb 13 15:43:54.486429 containerd[1479]: time="2025-02-13T15:43:54.486393598Z" level=info msg="StopPodSandbox for \"070cc58a5be8c29c0cc688ff06da315696fa97f0574891472dd048d7fa0e1c2f\""
Feb 13 15:43:54.486679 containerd[1479]: time="2025-02-13T15:43:54.486644199Z" level=info msg="Container to stop \"bd079279c0fb99ac880b59915127e79a8cf2de3b6bbb662722872f5a6e17b56b\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 13 15:43:54.489206 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-070cc58a5be8c29c0cc688ff06da315696fa97f0574891472dd048d7fa0e1c2f-shm.mount: Deactivated successfully.
Feb 13 15:43:54.502169 systemd[1]: cri-containerd-070cc58a5be8c29c0cc688ff06da315696fa97f0574891472dd048d7fa0e1c2f.scope: Deactivated successfully.
Feb 13 15:43:54.517525 containerd[1479]: time="2025-02-13T15:43:54.517430715Z" level=info msg="shim disconnected" id=71d8afcc944e4b6ad39d4c544b860689270cd436dc1d3fb6a253fe9c4dee6207 namespace=k8s.io
Feb 13 15:43:54.517525 containerd[1479]: time="2025-02-13T15:43:54.517519115Z" level=warning msg="cleaning up after shim disconnected" id=71d8afcc944e4b6ad39d4c544b860689270cd436dc1d3fb6a253fe9c4dee6207 namespace=k8s.io
Feb 13 15:43:54.517525 containerd[1479]: time="2025-02-13T15:43:54.517533835Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 15:43:54.540349 containerd[1479]: time="2025-02-13T15:43:54.540277960Z" level=info msg="TearDown network for sandbox \"71d8afcc944e4b6ad39d4c544b860689270cd436dc1d3fb6a253fe9c4dee6207\" successfully"
Feb 13 15:43:54.540349 containerd[1479]: time="2025-02-13T15:43:54.540328400Z" level=info msg="StopPodSandbox for \"71d8afcc944e4b6ad39d4c544b860689270cd436dc1d3fb6a253fe9c4dee6207\" returns successfully"
Feb 13 15:43:54.544879 containerd[1479]: time="2025-02-13T15:43:54.543620373Z" level=info msg="shim disconnected" id=070cc58a5be8c29c0cc688ff06da315696fa97f0574891472dd048d7fa0e1c2f namespace=k8s.io
Feb 13 15:43:54.544879 containerd[1479]: time="2025-02-13T15:43:54.543681973Z" level=warning msg="cleaning up after shim disconnected" id=070cc58a5be8c29c0cc688ff06da315696fa97f0574891472dd048d7fa0e1c2f namespace=k8s.io
Feb 13 15:43:54.544879 containerd[1479]: time="2025-02-13T15:43:54.543690893Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 15:43:54.572419 containerd[1479]: time="2025-02-13T15:43:54.572212280Z" level=info msg="TearDown network for sandbox \"070cc58a5be8c29c0cc688ff06da315696fa97f0574891472dd048d7fa0e1c2f\" successfully"
Feb 13 15:43:54.574469 containerd[1479]: time="2025-02-13T15:43:54.573699126Z" level=info msg="StopPodSandbox for \"070cc58a5be8c29c0cc688ff06da315696fa97f0574891472dd048d7fa0e1c2f\" returns successfully"
Feb 13 15:43:54.672580 kubelet[2874]: I0213 15:43:54.672371 2874 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/d5811e8e-9422-48f1-9fb5-b8967311d069-hostproc\") pod \"d5811e8e-9422-48f1-9fb5-b8967311d069\" (UID: \"d5811e8e-9422-48f1-9fb5-b8967311d069\") "
Feb 13 15:43:54.672580 kubelet[2874]: I0213 15:43:54.672492 2874 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d5811e8e-9422-48f1-9fb5-b8967311d069-hostproc" (OuterVolumeSpecName: "hostproc") pod "d5811e8e-9422-48f1-9fb5-b8967311d069" (UID: "d5811e8e-9422-48f1-9fb5-b8967311d069"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 15:43:54.672580 kubelet[2874]: I0213 15:43:54.672583 2874 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/d5811e8e-9422-48f1-9fb5-b8967311d069-host-proc-sys-net\") pod \"d5811e8e-9422-48f1-9fb5-b8967311d069\" (UID: \"d5811e8e-9422-48f1-9fb5-b8967311d069\") "
Feb 13 15:43:54.675642 kubelet[2874]: I0213 15:43:54.672630 2874 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/d5811e8e-9422-48f1-9fb5-b8967311d069-cilium-run\") pod \"d5811e8e-9422-48f1-9fb5-b8967311d069\" (UID: \"d5811e8e-9422-48f1-9fb5-b8967311d069\") "
Feb 13 15:43:54.675642 kubelet[2874]: I0213 15:43:54.672656 2874 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d5811e8e-9422-48f1-9fb5-b8967311d069-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "d5811e8e-9422-48f1-9fb5-b8967311d069" (UID: "d5811e8e-9422-48f1-9fb5-b8967311d069"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 15:43:54.675642 kubelet[2874]: I0213 15:43:54.672668 2874 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d5811e8e-9422-48f1-9fb5-b8967311d069-lib-modules\") pod \"d5811e8e-9422-48f1-9fb5-b8967311d069\" (UID: \"d5811e8e-9422-48f1-9fb5-b8967311d069\") "
Feb 13 15:43:54.675642 kubelet[2874]: I0213 15:43:54.672786 2874 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/d5811e8e-9422-48f1-9fb5-b8967311d069-bpf-maps\") pod \"d5811e8e-9422-48f1-9fb5-b8967311d069\" (UID: \"d5811e8e-9422-48f1-9fb5-b8967311d069\") "
Feb 13 15:43:54.675642 kubelet[2874]: I0213 15:43:54.672834 2874 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d5811e8e-9422-48f1-9fb5-b8967311d069-cilium-config-path\") pod \"d5811e8e-9422-48f1-9fb5-b8967311d069\" (UID: \"d5811e8e-9422-48f1-9fb5-b8967311d069\") "
Feb 13 15:43:54.675642 kubelet[2874]: I0213 15:43:54.672871 2874 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/d5811e8e-9422-48f1-9fb5-b8967311d069-cni-path\") pod \"d5811e8e-9422-48f1-9fb5-b8967311d069\" (UID: \"d5811e8e-9422-48f1-9fb5-b8967311d069\") "
Feb 13 15:43:54.676000 kubelet[2874]: I0213 15:43:54.672900 2874 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/d5811e8e-9422-48f1-9fb5-b8967311d069-host-proc-sys-kernel\") pod \"d5811e8e-9422-48f1-9fb5-b8967311d069\" (UID: \"d5811e8e-9422-48f1-9fb5-b8967311d069\") "
Feb 13 15:43:54.676000 kubelet[2874]: I0213 15:43:54.672941 2874 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d5811e8e-9422-48f1-9fb5-b8967311d069-xtables-lock\") pod \"d5811e8e-9422-48f1-9fb5-b8967311d069\" (UID: \"d5811e8e-9422-48f1-9fb5-b8967311d069\") "
Feb 13 15:43:54.676000 kubelet[2874]: I0213 15:43:54.672983 2874 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/d5811e8e-9422-48f1-9fb5-b8967311d069-clustermesh-secrets\") pod \"d5811e8e-9422-48f1-9fb5-b8967311d069\" (UID: \"d5811e8e-9422-48f1-9fb5-b8967311d069\") "
Feb 13 15:43:54.676000 kubelet[2874]: I0213 15:43:54.673020 2874 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-98jbh\" (UniqueName: \"kubernetes.io/projected/5c424093-db8f-4e53-8928-ed9369b8ba7f-kube-api-access-98jbh\") pod \"5c424093-db8f-4e53-8928-ed9369b8ba7f\" (UID: \"5c424093-db8f-4e53-8928-ed9369b8ba7f\") "
Feb 13 15:43:54.676000 kubelet[2874]: I0213 15:43:54.673098 2874 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5c424093-db8f-4e53-8928-ed9369b8ba7f-cilium-config-path\") pod \"5c424093-db8f-4e53-8928-ed9369b8ba7f\" (UID: \"5c424093-db8f-4e53-8928-ed9369b8ba7f\") "
Feb 13 15:43:54.676000 kubelet[2874]: I0213 15:43:54.673138 2874 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d5811e8e-9422-48f1-9fb5-b8967311d069-etc-cni-netd\") pod \"d5811e8e-9422-48f1-9fb5-b8967311d069\" (UID: \"d5811e8e-9422-48f1-9fb5-b8967311d069\") "
Feb 13 15:43:54.676179 kubelet[2874]: I0213 15:43:54.673178 2874 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jrbxc\" (UniqueName: \"kubernetes.io/projected/d5811e8e-9422-48f1-9fb5-b8967311d069-kube-api-access-jrbxc\") pod \"d5811e8e-9422-48f1-9fb5-b8967311d069\" (UID: \"d5811e8e-9422-48f1-9fb5-b8967311d069\") "
Feb 13 15:43:54.676179 kubelet[2874]: I0213 15:43:54.673215 2874 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/d5811e8e-9422-48f1-9fb5-b8967311d069-cilium-cgroup\") pod \"d5811e8e-9422-48f1-9fb5-b8967311d069\" (UID: \"d5811e8e-9422-48f1-9fb5-b8967311d069\") "
Feb 13 15:43:54.676179 kubelet[2874]: I0213 15:43:54.673256 2874 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/d5811e8e-9422-48f1-9fb5-b8967311d069-hubble-tls\") pod \"d5811e8e-9422-48f1-9fb5-b8967311d069\" (UID: \"d5811e8e-9422-48f1-9fb5-b8967311d069\") "
Feb 13 15:43:54.676179 kubelet[2874]: I0213 15:43:54.673425 2874 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/d5811e8e-9422-48f1-9fb5-b8967311d069-host-proc-sys-net\") on node \"ci-4152-2-1-1-287b7b51cc\" DevicePath \"\""
Feb 13 15:43:54.676179 kubelet[2874]: I0213 15:43:54.673481 2874 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/d5811e8e-9422-48f1-9fb5-b8967311d069-hostproc\") on node \"ci-4152-2-1-1-287b7b51cc\" DevicePath \"\""
Feb 13 15:43:54.676179 kubelet[2874]: I0213 15:43:54.672693 2874 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d5811e8e-9422-48f1-9fb5-b8967311d069-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "d5811e8e-9422-48f1-9fb5-b8967311d069" (UID: "d5811e8e-9422-48f1-9fb5-b8967311d069"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 15:43:54.676357 kubelet[2874]: I0213 15:43:54.672718 2874 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d5811e8e-9422-48f1-9fb5-b8967311d069-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "d5811e8e-9422-48f1-9fb5-b8967311d069" (UID: "d5811e8e-9422-48f1-9fb5-b8967311d069"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 15:43:54.676357 kubelet[2874]: I0213 15:43:54.673541 2874 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d5811e8e-9422-48f1-9fb5-b8967311d069-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "d5811e8e-9422-48f1-9fb5-b8967311d069" (UID: "d5811e8e-9422-48f1-9fb5-b8967311d069"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 15:43:54.676357 kubelet[2874]: I0213 15:43:54.673641 2874 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d5811e8e-9422-48f1-9fb5-b8967311d069-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "d5811e8e-9422-48f1-9fb5-b8967311d069" (UID: "d5811e8e-9422-48f1-9fb5-b8967311d069"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 15:43:54.678489 kubelet[2874]: I0213 15:43:54.678421 2874 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d5811e8e-9422-48f1-9fb5-b8967311d069-cni-path" (OuterVolumeSpecName: "cni-path") pod "d5811e8e-9422-48f1-9fb5-b8967311d069" (UID: "d5811e8e-9422-48f1-9fb5-b8967311d069"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 15:43:54.678698 kubelet[2874]: I0213 15:43:54.678679 2874 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d5811e8e-9422-48f1-9fb5-b8967311d069-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "d5811e8e-9422-48f1-9fb5-b8967311d069" (UID: "d5811e8e-9422-48f1-9fb5-b8967311d069"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 15:43:54.679954 kubelet[2874]: I0213 15:43:54.679910 2874 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d5811e8e-9422-48f1-9fb5-b8967311d069-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "d5811e8e-9422-48f1-9fb5-b8967311d069" (UID: "d5811e8e-9422-48f1-9fb5-b8967311d069"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 15:43:54.682992 kubelet[2874]: I0213 15:43:54.682942 2874 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d5811e8e-9422-48f1-9fb5-b8967311d069-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "d5811e8e-9422-48f1-9fb5-b8967311d069" (UID: "d5811e8e-9422-48f1-9fb5-b8967311d069"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 13 15:43:54.683118 kubelet[2874]: I0213 15:43:54.683041 2874 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d5811e8e-9422-48f1-9fb5-b8967311d069-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "d5811e8e-9422-48f1-9fb5-b8967311d069" (UID: "d5811e8e-9422-48f1-9fb5-b8967311d069"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 15:43:54.683190 kubelet[2874]: I0213 15:43:54.683162 2874 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d5811e8e-9422-48f1-9fb5-b8967311d069-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "d5811e8e-9422-48f1-9fb5-b8967311d069" (UID: "d5811e8e-9422-48f1-9fb5-b8967311d069"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 13 15:43:54.683979 kubelet[2874]: I0213 15:43:54.683937 2874 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d5811e8e-9422-48f1-9fb5-b8967311d069-kube-api-access-jrbxc" (OuterVolumeSpecName: "kube-api-access-jrbxc") pod "d5811e8e-9422-48f1-9fb5-b8967311d069" (UID: "d5811e8e-9422-48f1-9fb5-b8967311d069"). InnerVolumeSpecName "kube-api-access-jrbxc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 13 15:43:54.684115 kubelet[2874]: I0213 15:43:54.684091 2874 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5c424093-db8f-4e53-8928-ed9369b8ba7f-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "5c424093-db8f-4e53-8928-ed9369b8ba7f" (UID: "5c424093-db8f-4e53-8928-ed9369b8ba7f"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 13 15:43:54.684226 kubelet[2874]: I0213 15:43:54.684200 2874 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d5811e8e-9422-48f1-9fb5-b8967311d069-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "d5811e8e-9422-48f1-9fb5-b8967311d069" (UID: "d5811e8e-9422-48f1-9fb5-b8967311d069"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 13 15:43:54.685465 kubelet[2874]: I0213 15:43:54.685398 2874 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5c424093-db8f-4e53-8928-ed9369b8ba7f-kube-api-access-98jbh" (OuterVolumeSpecName: "kube-api-access-98jbh") pod "5c424093-db8f-4e53-8928-ed9369b8ba7f" (UID: "5c424093-db8f-4e53-8928-ed9369b8ba7f"). InnerVolumeSpecName "kube-api-access-98jbh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 13 15:43:54.773801 kubelet[2874]: I0213 15:43:54.773702 2874 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/d5811e8e-9422-48f1-9fb5-b8967311d069-clustermesh-secrets\") on node \"ci-4152-2-1-1-287b7b51cc\" DevicePath \"\""
Feb 13 15:43:54.773801 kubelet[2874]: I0213 15:43:54.773758 2874 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/d5811e8e-9422-48f1-9fb5-b8967311d069-host-proc-sys-kernel\") on node \"ci-4152-2-1-1-287b7b51cc\" DevicePath \"\""
Feb 13 15:43:54.773801 kubelet[2874]: I0213 15:43:54.773784 2874 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d5811e8e-9422-48f1-9fb5-b8967311d069-xtables-lock\") on node \"ci-4152-2-1-1-287b7b51cc\" DevicePath \"\""
Feb 13 15:43:54.773801 kubelet[2874]: I0213 15:43:54.773804 2874 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-98jbh\" (UniqueName: \"kubernetes.io/projected/5c424093-db8f-4e53-8928-ed9369b8ba7f-kube-api-access-98jbh\") on node \"ci-4152-2-1-1-287b7b51cc\" DevicePath \"\""
Feb 13 15:43:54.774330 kubelet[2874]: I0213 15:43:54.773822 2874 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5c424093-db8f-4e53-8928-ed9369b8ba7f-cilium-config-path\") on node \"ci-4152-2-1-1-287b7b51cc\" DevicePath \"\""
Feb 13 15:43:54.774330 kubelet[2874]: I0213 15:43:54.773840 2874 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d5811e8e-9422-48f1-9fb5-b8967311d069-etc-cni-netd\") on node \"ci-4152-2-1-1-287b7b51cc\" DevicePath \"\""
Feb 13 15:43:54.774330 kubelet[2874]: I0213 15:43:54.773859 2874 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-jrbxc\" (UniqueName: \"kubernetes.io/projected/d5811e8e-9422-48f1-9fb5-b8967311d069-kube-api-access-jrbxc\") on node \"ci-4152-2-1-1-287b7b51cc\" DevicePath \"\""
Feb 13 15:43:54.774330 kubelet[2874]: I0213 15:43:54.773877 2874 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/d5811e8e-9422-48f1-9fb5-b8967311d069-hubble-tls\") on node \"ci-4152-2-1-1-287b7b51cc\" DevicePath \"\""
Feb 13 15:43:54.774330 kubelet[2874]: I0213 15:43:54.773894 2874 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/d5811e8e-9422-48f1-9fb5-b8967311d069-cilium-cgroup\") on node \"ci-4152-2-1-1-287b7b51cc\" DevicePath \"\""
Feb 13 15:43:54.774330 kubelet[2874]: I0213 15:43:54.773912 2874 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/d5811e8e-9422-48f1-9fb5-b8967311d069-cilium-run\") on node \"ci-4152-2-1-1-287b7b51cc\" DevicePath \"\""
Feb 13 15:43:54.774330 kubelet[2874]: I0213 15:43:54.773928 2874 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d5811e8e-9422-48f1-9fb5-b8967311d069-lib-modules\") on node \"ci-4152-2-1-1-287b7b51cc\" DevicePath \"\""
Feb 13 15:43:54.774330 kubelet[2874]: I0213 15:43:54.773944 2874 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/d5811e8e-9422-48f1-9fb5-b8967311d069-cni-path\") on node \"ci-4152-2-1-1-287b7b51cc\" DevicePath \"\""
Feb 13 15:43:54.774842 kubelet[2874]: I0213 15:43:54.773960 2874 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/d5811e8e-9422-48f1-9fb5-b8967311d069-bpf-maps\") on node \"ci-4152-2-1-1-287b7b51cc\" DevicePath \"\""
Feb 13 15:43:54.774842 kubelet[2874]: I0213 15:43:54.773976 2874 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d5811e8e-9422-48f1-9fb5-b8967311d069-cilium-config-path\") on node \"ci-4152-2-1-1-287b7b51cc\" DevicePath \"\""
Feb 13 15:43:55.191471 kubelet[2874]: I0213 15:43:55.189960 2874 scope.go:117] "RemoveContainer" containerID="ef079e16be47a6b42ea1e6b221e89f766489148b61261a5b698138df714fe23c"
Feb 13 15:43:55.193821 containerd[1479]: time="2025-02-13T15:43:55.193720572Z" level=info msg="RemoveContainer for \"ef079e16be47a6b42ea1e6b221e89f766489148b61261a5b698138df714fe23c\""
Feb 13 15:43:55.203044 containerd[1479]: time="2025-02-13T15:43:55.202997527Z" level=info msg="RemoveContainer for \"ef079e16be47a6b42ea1e6b221e89f766489148b61261a5b698138df714fe23c\" returns successfully"
Feb 13 15:43:55.206477 systemd[1]: Removed slice kubepods-burstable-podd5811e8e_9422_48f1_9fb5_b8967311d069.slice - libcontainer container kubepods-burstable-podd5811e8e_9422_48f1_9fb5_b8967311d069.slice.
Feb 13 15:43:55.206590 systemd[1]: kubepods-burstable-podd5811e8e_9422_48f1_9fb5_b8967311d069.slice: Consumed 8.706s CPU time.
Feb 13 15:43:55.207926 kubelet[2874]: I0213 15:43:55.207377 2874 scope.go:117] "RemoveContainer" containerID="5e663235b90f6897c94af72c7613a163784a058cba0d3ff2649c7ac512c8c03c"
Feb 13 15:43:55.211961 systemd[1]: Removed slice kubepods-besteffort-pod5c424093_db8f_4e53_8928_ed9369b8ba7f.slice - libcontainer container kubepods-besteffort-pod5c424093_db8f_4e53_8928_ed9369b8ba7f.slice.
Feb 13 15:43:55.214488 containerd[1479]: time="2025-02-13T15:43:55.214146809Z" level=info msg="RemoveContainer for \"5e663235b90f6897c94af72c7613a163784a058cba0d3ff2649c7ac512c8c03c\""
Feb 13 15:43:55.218906 containerd[1479]: time="2025-02-13T15:43:55.218844907Z" level=info msg="RemoveContainer for \"5e663235b90f6897c94af72c7613a163784a058cba0d3ff2649c7ac512c8c03c\" returns successfully"
Feb 13 15:43:55.219345 kubelet[2874]: I0213 15:43:55.219302 2874 scope.go:117] "RemoveContainer" containerID="f6ec2dab3571ccf4dcbfbf199398050b24ceed9c5f091f7d4fa2d808edf8f035"
Feb 13 15:43:55.222227 containerd[1479]: time="2025-02-13T15:43:55.221992838Z" level=info msg="RemoveContainer for \"f6ec2dab3571ccf4dcbfbf199398050b24ceed9c5f091f7d4fa2d808edf8f035\""
Feb 13 15:43:55.225209 containerd[1479]: time="2025-02-13T15:43:55.225164690Z" level=info msg="RemoveContainer for \"f6ec2dab3571ccf4dcbfbf199398050b24ceed9c5f091f7d4fa2d808edf8f035\" returns successfully"
Feb 13 15:43:55.225537 kubelet[2874]: I0213 15:43:55.225506 2874 scope.go:117] "RemoveContainer" containerID="ef0f5a8b355bf13a9b470e5868597b62509d45b58fea027865ed68679d1eea51"
Feb 13 15:43:55.226987 containerd[1479]: time="2025-02-13T15:43:55.226956177Z" level=info msg="RemoveContainer for \"ef0f5a8b355bf13a9b470e5868597b62509d45b58fea027865ed68679d1eea51\""
Feb 13 15:43:55.230550 containerd[1479]: time="2025-02-13T15:43:55.230392590Z" level=info msg="RemoveContainer for \"ef0f5a8b355bf13a9b470e5868597b62509d45b58fea027865ed68679d1eea51\" returns successfully"
Feb 13 15:43:55.231013 kubelet[2874]: I0213 15:43:55.230756 2874 scope.go:117] "RemoveContainer" containerID="934f1008aa5fdec67dcdfa633631ae54a6c6bba839bf497e2ce75db95fbda8a4"
Feb 13 15:43:55.233826 containerd[1479]: time="2025-02-13T15:43:55.233794203Z" level=info msg="RemoveContainer for \"934f1008aa5fdec67dcdfa633631ae54a6c6bba839bf497e2ce75db95fbda8a4\""
Feb 13 15:43:55.239467 containerd[1479]: time="2025-02-13T15:43:55.239385504Z" level=info msg="RemoveContainer for \"934f1008aa5fdec67dcdfa633631ae54a6c6bba839bf497e2ce75db95fbda8a4\" returns successfully"
Feb 13 15:43:55.240563 kubelet[2874]: I0213 15:43:55.240055 2874 scope.go:117] "RemoveContainer" containerID="ef079e16be47a6b42ea1e6b221e89f766489148b61261a5b698138df714fe23c"
Feb 13 15:43:55.240876 containerd[1479]: time="2025-02-13T15:43:55.240822349Z" level=error msg="ContainerStatus for \"ef079e16be47a6b42ea1e6b221e89f766489148b61261a5b698138df714fe23c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ef079e16be47a6b42ea1e6b221e89f766489148b61261a5b698138df714fe23c\": not found"
Feb 13 15:43:55.243044 kubelet[2874]: E0213 15:43:55.242414 2874 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ef079e16be47a6b42ea1e6b221e89f766489148b61261a5b698138df714fe23c\": not found" containerID="ef079e16be47a6b42ea1e6b221e89f766489148b61261a5b698138df714fe23c"
Feb 13 15:43:55.243044 kubelet[2874]: I0213 15:43:55.242494 2874 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ef079e16be47a6b42ea1e6b221e89f766489148b61261a5b698138df714fe23c"} err="failed to get container status \"ef079e16be47a6b42ea1e6b221e89f766489148b61261a5b698138df714fe23c\": rpc error: code = NotFound desc = an error occurred when try to find container \"ef079e16be47a6b42ea1e6b221e89f766489148b61261a5b698138df714fe23c\": not found"
Feb 13 15:43:55.243044 kubelet[2874]: I0213 15:43:55.242574 2874 scope.go:117] "RemoveContainer" containerID="5e663235b90f6897c94af72c7613a163784a058cba0d3ff2649c7ac512c8c03c"
Feb 13 15:43:55.244357 containerd[1479]: time="2025-02-13T15:43:55.244289282Z" level=error msg="ContainerStatus for \"5e663235b90f6897c94af72c7613a163784a058cba0d3ff2649c7ac512c8c03c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"5e663235b90f6897c94af72c7613a163784a058cba0d3ff2649c7ac512c8c03c\": not found"
Feb 13 15:43:55.245667 kubelet[2874]: E0213 15:43:55.244810 2874 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"5e663235b90f6897c94af72c7613a163784a058cba0d3ff2649c7ac512c8c03c\": not found" containerID="5e663235b90f6897c94af72c7613a163784a058cba0d3ff2649c7ac512c8c03c"
Feb 13 15:43:55.245667 kubelet[2874]: I0213 15:43:55.244850 2874 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"5e663235b90f6897c94af72c7613a163784a058cba0d3ff2649c7ac512c8c03c"} err="failed to get container status \"5e663235b90f6897c94af72c7613a163784a058cba0d3ff2649c7ac512c8c03c\": rpc error: code = NotFound desc = an error occurred when try to find container \"5e663235b90f6897c94af72c7613a163784a058cba0d3ff2649c7ac512c8c03c\": not found"
Feb 13 15:43:55.245667 kubelet[2874]: I0213 15:43:55.244872 2874 scope.go:117] "RemoveContainer" containerID="f6ec2dab3571ccf4dcbfbf199398050b24ceed9c5f091f7d4fa2d808edf8f035"
Feb 13 15:43:55.245829 containerd[1479]: time="2025-02-13T15:43:55.245189845Z" level=error msg="ContainerStatus for \"f6ec2dab3571ccf4dcbfbf199398050b24ceed9c5f091f7d4fa2d808edf8f035\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f6ec2dab3571ccf4dcbfbf199398050b24ceed9c5f091f7d4fa2d808edf8f035\": not found"
Feb 13 15:43:55.246059 kubelet[2874]: E0213 15:43:55.245941 2874 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f6ec2dab3571ccf4dcbfbf199398050b24ceed9c5f091f7d4fa2d808edf8f035\": not found" containerID="f6ec2dab3571ccf4dcbfbf199398050b24ceed9c5f091f7d4fa2d808edf8f035"
Feb 13 15:43:55.246059 kubelet[2874]: I0213 15:43:55.245967 2874 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f6ec2dab3571ccf4dcbfbf199398050b24ceed9c5f091f7d4fa2d808edf8f035"} err="failed to get container status \"f6ec2dab3571ccf4dcbfbf199398050b24ceed9c5f091f7d4fa2d808edf8f035\": rpc error: code = NotFound desc = an error occurred when try to find container \"f6ec2dab3571ccf4dcbfbf199398050b24ceed9c5f091f7d4fa2d808edf8f035\": not found"
Feb 13 15:43:55.246059 kubelet[2874]: I0213 15:43:55.245985 2874 scope.go:117] "RemoveContainer" containerID="ef0f5a8b355bf13a9b470e5868597b62509d45b58fea027865ed68679d1eea51"
Feb 13 15:43:55.246531 containerd[1479]: time="2025-02-13T15:43:55.246483690Z" level=error msg="ContainerStatus for \"ef0f5a8b355bf13a9b470e5868597b62509d45b58fea027865ed68679d1eea51\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ef0f5a8b355bf13a9b470e5868597b62509d45b58fea027865ed68679d1eea51\": not found"
Feb 13 15:43:55.247210 containerd[1479]: time="2025-02-13T15:43:55.246843772Z" level=error msg="ContainerStatus for \"934f1008aa5fdec67dcdfa633631ae54a6c6bba839bf497e2ce75db95fbda8a4\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"934f1008aa5fdec67dcdfa633631ae54a6c6bba839bf497e2ce75db95fbda8a4\": not found"
Feb 13 15:43:55.247265 kubelet[2874]: E0213 15:43:55.246623 2874 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ef0f5a8b355bf13a9b470e5868597b62509d45b58fea027865ed68679d1eea51\": not found" containerID="ef0f5a8b355bf13a9b470e5868597b62509d45b58fea027865ed68679d1eea51"
Feb 13 15:43:55.247265 kubelet[2874]: I0213 15:43:55.246646 2874 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ef0f5a8b355bf13a9b470e5868597b62509d45b58fea027865ed68679d1eea51"} err="failed to get container status \"ef0f5a8b355bf13a9b470e5868597b62509d45b58fea027865ed68679d1eea51\": rpc error: code = NotFound desc = an error occurred when try to find container \"ef0f5a8b355bf13a9b470e5868597b62509d45b58fea027865ed68679d1eea51\": not found"
Feb 13 15:43:55.247265 kubelet[2874]: I0213 15:43:55.246666 2874 scope.go:117] "RemoveContainer" containerID="934f1008aa5fdec67dcdfa633631ae54a6c6bba839bf497e2ce75db95fbda8a4"
Feb 13 15:43:55.247626 kubelet[2874]: E0213 15:43:55.247527 2874 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"934f1008aa5fdec67dcdfa633631ae54a6c6bba839bf497e2ce75db95fbda8a4\": not found" containerID="934f1008aa5fdec67dcdfa633631ae54a6c6bba839bf497e2ce75db95fbda8a4"
Feb 13 15:43:55.247626 kubelet[2874]: I0213 15:43:55.247571 2874 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"934f1008aa5fdec67dcdfa633631ae54a6c6bba839bf497e2ce75db95fbda8a4"} err="failed to get container status \"934f1008aa5fdec67dcdfa633631ae54a6c6bba839bf497e2ce75db95fbda8a4\": rpc error: code = NotFound desc = an error occurred when try to find container \"934f1008aa5fdec67dcdfa633631ae54a6c6bba839bf497e2ce75db95fbda8a4\": not found"
Feb 13 15:43:55.247626 kubelet[2874]: I0213 15:43:55.247587 2874 scope.go:117] "RemoveContainer" containerID="bd079279c0fb99ac880b59915127e79a8cf2de3b6bbb662722872f5a6e17b56b"
Feb 13 15:43:55.249319 containerd[1479]: time="2025-02-13T15:43:55.249283141Z" level=info msg="RemoveContainer for \"bd079279c0fb99ac880b59915127e79a8cf2de3b6bbb662722872f5a6e17b56b\""
Feb 13 15:43:55.254456 containerd[1479]: time="2025-02-13T15:43:55.252888554Z" level=info msg="RemoveContainer for \"bd079279c0fb99ac880b59915127e79a8cf2de3b6bbb662722872f5a6e17b56b\" returns successfully"
Feb 13 15:43:55.254586 kubelet[2874]: I0213 15:43:55.253122 2874 scope.go:117] "RemoveContainer" containerID="bd079279c0fb99ac880b59915127e79a8cf2de3b6bbb662722872f5a6e17b56b"
Feb 13 15:43:55.254930 containerd[1479]: time="2025-02-13T15:43:55.254807521Z" level=error msg="ContainerStatus for \"bd079279c0fb99ac880b59915127e79a8cf2de3b6bbb662722872f5a6e17b56b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"bd079279c0fb99ac880b59915127e79a8cf2de3b6bbb662722872f5a6e17b56b\": not found"
Feb 13 15:43:55.255171 kubelet[2874]: E0213 15:43:55.255110 2874 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"bd079279c0fb99ac880b59915127e79a8cf2de3b6bbb662722872f5a6e17b56b\": not found" containerID="bd079279c0fb99ac880b59915127e79a8cf2de3b6bbb662722872f5a6e17b56b"
Feb 13 15:43:55.255171 kubelet[2874]: I0213 15:43:55.255149 2874 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"bd079279c0fb99ac880b59915127e79a8cf2de3b6bbb662722872f5a6e17b56b"} err="failed to get container status \"bd079279c0fb99ac880b59915127e79a8cf2de3b6bbb662722872f5a6e17b56b\": rpc error: code = NotFound desc = an error occurred when try to find container \"bd079279c0fb99ac880b59915127e79a8cf2de3b6bbb662722872f5a6e17b56b\": not found"
Feb 13 15:43:55.280602 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-070cc58a5be8c29c0cc688ff06da315696fa97f0574891472dd048d7fa0e1c2f-rootfs.mount: Deactivated successfully.
Feb 13 15:43:55.280712 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-71d8afcc944e4b6ad39d4c544b860689270cd436dc1d3fb6a253fe9c4dee6207-rootfs.mount: Deactivated successfully.
Feb 13 15:43:55.280774 systemd[1]: var-lib-kubelet-pods-5c424093\x2ddb8f\x2d4e53\x2d8928\x2ded9369b8ba7f-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d98jbh.mount: Deactivated successfully.
Feb 13 15:43:55.280843 systemd[1]: var-lib-kubelet-pods-d5811e8e\x2d9422\x2d48f1\x2d9fb5\x2db8967311d069-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2djrbxc.mount: Deactivated successfully.
Feb 13 15:43:55.280895 systemd[1]: var-lib-kubelet-pods-d5811e8e\x2d9422\x2d48f1\x2d9fb5\x2db8967311d069-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Feb 13 15:43:55.280944 systemd[1]: var-lib-kubelet-pods-d5811e8e\x2d9422\x2d48f1\x2d9fb5\x2db8967311d069-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Feb 13 15:43:56.170829 kubelet[2874]: I0213 15:43:56.169937 2874 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5c424093-db8f-4e53-8928-ed9369b8ba7f" path="/var/lib/kubelet/pods/5c424093-db8f-4e53-8928-ed9369b8ba7f/volumes" Feb 13 15:43:56.170829 kubelet[2874]: I0213 15:43:56.170352 2874 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d5811e8e-9422-48f1-9fb5-b8967311d069" path="/var/lib/kubelet/pods/d5811e8e-9422-48f1-9fb5-b8967311d069/volumes" Feb 13 15:43:56.361258 sshd[4499]: Connection closed by 139.178.89.65 port 48126 Feb 13 15:43:56.362367 sshd-session[4497]: pam_unix(sshd:session): session closed for user core Feb 13 15:43:56.366790 systemd[1]: sshd@37-78.46.147.231:22-139.178.89.65:48126.service: Deactivated successfully. Feb 13 15:43:56.372305 systemd[1]: session-20.scope: Deactivated successfully. Feb 13 15:43:56.372977 systemd[1]: session-20.scope: Consumed 1.453s CPU time. Feb 13 15:43:56.375983 systemd-logind[1455]: Session 20 logged out. Waiting for processes to exit. Feb 13 15:43:56.377938 systemd-logind[1455]: Removed session 20. Feb 13 15:43:56.536790 systemd[1]: Started sshd@38-78.46.147.231:22-139.178.89.65:53496.service - OpenSSH per-connection server daemon (139.178.89.65:53496). 
Feb 13 15:43:57.382300 kubelet[2874]: E0213 15:43:57.382226 2874 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 15:43:57.520694 sshd[4663]: Accepted publickey for core from 139.178.89.65 port 53496 ssh2: RSA SHA256:gOK0CXwz1tCyASFR4SaHsklqtVIpFWLnYpEIAs0TKRk Feb 13 15:43:57.523267 sshd-session[4663]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:43:57.529365 systemd-logind[1455]: New session 21 of user core. Feb 13 15:43:57.534772 systemd[1]: Started session-21.scope - Session 21 of User core. Feb 13 15:43:58.464219 kubelet[2874]: I0213 15:43:58.464149 2874 setters.go:580] "Node became not ready" node="ci-4152-2-1-1-287b7b51cc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-02-13T15:43:58Z","lastTransitionTime":"2025-02-13T15:43:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Feb 13 15:43:59.184476 kubelet[2874]: I0213 15:43:59.181479 2874 topology_manager.go:215] "Topology Admit Handler" podUID="c7861e84-dbe6-4f73-9033-2f622dac1ee7" podNamespace="kube-system" podName="cilium-srq8f" Feb 13 15:43:59.184476 kubelet[2874]: E0213 15:43:59.181565 2874 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="d5811e8e-9422-48f1-9fb5-b8967311d069" containerName="mount-cgroup" Feb 13 15:43:59.184476 kubelet[2874]: E0213 15:43:59.181575 2874 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="d5811e8e-9422-48f1-9fb5-b8967311d069" containerName="mount-bpf-fs" Feb 13 15:43:59.184476 kubelet[2874]: E0213 15:43:59.181584 2874 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="d5811e8e-9422-48f1-9fb5-b8967311d069" containerName="clean-cilium-state" Feb 13 15:43:59.184476 kubelet[2874]: E0213 
15:43:59.181590 2874 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="d5811e8e-9422-48f1-9fb5-b8967311d069" containerName="apply-sysctl-overwrites" Feb 13 15:43:59.184476 kubelet[2874]: E0213 15:43:59.181596 2874 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="d5811e8e-9422-48f1-9fb5-b8967311d069" containerName="cilium-agent" Feb 13 15:43:59.184476 kubelet[2874]: E0213 15:43:59.181603 2874 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="5c424093-db8f-4e53-8928-ed9369b8ba7f" containerName="cilium-operator" Feb 13 15:43:59.184476 kubelet[2874]: I0213 15:43:59.181628 2874 memory_manager.go:354] "RemoveStaleState removing state" podUID="d5811e8e-9422-48f1-9fb5-b8967311d069" containerName="cilium-agent" Feb 13 15:43:59.184476 kubelet[2874]: I0213 15:43:59.181634 2874 memory_manager.go:354] "RemoveStaleState removing state" podUID="5c424093-db8f-4e53-8928-ed9369b8ba7f" containerName="cilium-operator" Feb 13 15:43:59.191500 systemd[1]: Created slice kubepods-burstable-podc7861e84_dbe6_4f73_9033_2f622dac1ee7.slice - libcontainer container kubepods-burstable-podc7861e84_dbe6_4f73_9033_2f622dac1ee7.slice. 
Feb 13 15:43:59.310610 kubelet[2874]: I0213 15:43:59.310313 2874 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c7861e84-dbe6-4f73-9033-2f622dac1ee7-hubble-tls\") pod \"cilium-srq8f\" (UID: \"c7861e84-dbe6-4f73-9033-2f622dac1ee7\") " pod="kube-system/cilium-srq8f" Feb 13 15:43:59.310610 kubelet[2874]: I0213 15:43:59.310429 2874 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c7861e84-dbe6-4f73-9033-2f622dac1ee7-cni-path\") pod \"cilium-srq8f\" (UID: \"c7861e84-dbe6-4f73-9033-2f622dac1ee7\") " pod="kube-system/cilium-srq8f" Feb 13 15:43:59.310610 kubelet[2874]: I0213 15:43:59.310489 2874 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c7861e84-dbe6-4f73-9033-2f622dac1ee7-hostproc\") pod \"cilium-srq8f\" (UID: \"c7861e84-dbe6-4f73-9033-2f622dac1ee7\") " pod="kube-system/cilium-srq8f" Feb 13 15:43:59.310610 kubelet[2874]: I0213 15:43:59.310515 2874 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c7861e84-dbe6-4f73-9033-2f622dac1ee7-cilium-run\") pod \"cilium-srq8f\" (UID: \"c7861e84-dbe6-4f73-9033-2f622dac1ee7\") " pod="kube-system/cilium-srq8f" Feb 13 15:43:59.310610 kubelet[2874]: I0213 15:43:59.310543 2874 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c7861e84-dbe6-4f73-9033-2f622dac1ee7-etc-cni-netd\") pod \"cilium-srq8f\" (UID: \"c7861e84-dbe6-4f73-9033-2f622dac1ee7\") " pod="kube-system/cilium-srq8f" Feb 13 15:43:59.310610 kubelet[2874]: I0213 15:43:59.310571 2874 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c7861e84-dbe6-4f73-9033-2f622dac1ee7-clustermesh-secrets\") pod \"cilium-srq8f\" (UID: \"c7861e84-dbe6-4f73-9033-2f622dac1ee7\") " pod="kube-system/cilium-srq8f" Feb 13 15:43:59.311072 kubelet[2874]: I0213 15:43:59.310641 2874 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c7861e84-dbe6-4f73-9033-2f622dac1ee7-cilium-config-path\") pod \"cilium-srq8f\" (UID: \"c7861e84-dbe6-4f73-9033-2f622dac1ee7\") " pod="kube-system/cilium-srq8f" Feb 13 15:43:59.311072 kubelet[2874]: I0213 15:43:59.310718 2874 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c7861e84-dbe6-4f73-9033-2f622dac1ee7-cilium-cgroup\") pod \"cilium-srq8f\" (UID: \"c7861e84-dbe6-4f73-9033-2f622dac1ee7\") " pod="kube-system/cilium-srq8f" Feb 13 15:43:59.311072 kubelet[2874]: I0213 15:43:59.310761 2874 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c7861e84-dbe6-4f73-9033-2f622dac1ee7-host-proc-sys-net\") pod \"cilium-srq8f\" (UID: \"c7861e84-dbe6-4f73-9033-2f622dac1ee7\") " pod="kube-system/cilium-srq8f" Feb 13 15:43:59.311072 kubelet[2874]: I0213 15:43:59.310783 2874 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c7861e84-dbe6-4f73-9033-2f622dac1ee7-host-proc-sys-kernel\") pod \"cilium-srq8f\" (UID: \"c7861e84-dbe6-4f73-9033-2f622dac1ee7\") " pod="kube-system/cilium-srq8f" Feb 13 15:43:59.311072 kubelet[2874]: I0213 15:43:59.310834 2874 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mr5dc\" (UniqueName: 
\"kubernetes.io/projected/c7861e84-dbe6-4f73-9033-2f622dac1ee7-kube-api-access-mr5dc\") pod \"cilium-srq8f\" (UID: \"c7861e84-dbe6-4f73-9033-2f622dac1ee7\") " pod="kube-system/cilium-srq8f" Feb 13 15:43:59.311265 kubelet[2874]: I0213 15:43:59.310866 2874 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c7861e84-dbe6-4f73-9033-2f622dac1ee7-lib-modules\") pod \"cilium-srq8f\" (UID: \"c7861e84-dbe6-4f73-9033-2f622dac1ee7\") " pod="kube-system/cilium-srq8f" Feb 13 15:43:59.311265 kubelet[2874]: I0213 15:43:59.310915 2874 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c7861e84-dbe6-4f73-9033-2f622dac1ee7-xtables-lock\") pod \"cilium-srq8f\" (UID: \"c7861e84-dbe6-4f73-9033-2f622dac1ee7\") " pod="kube-system/cilium-srq8f" Feb 13 15:43:59.311265 kubelet[2874]: I0213 15:43:59.310936 2874 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c7861e84-dbe6-4f73-9033-2f622dac1ee7-bpf-maps\") pod \"cilium-srq8f\" (UID: \"c7861e84-dbe6-4f73-9033-2f622dac1ee7\") " pod="kube-system/cilium-srq8f" Feb 13 15:43:59.311265 kubelet[2874]: I0213 15:43:59.310963 2874 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/c7861e84-dbe6-4f73-9033-2f622dac1ee7-cilium-ipsec-secrets\") pod \"cilium-srq8f\" (UID: \"c7861e84-dbe6-4f73-9033-2f622dac1ee7\") " pod="kube-system/cilium-srq8f" Feb 13 15:43:59.322681 sshd[4667]: Connection closed by 139.178.89.65 port 53496 Feb 13 15:43:59.327553 sshd-session[4663]: pam_unix(sshd:session): session closed for user core Feb 13 15:43:59.335336 systemd[1]: sshd@38-78.46.147.231:22-139.178.89.65:53496.service: Deactivated successfully. 
Feb 13 15:43:59.338294 systemd[1]: session-21.scope: Deactivated successfully. Feb 13 15:43:59.339759 systemd[1]: session-21.scope: Consumed 1.006s CPU time. Feb 13 15:43:59.341012 systemd-logind[1455]: Session 21 logged out. Waiting for processes to exit. Feb 13 15:43:59.343188 systemd-logind[1455]: Removed session 21. Feb 13 15:43:59.496575 containerd[1479]: time="2025-02-13T15:43:59.496428433Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-srq8f,Uid:c7861e84-dbe6-4f73-9033-2f622dac1ee7,Namespace:kube-system,Attempt:0,}" Feb 13 15:43:59.502489 systemd[1]: Started sshd@39-78.46.147.231:22-139.178.89.65:53508.service - OpenSSH per-connection server daemon (139.178.89.65:53508). Feb 13 15:43:59.541809 containerd[1479]: time="2025-02-13T15:43:59.541691482Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:43:59.542207 containerd[1479]: time="2025-02-13T15:43:59.542154124Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:43:59.542412 containerd[1479]: time="2025-02-13T15:43:59.542282405Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:43:59.542665 containerd[1479]: time="2025-02-13T15:43:59.542590166Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:43:59.561694 systemd[1]: Started cri-containerd-0a623b0c2405b57a33f056a6d03358b346ccd6a03fc1b0af4c3a453bb5db6540.scope - libcontainer container 0a623b0c2405b57a33f056a6d03358b346ccd6a03fc1b0af4c3a453bb5db6540. 
Feb 13 15:43:59.589771 containerd[1479]: time="2025-02-13T15:43:59.589643301Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-srq8f,Uid:c7861e84-dbe6-4f73-9033-2f622dac1ee7,Namespace:kube-system,Attempt:0,} returns sandbox id \"0a623b0c2405b57a33f056a6d03358b346ccd6a03fc1b0af4c3a453bb5db6540\"" Feb 13 15:43:59.594837 containerd[1479]: time="2025-02-13T15:43:59.594697680Z" level=info msg="CreateContainer within sandbox \"0a623b0c2405b57a33f056a6d03358b346ccd6a03fc1b0af4c3a453bb5db6540\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 13 15:43:59.608666 containerd[1479]: time="2025-02-13T15:43:59.608607532Z" level=info msg="CreateContainer within sandbox \"0a623b0c2405b57a33f056a6d03358b346ccd6a03fc1b0af4c3a453bb5db6540\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"37c45bbf13cb534d7d751199ce2c4c205fa16c85105aef0931563ce029b378ea\"" Feb 13 15:43:59.610528 containerd[1479]: time="2025-02-13T15:43:59.609640576Z" level=info msg="StartContainer for \"37c45bbf13cb534d7d751199ce2c4c205fa16c85105aef0931563ce029b378ea\"" Feb 13 15:43:59.639956 systemd[1]: Started cri-containerd-37c45bbf13cb534d7d751199ce2c4c205fa16c85105aef0931563ce029b378ea.scope - libcontainer container 37c45bbf13cb534d7d751199ce2c4c205fa16c85105aef0931563ce029b378ea. Feb 13 15:43:59.671061 containerd[1479]: time="2025-02-13T15:43:59.671003005Z" level=info msg="StartContainer for \"37c45bbf13cb534d7d751199ce2c4c205fa16c85105aef0931563ce029b378ea\" returns successfully" Feb 13 15:43:59.683285 systemd[1]: cri-containerd-37c45bbf13cb534d7d751199ce2c4c205fa16c85105aef0931563ce029b378ea.scope: Deactivated successfully. 
Feb 13 15:43:59.725910 containerd[1479]: time="2025-02-13T15:43:59.724648405Z" level=info msg="shim disconnected" id=37c45bbf13cb534d7d751199ce2c4c205fa16c85105aef0931563ce029b378ea namespace=k8s.io Feb 13 15:43:59.725910 containerd[1479]: time="2025-02-13T15:43:59.725788770Z" level=warning msg="cleaning up after shim disconnected" id=37c45bbf13cb534d7d751199ce2c4c205fa16c85105aef0931563ce029b378ea namespace=k8s.io Feb 13 15:43:59.726840 containerd[1479]: time="2025-02-13T15:43:59.725834690Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 15:44:00.216431 containerd[1479]: time="2025-02-13T15:44:00.216265520Z" level=info msg="CreateContainer within sandbox \"0a623b0c2405b57a33f056a6d03358b346ccd6a03fc1b0af4c3a453bb5db6540\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Feb 13 15:44:00.238755 containerd[1479]: time="2025-02-13T15:44:00.237589120Z" level=info msg="CreateContainer within sandbox \"0a623b0c2405b57a33f056a6d03358b346ccd6a03fc1b0af4c3a453bb5db6540\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"02b677f77141fbd23f0da8494dc80a5dd85eeb463cea44f25d87003918181993\"" Feb 13 15:44:00.241129 containerd[1479]: time="2025-02-13T15:44:00.240981812Z" level=info msg="StartContainer for \"02b677f77141fbd23f0da8494dc80a5dd85eeb463cea44f25d87003918181993\"" Feb 13 15:44:00.271150 systemd[1]: Started cri-containerd-02b677f77141fbd23f0da8494dc80a5dd85eeb463cea44f25d87003918181993.scope - libcontainer container 02b677f77141fbd23f0da8494dc80a5dd85eeb463cea44f25d87003918181993. Feb 13 15:44:00.307988 containerd[1479]: time="2025-02-13T15:44:00.307924502Z" level=info msg="StartContainer for \"02b677f77141fbd23f0da8494dc80a5dd85eeb463cea44f25d87003918181993\" returns successfully" Feb 13 15:44:00.322298 systemd[1]: cri-containerd-02b677f77141fbd23f0da8494dc80a5dd85eeb463cea44f25d87003918181993.scope: Deactivated successfully. 
Feb 13 15:44:00.356804 containerd[1479]: time="2025-02-13T15:44:00.356699044Z" level=info msg="shim disconnected" id=02b677f77141fbd23f0da8494dc80a5dd85eeb463cea44f25d87003918181993 namespace=k8s.io Feb 13 15:44:00.357075 containerd[1479]: time="2025-02-13T15:44:00.356825644Z" level=warning msg="cleaning up after shim disconnected" id=02b677f77141fbd23f0da8494dc80a5dd85eeb463cea44f25d87003918181993 namespace=k8s.io Feb 13 15:44:00.357075 containerd[1479]: time="2025-02-13T15:44:00.356843004Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 15:44:00.504270 sshd[4680]: Accepted publickey for core from 139.178.89.65 port 53508 ssh2: RSA SHA256:gOK0CXwz1tCyASFR4SaHsklqtVIpFWLnYpEIAs0TKRk Feb 13 15:44:00.507090 sshd-session[4680]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:44:00.513600 systemd-logind[1455]: New session 22 of user core. Feb 13 15:44:00.517700 systemd[1]: Started session-22.scope - Session 22 of User core. Feb 13 15:44:01.187064 sshd[4846]: Connection closed by 139.178.89.65 port 53508 Feb 13 15:44:01.186965 sshd-session[4680]: pam_unix(sshd:session): session closed for user core Feb 13 15:44:01.194630 systemd[1]: sshd@39-78.46.147.231:22-139.178.89.65:53508.service: Deactivated successfully. Feb 13 15:44:01.201602 systemd[1]: session-22.scope: Deactivated successfully. Feb 13 15:44:01.202942 systemd-logind[1455]: Session 22 logged out. Waiting for processes to exit. Feb 13 15:44:01.205583 systemd-logind[1455]: Removed session 22. 
Feb 13 15:44:01.225403 containerd[1479]: time="2025-02-13T15:44:01.224932281Z" level=info msg="CreateContainer within sandbox \"0a623b0c2405b57a33f056a6d03358b346ccd6a03fc1b0af4c3a453bb5db6540\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Feb 13 15:44:01.252425 containerd[1479]: time="2025-02-13T15:44:01.251810541Z" level=info msg="CreateContainer within sandbox \"0a623b0c2405b57a33f056a6d03358b346ccd6a03fc1b0af4c3a453bb5db6540\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"652a9d324446560330093f3b3c81af5a44b3b435a05660593e479d59dc1cb4fa\"" Feb 13 15:44:01.253184 containerd[1479]: time="2025-02-13T15:44:01.253090266Z" level=info msg="StartContainer for \"652a9d324446560330093f3b3c81af5a44b3b435a05660593e479d59dc1cb4fa\"" Feb 13 15:44:01.288820 systemd[1]: Started cri-containerd-652a9d324446560330093f3b3c81af5a44b3b435a05660593e479d59dc1cb4fa.scope - libcontainer container 652a9d324446560330093f3b3c81af5a44b3b435a05660593e479d59dc1cb4fa. Feb 13 15:44:01.328294 containerd[1479]: time="2025-02-13T15:44:01.328140866Z" level=info msg="StartContainer for \"652a9d324446560330093f3b3c81af5a44b3b435a05660593e479d59dc1cb4fa\" returns successfully" Feb 13 15:44:01.329122 systemd[1]: cri-containerd-652a9d324446560330093f3b3c81af5a44b3b435a05660593e479d59dc1cb4fa.scope: Deactivated successfully. Feb 13 15:44:01.365419 systemd[1]: Started sshd@40-78.46.147.231:22-139.178.89.65:53516.service - OpenSSH per-connection server daemon (139.178.89.65:53516). 
Feb 13 15:44:01.390472 containerd[1479]: time="2025-02-13T15:44:01.389612455Z" level=info msg="shim disconnected" id=652a9d324446560330093f3b3c81af5a44b3b435a05660593e479d59dc1cb4fa namespace=k8s.io Feb 13 15:44:01.390472 containerd[1479]: time="2025-02-13T15:44:01.389691895Z" level=warning msg="cleaning up after shim disconnected" id=652a9d324446560330093f3b3c81af5a44b3b435a05660593e479d59dc1cb4fa namespace=k8s.io Feb 13 15:44:01.390472 containerd[1479]: time="2025-02-13T15:44:01.389700655Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 15:44:01.421247 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-652a9d324446560330093f3b3c81af5a44b3b435a05660593e479d59dc1cb4fa-rootfs.mount: Deactivated successfully. Feb 13 15:44:02.231018 containerd[1479]: time="2025-02-13T15:44:02.230751868Z" level=info msg="CreateContainer within sandbox \"0a623b0c2405b57a33f056a6d03358b346ccd6a03fc1b0af4c3a453bb5db6540\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Feb 13 15:44:02.256420 containerd[1479]: time="2025-02-13T15:44:02.256355243Z" level=info msg="CreateContainer within sandbox \"0a623b0c2405b57a33f056a6d03358b346ccd6a03fc1b0af4c3a453bb5db6540\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"36e520ef35b1a3c6288a488b6663868fafbaec0a7a2ce5cca3e94663367f25c2\"" Feb 13 15:44:02.259715 containerd[1479]: time="2025-02-13T15:44:02.258302810Z" level=info msg="StartContainer for \"36e520ef35b1a3c6288a488b6663868fafbaec0a7a2ce5cca3e94663367f25c2\"" Feb 13 15:44:02.295898 systemd[1]: Started cri-containerd-36e520ef35b1a3c6288a488b6663868fafbaec0a7a2ce5cca3e94663367f25c2.scope - libcontainer container 36e520ef35b1a3c6288a488b6663868fafbaec0a7a2ce5cca3e94663367f25c2. Feb 13 15:44:02.332851 systemd[1]: cri-containerd-36e520ef35b1a3c6288a488b6663868fafbaec0a7a2ce5cca3e94663367f25c2.scope: Deactivated successfully. 
Feb 13 15:44:02.334578 containerd[1479]: time="2025-02-13T15:44:02.334385213Z" level=info msg="StartContainer for \"36e520ef35b1a3c6288a488b6663868fafbaec0a7a2ce5cca3e94663367f25c2\" returns successfully" Feb 13 15:44:02.363107 containerd[1479]: time="2025-02-13T15:44:02.363027720Z" level=info msg="shim disconnected" id=36e520ef35b1a3c6288a488b6663868fafbaec0a7a2ce5cca3e94663367f25c2 namespace=k8s.io Feb 13 15:44:02.363695 containerd[1479]: time="2025-02-13T15:44:02.363364401Z" level=warning msg="cleaning up after shim disconnected" id=36e520ef35b1a3c6288a488b6663868fafbaec0a7a2ce5cca3e94663367f25c2 namespace=k8s.io Feb 13 15:44:02.363695 containerd[1479]: time="2025-02-13T15:44:02.363384041Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 15:44:02.379826 sshd[4898]: Accepted publickey for core from 139.178.89.65 port 53516 ssh2: RSA SHA256:gOK0CXwz1tCyASFR4SaHsklqtVIpFWLnYpEIAs0TKRk Feb 13 15:44:02.381689 sshd-session[4898]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:44:02.384004 kubelet[2874]: E0213 15:44:02.383601 2874 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 15:44:02.387428 systemd-logind[1455]: New session 23 of user core. Feb 13 15:44:02.392671 systemd[1]: Started session-23.scope - Session 23 of User core. Feb 13 15:44:02.422366 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-36e520ef35b1a3c6288a488b6663868fafbaec0a7a2ce5cca3e94663367f25c2-rootfs.mount: Deactivated successfully. Feb 13 15:44:02.497611 systemd[1]: sshd@21-78.46.147.231:22-183.63.103.84:7949.service: Deactivated successfully. 
Feb 13 15:44:03.238984 containerd[1479]: time="2025-02-13T15:44:03.238936699Z" level=info msg="CreateContainer within sandbox \"0a623b0c2405b57a33f056a6d03358b346ccd6a03fc1b0af4c3a453bb5db6540\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Feb 13 15:44:03.264525 containerd[1479]: time="2025-02-13T15:44:03.264053313Z" level=info msg="CreateContainer within sandbox \"0a623b0c2405b57a33f056a6d03358b346ccd6a03fc1b0af4c3a453bb5db6540\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"04df31dc763cc146ffde751f509663200261f10d1055f950a3f886acd376b227\"" Feb 13 15:44:03.266481 containerd[1479]: time="2025-02-13T15:44:03.265679319Z" level=info msg="StartContainer for \"04df31dc763cc146ffde751f509663200261f10d1055f950a3f886acd376b227\"" Feb 13 15:44:03.296838 systemd[1]: Started cri-containerd-04df31dc763cc146ffde751f509663200261f10d1055f950a3f886acd376b227.scope - libcontainer container 04df31dc763cc146ffde751f509663200261f10d1055f950a3f886acd376b227. 
Feb 13 15:44:03.333692 containerd[1479]: time="2025-02-13T15:44:03.333606291Z" level=info msg="StartContainer for \"04df31dc763cc146ffde751f509663200261f10d1055f950a3f886acd376b227\" returns successfully" Feb 13 15:44:03.678552 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce)) Feb 13 15:44:04.260565 kubelet[2874]: I0213 15:44:04.260482 2874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-srq8f" podStartSLOduration=5.260464777 podStartE2EDuration="5.260464777s" podCreationTimestamp="2025-02-13 15:43:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:44:04.260201736 +0000 UTC m=+352.251145080" watchObservedRunningTime="2025-02-13 15:44:04.260464777 +0000 UTC m=+352.251408121" Feb 13 15:44:06.916883 systemd-networkd[1369]: lxc_health: Link UP Feb 13 15:44:06.937508 systemd-networkd[1369]: lxc_health: Gained carrier Feb 13 15:44:07.286740 systemd[1]: run-containerd-runc-k8s.io-04df31dc763cc146ffde751f509663200261f10d1055f950a3f886acd376b227-runc.X2ArRP.mount: Deactivated successfully. 
Feb 13 15:44:08.178457 systemd-networkd[1369]: lxc_health: Gained IPv6LL Feb 13 15:44:12.193218 containerd[1479]: time="2025-02-13T15:44:12.193132186Z" level=info msg="StopPodSandbox for \"070cc58a5be8c29c0cc688ff06da315696fa97f0574891472dd048d7fa0e1c2f\"" Feb 13 15:44:12.193721 containerd[1479]: time="2025-02-13T15:44:12.193377509Z" level=info msg="TearDown network for sandbox \"070cc58a5be8c29c0cc688ff06da315696fa97f0574891472dd048d7fa0e1c2f\" successfully" Feb 13 15:44:12.193721 containerd[1479]: time="2025-02-13T15:44:12.193413869Z" level=info msg="StopPodSandbox for \"070cc58a5be8c29c0cc688ff06da315696fa97f0574891472dd048d7fa0e1c2f\" returns successfully" Feb 13 15:44:12.195646 containerd[1479]: time="2025-02-13T15:44:12.195429495Z" level=info msg="RemovePodSandbox for \"070cc58a5be8c29c0cc688ff06da315696fa97f0574891472dd048d7fa0e1c2f\"" Feb 13 15:44:12.195646 containerd[1479]: time="2025-02-13T15:44:12.195649498Z" level=info msg="Forcibly stopping sandbox \"070cc58a5be8c29c0cc688ff06da315696fa97f0574891472dd048d7fa0e1c2f\"" Feb 13 15:44:12.195806 containerd[1479]: time="2025-02-13T15:44:12.195738699Z" level=info msg="TearDown network for sandbox \"070cc58a5be8c29c0cc688ff06da315696fa97f0574891472dd048d7fa0e1c2f\" successfully" Feb 13 15:44:12.199769 containerd[1479]: time="2025-02-13T15:44:12.199693470Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"070cc58a5be8c29c0cc688ff06da315696fa97f0574891472dd048d7fa0e1c2f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:44:12.199954 containerd[1479]: time="2025-02-13T15:44:12.199787511Z" level=info msg="RemovePodSandbox \"070cc58a5be8c29c0cc688ff06da315696fa97f0574891472dd048d7fa0e1c2f\" returns successfully" Feb 13 15:44:12.200517 containerd[1479]: time="2025-02-13T15:44:12.200480640Z" level=info msg="StopPodSandbox for \"71d8afcc944e4b6ad39d4c544b860689270cd436dc1d3fb6a253fe9c4dee6207\"" Feb 13 15:44:12.200890 containerd[1479]: time="2025-02-13T15:44:12.200588881Z" level=info msg="TearDown network for sandbox \"71d8afcc944e4b6ad39d4c544b860689270cd436dc1d3fb6a253fe9c4dee6207\" successfully" Feb 13 15:44:12.200890 containerd[1479]: time="2025-02-13T15:44:12.200601722Z" level=info msg="StopPodSandbox for \"71d8afcc944e4b6ad39d4c544b860689270cd436dc1d3fb6a253fe9c4dee6207\" returns successfully" Feb 13 15:44:12.201041 containerd[1479]: time="2025-02-13T15:44:12.201009767Z" level=info msg="RemovePodSandbox for \"71d8afcc944e4b6ad39d4c544b860689270cd436dc1d3fb6a253fe9c4dee6207\"" Feb 13 15:44:12.201041 containerd[1479]: time="2025-02-13T15:44:12.201038847Z" level=info msg="Forcibly stopping sandbox \"71d8afcc944e4b6ad39d4c544b860689270cd436dc1d3fb6a253fe9c4dee6207\"" Feb 13 15:44:12.201102 containerd[1479]: time="2025-02-13T15:44:12.201085648Z" level=info msg="TearDown network for sandbox \"71d8afcc944e4b6ad39d4c544b860689270cd436dc1d3fb6a253fe9c4dee6207\" successfully" Feb 13 15:44:12.205111 containerd[1479]: time="2025-02-13T15:44:12.205005778Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"71d8afcc944e4b6ad39d4c544b860689270cd436dc1d3fb6a253fe9c4dee6207\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:44:12.205329 containerd[1479]: time="2025-02-13T15:44:12.205114699Z" level=info msg="RemovePodSandbox \"71d8afcc944e4b6ad39d4c544b860689270cd436dc1d3fb6a253fe9c4dee6207\" returns successfully" Feb 13 15:44:13.803854 systemd[1]: run-containerd-runc-k8s.io-04df31dc763cc146ffde751f509663200261f10d1055f950a3f886acd376b227-runc.P3Ujzk.mount: Deactivated successfully. Feb 13 15:44:14.043596 sshd[4967]: Connection closed by 139.178.89.65 port 53516 Feb 13 15:44:14.044544 sshd-session[4898]: pam_unix(sshd:session): session closed for user core Feb 13 15:44:14.049865 systemd[1]: sshd@40-78.46.147.231:22-139.178.89.65:53516.service: Deactivated successfully. Feb 13 15:44:14.052671 systemd[1]: session-23.scope: Deactivated successfully. Feb 13 15:44:14.054346 systemd-logind[1455]: Session 23 logged out. Waiting for processes to exit. Feb 13 15:44:14.056065 systemd-logind[1455]: Removed session 23. Feb 13 15:44:18.735819 systemd[1]: Started sshd@41-78.46.147.231:22-101.126.78.108:45082.service - OpenSSH per-connection server daemon (101.126.78.108:45082). Feb 13 15:44:19.317172 systemd[1]: Started sshd@42-78.46.147.231:22-183.63.103.84:44361.service - OpenSSH per-connection server daemon (183.63.103.84:44361). Feb 13 15:44:21.661787 sshd[5644]: Invalid user gitlab-runner from 183.63.103.84 port 44361 Feb 13 15:44:21.919852 sshd[5644]: Received disconnect from 183.63.103.84 port 44361:11: Bye Bye [preauth] Feb 13 15:44:21.921188 sshd[5644]: Disconnected from invalid user gitlab-runner 183.63.103.84 port 44361 [preauth] Feb 13 15:44:21.922356 systemd[1]: sshd@42-78.46.147.231:22-183.63.103.84:44361.service: Deactivated successfully. 
Feb 13 15:44:25.608526 sshd[5641]: Invalid user eversec from 101.126.78.108 port 45082 Feb 13 15:44:25.856834 sshd[5641]: Received disconnect from 101.126.78.108 port 45082:11: Bye Bye [preauth] Feb 13 15:44:25.857124 sshd[5641]: Disconnected from invalid user eversec 101.126.78.108 port 45082 [preauth] Feb 13 15:44:25.860857 systemd[1]: sshd@41-78.46.147.231:22-101.126.78.108:45082.service: Deactivated successfully.