Feb 13 15:34:48.926289 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Feb 13 15:34:48.926316 kernel: Linux version 6.6.71-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241116 p3) 14.2.1 20241116, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT Thu Feb 13 14:02:42 -00 2025
Feb 13 15:34:48.926327 kernel: KASLR enabled
Feb 13 15:34:48.926333 kernel: efi: EFI v2.7 by Ubuntu distribution of EDK II
Feb 13 15:34:48.926339 kernel: efi: SMBIOS 3.0=0x139ed0000 MEMATTR=0x1390b8118 ACPI 2.0=0x136760018 RNG=0x13676e918 MEMRESERVE=0x136b41218
Feb 13 15:34:48.926344 kernel: random: crng init done
Feb 13 15:34:48.926351 kernel: secureboot: Secure boot disabled
Feb 13 15:34:48.926357 kernel: ACPI: Early table checksum verification disabled
Feb 13 15:34:48.926363 kernel: ACPI: RSDP 0x0000000136760018 000024 (v02 BOCHS )
Feb 13 15:34:48.926370 kernel: ACPI: XSDT 0x000000013676FE98 00006C (v01 BOCHS BXPC 00000001 01000013)
Feb 13 15:34:48.926376 kernel: ACPI: FACP 0x000000013676FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 15:34:48.926382 kernel: ACPI: DSDT 0x0000000136767518 001468 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 15:34:48.926388 kernel: ACPI: APIC 0x000000013676FC18 000108 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 15:34:48.926394 kernel: ACPI: PPTT 0x000000013676FD98 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 15:34:48.926401 kernel: ACPI: GTDT 0x000000013676D898 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 15:34:48.926409 kernel: ACPI: MCFG 0x000000013676FF98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 15:34:48.926416 kernel: ACPI: SPCR 0x000000013676E818 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 15:34:48.926422 kernel: ACPI: DBG2 0x000000013676E898 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 15:34:48.926428 kernel: ACPI: IORT 0x000000013676E418 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 15:34:48.926434 kernel: ACPI: BGRT 0x000000013676E798 000038 (v01 INTEL EDK2 00000002 01000013)
Feb 13 15:34:48.926458 kernel: ACPI: SPCR: console: pl011,mmio32,0x9000000,9600
Feb 13 15:34:48.926468 kernel: NUMA: Failed to initialise from firmware
Feb 13 15:34:48.926476 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x0000000139ffffff]
Feb 13 15:34:48.926483 kernel: NUMA: NODE_DATA [mem 0x13966f800-0x139674fff]
Feb 13 15:34:48.926490 kernel: Zone ranges:
Feb 13 15:34:48.926498 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff]
Feb 13 15:34:48.926504 kernel: DMA32 empty
Feb 13 15:34:48.926511 kernel: Normal [mem 0x0000000100000000-0x0000000139ffffff]
Feb 13 15:34:48.926517 kernel: Movable zone start for each node
Feb 13 15:34:48.926523 kernel: Early memory node ranges
Feb 13 15:34:48.926529 kernel: node 0: [mem 0x0000000040000000-0x000000013666ffff]
Feb 13 15:34:48.926536 kernel: node 0: [mem 0x0000000136670000-0x000000013667ffff]
Feb 13 15:34:48.926542 kernel: node 0: [mem 0x0000000136680000-0x000000013676ffff]
Feb 13 15:34:48.926548 kernel: node 0: [mem 0x0000000136770000-0x0000000136b3ffff]
Feb 13 15:34:48.926554 kernel: node 0: [mem 0x0000000136b40000-0x0000000139e1ffff]
Feb 13 15:34:48.926560 kernel: node 0: [mem 0x0000000139e20000-0x0000000139eaffff]
Feb 13 15:34:48.926566 kernel: node 0: [mem 0x0000000139eb0000-0x0000000139ebffff]
Feb 13 15:34:48.926574 kernel: node 0: [mem 0x0000000139ec0000-0x0000000139fdffff]
Feb 13 15:34:48.926580 kernel: node 0: [mem 0x0000000139fe0000-0x0000000139ffffff]
Feb 13 15:34:48.926586 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x0000000139ffffff]
Feb 13 15:34:48.927495 kernel: On node 0, zone Normal: 24576 pages in unavailable ranges
Feb 13 15:34:48.927512 kernel: psci: probing for conduit method from ACPI.
Feb 13 15:34:48.927519 kernel: psci: PSCIv1.1 detected in firmware.
Feb 13 15:34:48.927530 kernel: psci: Using standard PSCI v0.2 function IDs
Feb 13 15:34:48.927537 kernel: psci: Trusted OS migration not required
Feb 13 15:34:48.927543 kernel: psci: SMC Calling Convention v1.1
Feb 13 15:34:48.927550 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Feb 13 15:34:48.927557 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Feb 13 15:34:48.927564 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Feb 13 15:34:48.927570 kernel: pcpu-alloc: [0] 0 [0] 1
Feb 13 15:34:48.927577 kernel: Detected PIPT I-cache on CPU0
Feb 13 15:34:48.927583 kernel: CPU features: detected: GIC system register CPU interface
Feb 13 15:34:48.927603 kernel: CPU features: detected: Hardware dirty bit management
Feb 13 15:34:48.927628 kernel: CPU features: detected: Spectre-v4
Feb 13 15:34:48.927635 kernel: CPU features: detected: Spectre-BHB
Feb 13 15:34:48.927642 kernel: CPU features: kernel page table isolation forced ON by KASLR
Feb 13 15:34:48.927648 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Feb 13 15:34:48.927655 kernel: CPU features: detected: ARM erratum 1418040
Feb 13 15:34:48.927661 kernel: CPU features: detected: SSBS not fully self-synchronizing
Feb 13 15:34:48.927668 kernel: alternatives: applying boot alternatives
Feb 13 15:34:48.927676 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=685b18f1e2a119f561f35348e788538aade62ddb9fa889a87d9e00058aaa4b5a
Feb 13 15:34:48.927683 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Feb 13 15:34:48.927690 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Feb 13 15:34:48.927696 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Feb 13 15:34:48.927705 kernel: Fallback order for Node 0: 0
Feb 13 15:34:48.927712 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1008000
Feb 13 15:34:48.927718 kernel: Policy zone: Normal
Feb 13 15:34:48.927725 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Feb 13 15:34:48.927746 kernel: software IO TLB: area num 2.
Feb 13 15:34:48.927754 kernel: software IO TLB: mapped [mem 0x00000000f6e90000-0x00000000fae90000] (64MB)
Feb 13 15:34:48.927761 kernel: Memory: 3882296K/4096000K available (10304K kernel code, 2184K rwdata, 8092K rodata, 39936K init, 897K bss, 213704K reserved, 0K cma-reserved)
Feb 13 15:34:48.927768 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Feb 13 15:34:48.927774 kernel: rcu: Preemptible hierarchical RCU implementation.
Feb 13 15:34:48.927793 kernel: rcu: RCU event tracing is enabled.
Feb 13 15:34:48.927807 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Feb 13 15:34:48.927813 kernel: Trampoline variant of Tasks RCU enabled.
Feb 13 15:34:48.927823 kernel: Tracing variant of Tasks RCU enabled.
Feb 13 15:34:48.927836 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Feb 13 15:34:48.927844 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Feb 13 15:34:48.927850 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Feb 13 15:34:48.927857 kernel: GICv3: 256 SPIs implemented
Feb 13 15:34:48.927863 kernel: GICv3: 0 Extended SPIs implemented
Feb 13 15:34:48.927877 kernel: Root IRQ handler: gic_handle_irq
Feb 13 15:34:48.927885 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Feb 13 15:34:48.927899 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Feb 13 15:34:48.927906 kernel: ITS [mem 0x08080000-0x0809ffff]
Feb 13 15:34:48.927923 kernel: ITS@0x0000000008080000: allocated 8192 Devices @1000c0000 (indirect, esz 8, psz 64K, shr 1)
Feb 13 15:34:48.927933 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @1000d0000 (flat, esz 8, psz 64K, shr 1)
Feb 13 15:34:48.927940 kernel: GICv3: using LPI property table @0x00000001000e0000
Feb 13 15:34:48.927957 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000001000f0000
Feb 13 15:34:48.927964 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Feb 13 15:34:48.927978 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 13 15:34:48.927986 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Feb 13 15:34:48.927993 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Feb 13 15:34:48.928000 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Feb 13 15:34:48.928007 kernel: Console: colour dummy device 80x25
Feb 13 15:34:48.928014 kernel: ACPI: Core revision 20230628
Feb 13 15:34:48.928021 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Feb 13 15:34:48.928030 kernel: pid_max: default: 32768 minimum: 301
Feb 13 15:34:48.928037 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Feb 13 15:34:48.928043 kernel: landlock: Up and running.
Feb 13 15:34:48.928050 kernel: SELinux: Initializing.
Feb 13 15:34:48.928068 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 13 15:34:48.928075 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 13 15:34:48.928082 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Feb 13 15:34:48.928090 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Feb 13 15:34:48.928096 kernel: rcu: Hierarchical SRCU implementation.
Feb 13 15:34:48.928106 kernel: rcu: Max phase no-delay instances is 400.
Feb 13 15:34:48.928113 kernel: Platform MSI: ITS@0x8080000 domain created
Feb 13 15:34:48.928120 kernel: PCI/MSI: ITS@0x8080000 domain created
Feb 13 15:34:48.928134 kernel: Remapping and enabling EFI services.
Feb 13 15:34:48.928142 kernel: smp: Bringing up secondary CPUs ...
Feb 13 15:34:48.928158 kernel: Detected PIPT I-cache on CPU1
Feb 13 15:34:48.928166 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Feb 13 15:34:48.928173 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000100100000
Feb 13 15:34:48.928180 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 13 15:34:48.928189 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Feb 13 15:34:48.928196 kernel: smp: Brought up 1 node, 2 CPUs
Feb 13 15:34:48.928217 kernel: SMP: Total of 2 processors activated.
Feb 13 15:34:48.928228 kernel: CPU features: detected: 32-bit EL0 Support
Feb 13 15:34:48.928235 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Feb 13 15:34:48.928242 kernel: CPU features: detected: Common not Private translations
Feb 13 15:34:48.928249 kernel: CPU features: detected: CRC32 instructions
Feb 13 15:34:48.928257 kernel: CPU features: detected: Enhanced Virtualization Traps
Feb 13 15:34:48.928266 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Feb 13 15:34:48.928277 kernel: CPU features: detected: LSE atomic instructions
Feb 13 15:34:48.928286 kernel: CPU features: detected: Privileged Access Never
Feb 13 15:34:48.928294 kernel: CPU features: detected: RAS Extension Support
Feb 13 15:34:48.928302 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Feb 13 15:34:48.928309 kernel: CPU: All CPU(s) started at EL1
Feb 13 15:34:48.928319 kernel: alternatives: applying system-wide alternatives
Feb 13 15:34:48.928326 kernel: devtmpfs: initialized
Feb 13 15:34:48.928334 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Feb 13 15:34:48.928343 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Feb 13 15:34:48.928350 kernel: pinctrl core: initialized pinctrl subsystem
Feb 13 15:34:48.928357 kernel: SMBIOS 3.0.0 present.
Feb 13 15:34:48.928364 kernel: DMI: Hetzner vServer/KVM Virtual Machine, BIOS 20171111 11/11/2017
Feb 13 15:34:48.928372 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Feb 13 15:34:48.928379 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Feb 13 15:34:48.928386 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Feb 13 15:34:48.928393 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Feb 13 15:34:48.928401 kernel: audit: initializing netlink subsys (disabled)
Feb 13 15:34:48.928409 kernel: thermal_sys: Registered thermal governor 'step_wise'
Feb 13 15:34:48.928417 kernel: cpuidle: using governor menu
Feb 13 15:34:48.928424 kernel: audit: type=2000 audit(0.011:1): state=initialized audit_enabled=0 res=1
Feb 13 15:34:48.928431 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Feb 13 15:34:48.928438 kernel: ASID allocator initialised with 32768 entries
Feb 13 15:34:48.928454 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Feb 13 15:34:48.928462 kernel: Serial: AMBA PL011 UART driver
Feb 13 15:34:48.928469 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Feb 13 15:34:48.928476 kernel: Modules: 0 pages in range for non-PLT usage
Feb 13 15:34:48.928486 kernel: Modules: 508880 pages in range for PLT usage
Feb 13 15:34:48.928493 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Feb 13 15:34:48.928500 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Feb 13 15:34:48.928507 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Feb 13 15:34:48.928514 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Feb 13 15:34:48.928521 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Feb 13 15:34:48.928528 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Feb 13 15:34:48.928536 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Feb 13 15:34:48.928543 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Feb 13 15:34:48.928551 kernel: ACPI: Added _OSI(Module Device)
Feb 13 15:34:48.928561 kernel: ACPI: Added _OSI(Processor Device)
Feb 13 15:34:48.928569 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Feb 13 15:34:48.928577 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Feb 13 15:34:48.928587 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Feb 13 15:34:48.928610 kernel: ACPI: Interpreter enabled
Feb 13 15:34:48.928617 kernel: ACPI: Using GIC for interrupt routing
Feb 13 15:34:48.928625 kernel: ACPI: MCFG table detected, 1 entries
Feb 13 15:34:48.928636 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Feb 13 15:34:48.928647 kernel: printk: console [ttyAMA0] enabled
Feb 13 15:34:48.928666 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Feb 13 15:34:48.928842 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Feb 13 15:34:48.928916 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Feb 13 15:34:48.928980 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Feb 13 15:34:48.929060 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Feb 13 15:34:48.929127 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Feb 13 15:34:48.929149 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Feb 13 15:34:48.929157 kernel: PCI host bridge to bus 0000:00
Feb 13 15:34:48.929238 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Feb 13 15:34:48.929305 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Feb 13 15:34:48.929370 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Feb 13 15:34:48.929426 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Feb 13 15:34:48.929570 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Feb 13 15:34:48.931557 kernel: pci 0000:00:01.0: [1af4:1050] type 00 class 0x038000
Feb 13 15:34:48.931913 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x11289000-0x11289fff]
Feb 13 15:34:48.932004 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000600000-0x8000603fff 64bit pref]
Feb 13 15:34:48.932154 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400
Feb 13 15:34:48.932286 kernel: pci 0000:00:02.0: reg 0x10: [mem 0x11288000-0x11288fff]
Feb 13 15:34:48.932367 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400
Feb 13 15:34:48.932460 kernel: pci 0000:00:02.1: reg 0x10: [mem 0x11287000-0x11287fff]
Feb 13 15:34:48.932539 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400
Feb 13 15:34:48.932666 kernel: pci 0000:00:02.2: reg 0x10: [mem 0x11286000-0x11286fff]
Feb 13 15:34:48.932743 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400
Feb 13 15:34:48.932807 kernel: pci 0000:00:02.3: reg 0x10: [mem 0x11285000-0x11285fff]
Feb 13 15:34:48.932878 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400
Feb 13 15:34:48.932941 kernel: pci 0000:00:02.4: reg 0x10: [mem 0x11284000-0x11284fff]
Feb 13 15:34:48.933017 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400
Feb 13 15:34:48.933081 kernel: pci 0000:00:02.5: reg 0x10: [mem 0x11283000-0x11283fff]
Feb 13 15:34:48.933171 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400
Feb 13 15:34:48.933238 kernel: pci 0000:00:02.6: reg 0x10: [mem 0x11282000-0x11282fff]
Feb 13 15:34:48.933321 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400
Feb 13 15:34:48.933391 kernel: pci 0000:00:02.7: reg 0x10: [mem 0x11281000-0x11281fff]
Feb 13 15:34:48.933567 kernel: pci 0000:00:03.0: [1b36:000c] type 01 class 0x060400
Feb 13 15:34:48.933680 kernel: pci 0000:00:03.0: reg 0x10: [mem 0x11280000-0x11280fff]
Feb 13 15:34:48.933764 kernel: pci 0000:00:04.0: [1b36:0002] type 00 class 0x070002
Feb 13 15:34:48.933831 kernel: pci 0000:00:04.0: reg 0x10: [io 0x0000-0x0007]
Feb 13 15:34:48.933928 kernel: pci 0000:01:00.0: [1af4:1041] type 00 class 0x020000
Feb 13 15:34:48.934001 kernel: pci 0000:01:00.0: reg 0x14: [mem 0x11000000-0x11000fff]
Feb 13 15:34:48.934093 kernel: pci 0000:01:00.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Feb 13 15:34:48.934162 kernel: pci 0000:01:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref]
Feb 13 15:34:48.934238 kernel: pci 0000:02:00.0: [1b36:000d] type 00 class 0x0c0330
Feb 13 15:34:48.934306 kernel: pci 0000:02:00.0: reg 0x10: [mem 0x10e00000-0x10e03fff 64bit]
Feb 13 15:34:48.934380 kernel: pci 0000:03:00.0: [1af4:1043] type 00 class 0x078000
Feb 13 15:34:48.934491 kernel: pci 0000:03:00.0: reg 0x14: [mem 0x10c00000-0x10c00fff]
Feb 13 15:34:48.934578 kernel: pci 0000:03:00.0: reg 0x20: [mem 0x8000100000-0x8000103fff 64bit pref]
Feb 13 15:34:48.935110 kernel: pci 0000:04:00.0: [1af4:1045] type 00 class 0x00ff00
Feb 13 15:34:48.935199 kernel: pci 0000:04:00.0: reg 0x20: [mem 0x8000200000-0x8000203fff 64bit pref]
Feb 13 15:34:48.935301 kernel: pci 0000:05:00.0: [1af4:1044] type 00 class 0x00ff00
Feb 13 15:34:48.935531 kernel: pci 0000:05:00.0: reg 0x14: [mem 0x10800000-0x10800fff]
Feb 13 15:34:48.935712 kernel: pci 0000:05:00.0: reg 0x20: [mem 0x8000300000-0x8000303fff 64bit pref]
Feb 13 15:34:48.935806 kernel: pci 0000:06:00.0: [1af4:1048] type 00 class 0x010000
Feb 13 15:34:48.935889 kernel: pci 0000:06:00.0: reg 0x14: [mem 0x10600000-0x10600fff]
Feb 13 15:34:48.935960 kernel: pci 0000:06:00.0: reg 0x20: [mem 0x8000400000-0x8000403fff 64bit pref]
Feb 13 15:34:48.936039 kernel: pci 0000:07:00.0: [1af4:1041] type 00 class 0x020000
Feb 13 15:34:48.936112 kernel: pci 0000:07:00.0: reg 0x14: [mem 0x10400000-0x10400fff]
Feb 13 15:34:48.936181 kernel: pci 0000:07:00.0: reg 0x20: [mem 0x8000500000-0x8000503fff 64bit pref]
Feb 13 15:34:48.936251 kernel: pci 0000:07:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref]
Feb 13 15:34:48.936328 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000
Feb 13 15:34:48.936475 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 01] add_size 100000 add_align 100000
Feb 13 15:34:48.936554 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff] to [bus 01] add_size 100000 add_align 100000
Feb 13 15:34:48.939759 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000
Feb 13 15:34:48.939850 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000
Feb 13 15:34:48.939914 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x001fffff] to [bus 02] add_size 100000 add_align 100000
Feb 13 15:34:48.939984 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000
Feb 13 15:34:48.940058 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 03] add_size 100000 add_align 100000
Feb 13 15:34:48.940121 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff] to [bus 03] add_size 100000 add_align 100000
Feb 13 15:34:48.940189 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000
Feb 13 15:34:48.940253 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 04] add_size 100000 add_align 100000
Feb 13 15:34:48.940317 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000
Feb 13 15:34:48.940386 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000
Feb 13 15:34:48.940496 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 05] add_size 100000 add_align 100000
Feb 13 15:34:48.940571 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x001fffff] to [bus 05] add_size 100000 add_align 100000
Feb 13 15:34:48.941264 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000
Feb 13 15:34:48.941348 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 06] add_size 100000 add_align 100000
Feb 13 15:34:48.941414 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff] to [bus 06] add_size 100000 add_align 100000
Feb 13 15:34:48.941531 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
Feb 13 15:34:48.941696 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 07] add_size 100000 add_align 100000
Feb 13 15:34:48.941767 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff] to [bus 07] add_size 100000 add_align 100000
Feb 13 15:34:48.941865 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
Feb 13 15:34:48.941942 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 08] add_size 200000 add_align 100000
Feb 13 15:34:48.942071 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff] to [bus 08] add_size 200000 add_align 100000
Feb 13 15:34:48.942145 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
Feb 13 15:34:48.942211 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 09] add_size 200000 add_align 100000
Feb 13 15:34:48.942275 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 09] add_size 200000 add_align 100000
Feb 13 15:34:48.942342 kernel: pci 0000:00:02.0: BAR 14: assigned [mem 0x10000000-0x101fffff]
Feb 13 15:34:48.942406 kernel: pci 0000:00:02.0: BAR 15: assigned [mem 0x8000000000-0x80001fffff 64bit pref]
Feb 13 15:34:48.942500 kernel: pci 0000:00:02.1: BAR 14: assigned [mem 0x10200000-0x103fffff]
Feb 13 15:34:48.942568 kernel: pci 0000:00:02.1: BAR 15: assigned [mem 0x8000200000-0x80003fffff 64bit pref]
Feb 13 15:34:48.942708 kernel: pci 0000:00:02.2: BAR 14: assigned [mem 0x10400000-0x105fffff]
Feb 13 15:34:48.942777 kernel: pci 0000:00:02.2: BAR 15: assigned [mem 0x8000400000-0x80005fffff 64bit pref]
Feb 13 15:34:48.942907 kernel: pci 0000:00:02.3: BAR 14: assigned [mem 0x10600000-0x107fffff]
Feb 13 15:34:48.942976 kernel: pci 0000:00:02.3: BAR 15: assigned [mem 0x8000600000-0x80007fffff 64bit pref]
Feb 13 15:34:48.943102 kernel: pci 0000:00:02.4: BAR 14: assigned [mem 0x10800000-0x109fffff]
Feb 13 15:34:48.943196 kernel: pci 0000:00:02.4: BAR 15: assigned [mem 0x8000800000-0x80009fffff 64bit pref]
Feb 13 15:34:48.943266 kernel: pci 0000:00:02.5: BAR 14: assigned [mem 0x10a00000-0x10bfffff]
Feb 13 15:34:48.943331 kernel: pci 0000:00:02.5: BAR 15: assigned [mem 0x8000a00000-0x8000bfffff 64bit pref]
Feb 13 15:34:48.943396 kernel: pci 0000:00:02.6: BAR 14: assigned [mem 0x10c00000-0x10dfffff]
Feb 13 15:34:48.943475 kernel: pci 0000:00:02.6: BAR 15: assigned [mem 0x8000c00000-0x8000dfffff 64bit pref]
Feb 13 15:34:48.943543 kernel: pci 0000:00:02.7: BAR 14: assigned [mem 0x10e00000-0x10ffffff]
Feb 13 15:34:48.943678 kernel: pci 0000:00:02.7: BAR 15: assigned [mem 0x8000e00000-0x8000ffffff 64bit pref]
Feb 13 15:34:48.943757 kernel: pci 0000:00:03.0: BAR 14: assigned [mem 0x11000000-0x111fffff]
Feb 13 15:34:48.943821 kernel: pci 0000:00:03.0: BAR 15: assigned [mem 0x8001000000-0x80011fffff 64bit pref]
Feb 13 15:34:48.943886 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8001200000-0x8001203fff 64bit pref]
Feb 13 15:34:48.944017 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x11200000-0x11200fff]
Feb 13 15:34:48.944099 kernel: pci 0000:00:02.0: BAR 0: assigned [mem 0x11201000-0x11201fff]
Feb 13 15:34:48.944163 kernel: pci 0000:00:02.0: BAR 13: assigned [io 0x1000-0x1fff]
Feb 13 15:34:48.944230 kernel: pci 0000:00:02.1: BAR 0: assigned [mem 0x11202000-0x11202fff]
Feb 13 15:34:48.944298 kernel: pci 0000:00:02.1: BAR 13: assigned [io 0x2000-0x2fff]
Feb 13 15:34:48.944397 kernel: pci 0000:00:02.2: BAR 0: assigned [mem 0x11203000-0x11203fff]
Feb 13 15:34:48.944511 kernel: pci 0000:00:02.2: BAR 13: assigned [io 0x3000-0x3fff]
Feb 13 15:34:48.944581 kernel: pci 0000:00:02.3: BAR 0: assigned [mem 0x11204000-0x11204fff]
Feb 13 15:34:48.944745 kernel: pci 0000:00:02.3: BAR 13: assigned [io 0x4000-0x4fff]
Feb 13 15:34:48.944817 kernel: pci 0000:00:02.4: BAR 0: assigned [mem 0x11205000-0x11205fff]
Feb 13 15:34:48.944882 kernel: pci 0000:00:02.4: BAR 13: assigned [io 0x5000-0x5fff]
Feb 13 15:34:48.944972 kernel: pci 0000:00:02.5: BAR 0: assigned [mem 0x11206000-0x11206fff]
Feb 13 15:34:48.945046 kernel: pci 0000:00:02.5: BAR 13: assigned [io 0x6000-0x6fff]
Feb 13 15:34:48.945114 kernel: pci 0000:00:02.6: BAR 0: assigned [mem 0x11207000-0x11207fff]
Feb 13 15:34:48.945184 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x7000-0x7fff]
Feb 13 15:34:48.945248 kernel: pci 0000:00:02.7: BAR 0: assigned [mem 0x11208000-0x11208fff]
Feb 13 15:34:48.945311 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x8000-0x8fff]
Feb 13 15:34:48.945375 kernel: pci 0000:00:03.0: BAR 0: assigned [mem 0x11209000-0x11209fff]
Feb 13 15:34:48.945438 kernel: pci 0000:00:03.0: BAR 13: assigned [io 0x9000-0x9fff]
Feb 13 15:34:48.945539 kernel: pci 0000:00:04.0: BAR 0: assigned [io 0xa000-0xa007]
Feb 13 15:34:48.945632 kernel: pci 0000:01:00.0: BAR 6: assigned [mem 0x10000000-0x1007ffff pref]
Feb 13 15:34:48.945710 kernel: pci 0000:01:00.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Feb 13 15:34:48.945818 kernel: pci 0000:01:00.0: BAR 1: assigned [mem 0x10080000-0x10080fff]
Feb 13 15:34:48.945897 kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
Feb 13 15:34:48.945999 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff]
Feb 13 15:34:48.946067 kernel: pci 0000:00:02.0: bridge window [mem 0x10000000-0x101fffff]
Feb 13 15:34:48.946130 kernel: pci 0000:00:02.0: bridge window [mem 0x8000000000-0x80001fffff 64bit pref]
Feb 13 15:34:48.946204 kernel: pci 0000:02:00.0: BAR 0: assigned [mem 0x10200000-0x10203fff 64bit]
Feb 13 15:34:48.946276 kernel: pci 0000:00:02.1: PCI bridge to [bus 02]
Feb 13 15:34:48.946340 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff]
Feb 13 15:34:48.946403 kernel: pci 0000:00:02.1: bridge window [mem 0x10200000-0x103fffff]
Feb 13 15:34:48.946530 kernel: pci 0000:00:02.1: bridge window [mem 0x8000200000-0x80003fffff 64bit pref]
Feb 13 15:34:48.946676 kernel: pci 0000:03:00.0: BAR 4: assigned [mem 0x8000400000-0x8000403fff 64bit pref]
Feb 13 15:34:48.946795 kernel: pci 0000:03:00.0: BAR 1: assigned [mem 0x10400000-0x10400fff]
Feb 13 15:34:48.946926 kernel: pci 0000:00:02.2: PCI bridge to [bus 03]
Feb 13 15:34:48.947027 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff]
Feb 13 15:34:48.947168 kernel: pci 0000:00:02.2: bridge window [mem 0x10400000-0x105fffff]
Feb 13 15:34:48.947252 kernel: pci 0000:00:02.2: bridge window [mem 0x8000400000-0x80005fffff 64bit pref]
Feb 13 15:34:48.947361 kernel: pci 0000:04:00.0: BAR 4: assigned [mem 0x8000600000-0x8000603fff 64bit pref]
Feb 13 15:34:48.947432 kernel: pci 0000:00:02.3: PCI bridge to [bus 04]
Feb 13 15:34:48.947566 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff]
Feb 13 15:34:48.947677 kernel: pci 0000:00:02.3: bridge window [mem 0x10600000-0x107fffff]
Feb 13 15:34:48.947792 kernel: pci 0000:00:02.3: bridge window [mem 0x8000600000-0x80007fffff 64bit pref]
Feb 13 15:34:48.947923 kernel: pci 0000:05:00.0: BAR 4: assigned [mem 0x8000800000-0x8000803fff 64bit pref]
Feb 13 15:34:48.948038 kernel: pci 0000:05:00.0: BAR 1: assigned [mem 0x10800000-0x10800fff]
Feb 13 15:34:48.948112 kernel: pci 0000:00:02.4: PCI bridge to [bus 05]
Feb 13 15:34:48.948191 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff]
Feb 13 15:34:48.948301 kernel: pci 0000:00:02.4: bridge window [mem 0x10800000-0x109fffff]
Feb 13 15:34:48.948403 kernel: pci 0000:00:02.4: bridge window [mem 0x8000800000-0x80009fffff 64bit pref]
Feb 13 15:34:48.948618 kernel: pci 0000:06:00.0: BAR 4: assigned [mem 0x8000a00000-0x8000a03fff 64bit pref]
Feb 13 15:34:48.948771 kernel: pci 0000:06:00.0: BAR 1: assigned [mem 0x10a00000-0x10a00fff]
Feb 13 15:34:48.948915 kernel: pci 0000:00:02.5: PCI bridge to [bus 06]
Feb 13 15:34:48.948987 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff]
Feb 13 15:34:48.949089 kernel: pci 0000:00:02.5: bridge window [mem 0x10a00000-0x10bfffff]
Feb 13 15:34:48.949169 kernel: pci 0000:00:02.5: bridge window [mem 0x8000a00000-0x8000bfffff 64bit pref]
Feb 13 15:34:48.949245 kernel: pci 0000:07:00.0: BAR 6: assigned [mem 0x10c00000-0x10c7ffff pref]
Feb 13 15:34:48.949311 kernel: pci 0000:07:00.0: BAR 4: assigned [mem 0x8000c00000-0x8000c03fff 64bit pref]
Feb 13 15:34:48.949386 kernel: pci 0000:07:00.0: BAR 1: assigned [mem 0x10c80000-0x10c80fff]
Feb 13 15:34:48.949471 kernel: pci 0000:00:02.6: PCI bridge to [bus 07]
Feb 13 15:34:48.949542 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff]
Feb 13 15:34:48.949690 kernel: pci 0000:00:02.6: bridge window [mem 0x10c00000-0x10dfffff]
Feb 13 15:34:48.949777 kernel: pci 0000:00:02.6: bridge window [mem 0x8000c00000-0x8000dfffff 64bit pref]
Feb 13 15:34:48.949848 kernel: pci 0000:00:02.7: PCI bridge to [bus 08]
Feb 13 15:34:48.949910 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff]
Feb 13 15:34:48.950023 kernel: pci 0000:00:02.7: bridge window [mem 0x10e00000-0x10ffffff]
Feb 13 15:34:48.950090 kernel: pci 0000:00:02.7: bridge window [mem 0x8000e00000-0x8000ffffff 64bit pref]
Feb 13 15:34:48.950156 kernel: pci 0000:00:03.0: PCI bridge to [bus 09]
Feb 13 15:34:48.950217 kernel: pci 0000:00:03.0: bridge window [io 0x9000-0x9fff]
Feb 13 15:34:48.950278 kernel: pci 0000:00:03.0: bridge window [mem 0x11000000-0x111fffff]
Feb 13 15:34:48.950339 kernel: pci 0000:00:03.0: bridge window [mem 0x8001000000-0x80011fffff 64bit pref]
Feb 13 15:34:48.950430 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Feb 13 15:34:48.950506 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Feb 13 15:34:48.950570 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Feb 13 15:34:48.950730 kernel: pci_bus 0000:01: resource 0 [io 0x1000-0x1fff]
Feb 13 15:34:48.950793 kernel: pci_bus 0000:01: resource 1 [mem 0x10000000-0x101fffff]
Feb 13 15:34:48.950850 kernel: pci_bus 0000:01: resource 2 [mem 0x8000000000-0x80001fffff 64bit pref]
Feb 13 15:34:48.950916 kernel: pci_bus 0000:02: resource 0 [io 0x2000-0x2fff]
Feb 13 15:34:48.950973 kernel: pci_bus 0000:02: resource 1 [mem 0x10200000-0x103fffff]
Feb 13 15:34:48.951037 kernel: pci_bus 0000:02: resource 2 [mem 0x8000200000-0x80003fffff 64bit pref]
Feb 13 15:34:48.951125 kernel: pci_bus 0000:03: resource 0 [io 0x3000-0x3fff]
Feb 13 15:34:48.951227 kernel: pci_bus 0000:03: resource 1 [mem 0x10400000-0x105fffff]
Feb 13 15:34:48.951299 kernel: pci_bus 0000:03: resource 2 [mem 0x8000400000-0x80005fffff 64bit pref]
Feb 13 15:34:48.951368 kernel: pci_bus 0000:04: resource 0 [io 0x4000-0x4fff]
Feb 13 15:34:48.951428 kernel: pci_bus 0000:04: resource 1 [mem 0x10600000-0x107fffff]
Feb 13 15:34:48.951534 kernel: pci_bus 0000:04: resource 2 [mem 0x8000600000-0x80007fffff 64bit pref]
Feb 13 15:34:48.951698 kernel: pci_bus 0000:05: resource 0 [io 0x5000-0x5fff]
Feb 13 15:34:48.951766 kernel: pci_bus 0000:05: resource 1 [mem 0x10800000-0x109fffff]
Feb 13 15:34:48.951829 kernel: pci_bus 0000:05: resource 2 [mem 0x8000800000-0x80009fffff 64bit pref]
Feb 13 15:34:48.951897 kernel: pci_bus 0000:06: resource 0 [io 0x6000-0x6fff]
Feb 13 15:34:48.951960 kernel: pci_bus 0000:06: resource 1 [mem 0x10a00000-0x10bfffff]
Feb 13 15:34:48.952019 kernel: pci_bus 0000:06: resource 2 [mem 0x8000a00000-0x8000bfffff 64bit pref]
Feb 13 15:34:48.952087 kernel: pci_bus 0000:07: resource 0 [io 0x7000-0x7fff]
Feb 13 15:34:48.952146 kernel: pci_bus 0000:07: resource 1 [mem 0x10c00000-0x10dfffff]
Feb 13 15:34:48.952205 kernel: pci_bus 0000:07: resource 2 [mem 0x8000c00000-0x8000dfffff 64bit pref]
Feb 13 15:34:48.952277 kernel: pci_bus 0000:08: resource 0 [io 0x8000-0x8fff]
Feb 13 15:34:48.952337 kernel: pci_bus 0000:08: resource 1 [mem 0x10e00000-0x10ffffff]
Feb 13 15:34:48.952399 kernel: pci_bus 0000:08: resource 2 [mem 0x8000e00000-0x8000ffffff 64bit pref]
Feb 13 15:34:48.952488 kernel: pci_bus 0000:09: resource 0 [io 0x9000-0x9fff]
Feb 13 15:34:48.952551 kernel: pci_bus 0000:09: resource 1 [mem 0x11000000-0x111fffff]
Feb 13 15:34:48.952685 kernel: pci_bus 0000:09: resource 2 [mem 0x8001000000-0x80011fffff 64bit pref]
Feb 13 15:34:48.952699 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Feb 13 15:34:48.952708 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Feb 13 15:34:48.952716 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Feb 13 15:34:48.952723 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Feb 13 15:34:48.952742 kernel: iommu: Default domain type: Translated
Feb 13 15:34:48.952756 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Feb 13 15:34:48.952765 kernel: efivars: Registered efivars operations
Feb 13 15:34:48.952773 kernel: vgaarb: loaded
Feb 13 15:34:48.952781 kernel: clocksource: Switched to clocksource arch_sys_counter
Feb 13 15:34:48.952788 kernel: VFS: Disk quotas dquot_6.6.0
Feb 13 15:34:48.952796 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Feb 13 15:34:48.952804 kernel: pnp: PnP ACPI init
Feb 13 15:34:48.952885 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Feb 13 15:34:48.952899 kernel: pnp: PnP ACPI: found 1 devices
Feb 13 15:34:48.952907 kernel: NET: Registered PF_INET protocol family
Feb 13 15:34:48.952914 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Feb 13 15:34:48.952922 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Feb 13 15:34:48.952929 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Feb 13 15:34:48.952937 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Feb 13 15:34:48.952944 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Feb 13 15:34:48.952952 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Feb 13 15:34:48.952961 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 13 15:34:48.952968 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 13 15:34:48.952976 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Feb 13 15:34:48.953050 kernel: pci 0000:02:00.0: enabling device (0000 -> 0002)
Feb 13 15:34:48.953061 kernel: PCI: CLS 0 bytes, default 64
Feb 13 15:34:48.953069 kernel: kvm [1]: HYP mode not available
Feb 13 15:34:48.953076 kernel: Initialise system trusted keyrings
Feb 13 15:34:48.953084 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Feb 13 15:34:48.953092 kernel: Key type asymmetric registered
Feb 13 15:34:48.953101 kernel: Asymmetric key parser 'x509' registered
Feb 13 15:34:48.953109 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Feb 13 15:34:48.953116 kernel: io scheduler mq-deadline registered
Feb 13 15:34:48.953123 kernel: io scheduler kyber registered
Feb 13 15:34:48.953131 kernel: io scheduler bfq registered
Feb 13 15:34:48.953139 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37
Feb 13 15:34:48.953205 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 50
Feb 13 15:34:48.953268 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 50
Feb 13 15:34:48.953333 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Feb 13 15:34:48.953399 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 51
Feb 13 15:34:48.953513 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 51
Feb 13 15:34:48.953582 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis-
LLActRep+ Feb 13 15:34:48.953682 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 52 Feb 13 15:34:48.953750 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 52 Feb 13 15:34:48.953819 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Feb 13 15:34:48.953888 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 53 Feb 13 15:34:48.953952 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 53 Feb 13 15:34:48.954016 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Feb 13 15:34:48.954096 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 54 Feb 13 15:34:48.954173 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 54 Feb 13 15:34:48.954250 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Feb 13 15:34:48.954339 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 55 Feb 13 15:34:48.954421 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 55 Feb 13 15:34:48.954533 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Feb 13 15:34:48.954705 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 56 Feb 13 15:34:48.954778 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 56 Feb 13 15:34:48.954852 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Feb 13 15:34:48.954919 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 57 Feb 13 15:34:48.954985 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 57 Feb 13 15:34:48.955108 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Feb 13 
15:34:48.955120 kernel: ACPI: \_SB_.PCI0.GSI3: Enabled at IRQ 38 Feb 13 15:34:48.955186 kernel: pcieport 0000:00:03.0: PME: Signaling with IRQ 58 Feb 13 15:34:48.955255 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 58 Feb 13 15:34:48.955318 kernel: pcieport 0000:00:03.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Feb 13 15:34:48.955328 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Feb 13 15:34:48.955336 kernel: ACPI: button: Power Button [PWRB] Feb 13 15:34:48.955344 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Feb 13 15:34:48.955455 kernel: virtio-pci 0000:04:00.0: enabling device (0000 -> 0002) Feb 13 15:34:48.955541 kernel: virtio-pci 0000:07:00.0: enabling device (0000 -> 0002) Feb 13 15:34:48.955552 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Feb 13 15:34:48.955564 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35 Feb 13 15:34:48.955657 kernel: serial 0000:00:04.0: enabling device (0000 -> 0001) Feb 13 15:34:48.955670 kernel: 0000:00:04.0: ttyS0 at I/O 0xa000 (irq = 45, base_baud = 115200) is a 16550A Feb 13 15:34:48.955710 kernel: thunder_xcv, ver 1.0 Feb 13 15:34:48.955719 kernel: thunder_bgx, ver 1.0 Feb 13 15:34:48.955727 kernel: nicpf, ver 1.0 Feb 13 15:34:48.955734 kernel: nicvf, ver 1.0 Feb 13 15:34:48.955828 kernel: rtc-efi rtc-efi.0: registered as rtc0 Feb 13 15:34:48.955917 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-02-13T15:34:48 UTC (1739460888) Feb 13 15:34:48.955928 kernel: hid: raw HID events driver (C) Jiri Kosina Feb 13 15:34:48.955935 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available Feb 13 15:34:48.955943 kernel: watchdog: Delayed init of the lockup detector failed: -19 Feb 13 15:34:48.955951 kernel: watchdog: Hard watchdog permanently disabled Feb 13 15:34:48.955958 kernel: NET: Registered PF_INET6 protocol family Feb 13 15:34:48.955966 kernel: Segment 
Routing with IPv6 Feb 13 15:34:48.955973 kernel: In-situ OAM (IOAM) with IPv6 Feb 13 15:34:48.955981 kernel: NET: Registered PF_PACKET protocol family Feb 13 15:34:48.955992 kernel: Key type dns_resolver registered Feb 13 15:34:48.955999 kernel: registered taskstats version 1 Feb 13 15:34:48.956007 kernel: Loading compiled-in X.509 certificates Feb 13 15:34:48.956015 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.71-flatcar: 62d673f884efd54b6d6ef802a9b879413c8a346e' Feb 13 15:34:48.956022 kernel: Key type .fscrypt registered Feb 13 15:34:48.956030 kernel: Key type fscrypt-provisioning registered Feb 13 15:34:48.956037 kernel: ima: No TPM chip found, activating TPM-bypass! Feb 13 15:34:48.956045 kernel: ima: Allocated hash algorithm: sha1 Feb 13 15:34:48.956052 kernel: ima: No architecture policies found Feb 13 15:34:48.956061 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Feb 13 15:34:48.956069 kernel: clk: Disabling unused clocks Feb 13 15:34:48.956077 kernel: Freeing unused kernel memory: 39936K Feb 13 15:34:48.956084 kernel: Run /init as init process Feb 13 15:34:48.956092 kernel: with arguments: Feb 13 15:34:48.956099 kernel: /init Feb 13 15:34:48.956107 kernel: with environment: Feb 13 15:34:48.956114 kernel: HOME=/ Feb 13 15:34:48.956123 kernel: TERM=linux Feb 13 15:34:48.956131 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Feb 13 15:34:48.956141 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Feb 13 15:34:48.956151 systemd[1]: Detected virtualization kvm. Feb 13 15:34:48.956159 systemd[1]: Detected architecture arm64. Feb 13 15:34:48.956167 systemd[1]: Running in initrd. 
Feb 13 15:34:48.956175 systemd[1]: No hostname configured, using default hostname. Feb 13 15:34:48.956183 systemd[1]: Hostname set to <localhost>. Feb 13 15:34:48.956192 systemd[1]: Initializing machine ID from VM UUID. Feb 13 15:34:48.956200 systemd[1]: Queued start job for default target initrd.target. Feb 13 15:34:48.956208 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 15:34:48.956217 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 15:34:48.956225 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Feb 13 15:34:48.956233 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Feb 13 15:34:48.956241 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Feb 13 15:34:48.956250 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Feb 13 15:34:48.956261 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Feb 13 15:34:48.956269 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Feb 13 15:34:48.956278 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 15:34:48.956285 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Feb 13 15:34:48.956293 systemd[1]: Reached target paths.target - Path Units. Feb 13 15:34:48.956301 systemd[1]: Reached target slices.target - Slice Units. Feb 13 15:34:48.956309 systemd[1]: Reached target swap.target - Swaps. Feb 13 15:34:48.956319 systemd[1]: Reached target timers.target - Timer Units. Feb 13 15:34:48.956327 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Feb 13 15:34:48.956335 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Feb 13 15:34:48.956344 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Feb 13 15:34:48.956351 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Feb 13 15:34:48.956359 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Feb 13 15:34:48.956368 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Feb 13 15:34:48.956376 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 15:34:48.956384 systemd[1]: Reached target sockets.target - Socket Units. Feb 13 15:34:48.956393 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Feb 13 15:34:48.956401 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Feb 13 15:34:48.956409 systemd[1]: Finished network-cleanup.service - Network Cleanup. Feb 13 15:34:48.956417 systemd[1]: Starting systemd-fsck-usr.service... Feb 13 15:34:48.956425 systemd[1]: Starting systemd-journald.service - Journal Service... Feb 13 15:34:48.956433 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Feb 13 15:34:48.956482 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 15:34:48.956494 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Feb 13 15:34:48.956505 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 15:34:48.956514 systemd[1]: Finished systemd-fsck-usr.service. Feb 13 15:34:48.956550 systemd-journald[237]: Collecting audit messages is disabled. Feb 13 15:34:48.956573 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Feb 13 15:34:48.956582 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. 
Feb 13 15:34:48.956590 kernel: Bridge firewalling registered Feb 13 15:34:48.956622 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 15:34:48.956631 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Feb 13 15:34:48.956639 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Feb 13 15:34:48.956649 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Feb 13 15:34:48.956658 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Feb 13 15:34:48.956667 systemd-journald[237]: Journal started Feb 13 15:34:48.956691 systemd-journald[237]: Runtime Journal (/run/log/journal/90ba851ac4694578983151c1b5c7910a) is 8.0M, max 76.6M, 68.6M free. Feb 13 15:34:48.917012 systemd-modules-load[238]: Inserted module 'overlay' Feb 13 15:34:48.937076 systemd-modules-load[238]: Inserted module 'br_netfilter' Feb 13 15:34:48.965111 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Feb 13 15:34:48.969071 systemd[1]: Started systemd-journald.service - Journal Service. Feb 13 15:34:48.975569 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Feb 13 15:34:48.988678 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Feb 13 15:34:48.990360 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 15:34:48.994433 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 15:34:48.995684 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 15:34:49.001904 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Feb 13 15:34:49.009876 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... 
Feb 13 15:34:49.032197 dracut-cmdline[275]: dracut-dracut-053 Feb 13 15:34:49.035115 dracut-cmdline[275]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=685b18f1e2a119f561f35348e788538aade62ddb9fa889a87d9e00058aaa4b5a Feb 13 15:34:49.042605 systemd-resolved[276]: Positive Trust Anchors: Feb 13 15:34:49.043285 systemd-resolved[276]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 13 15:34:49.043320 systemd-resolved[276]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Feb 13 15:34:49.054899 systemd-resolved[276]: Defaulting to hostname 'linux'. Feb 13 15:34:49.057012 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Feb 13 15:34:49.057983 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Feb 13 15:34:49.138655 kernel: SCSI subsystem initialized Feb 13 15:34:49.143655 kernel: Loading iSCSI transport class v2.0-870. Feb 13 15:34:49.152647 kernel: iscsi: registered transport (tcp) Feb 13 15:34:49.166679 kernel: iscsi: registered transport (qla4xxx) Feb 13 15:34:49.166770 kernel: QLogic iSCSI HBA Driver Feb 13 15:34:49.219296 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. 
Feb 13 15:34:49.225872 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Feb 13 15:34:49.256876 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Feb 13 15:34:49.256944 kernel: device-mapper: uevent: version 1.0.3 Feb 13 15:34:49.257686 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Feb 13 15:34:49.311652 kernel: raid6: neonx8 gen() 15291 MB/s Feb 13 15:34:49.326650 kernel: raid6: neonx4 gen() 15443 MB/s Feb 13 15:34:49.343632 kernel: raid6: neonx2 gen() 12645 MB/s Feb 13 15:34:49.360636 kernel: raid6: neonx1 gen() 9909 MB/s Feb 13 15:34:49.377633 kernel: raid6: int64x8 gen() 6591 MB/s Feb 13 15:34:49.394645 kernel: raid6: int64x4 gen() 7073 MB/s Feb 13 15:34:49.411650 kernel: raid6: int64x2 gen() 5957 MB/s Feb 13 15:34:49.428640 kernel: raid6: int64x1 gen() 4894 MB/s Feb 13 15:34:49.428680 kernel: raid6: using algorithm neonx4 gen() 15443 MB/s Feb 13 15:34:49.445651 kernel: raid6: .... xor() 11751 MB/s, rmw enabled Feb 13 15:34:49.445713 kernel: raid6: using neon recovery algorithm Feb 13 15:34:49.450618 kernel: xor: measuring software checksum speed Feb 13 15:34:49.450649 kernel: 8regs : 21618 MB/sec Feb 13 15:34:49.451885 kernel: 32regs : 16950 MB/sec Feb 13 15:34:49.451945 kernel: arm64_neon : 27682 MB/sec Feb 13 15:34:49.451969 kernel: xor: using function: arm64_neon (27682 MB/sec) Feb 13 15:34:49.504636 kernel: Btrfs loaded, zoned=no, fsverity=no Feb 13 15:34:49.520330 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Feb 13 15:34:49.528789 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 15:34:49.543332 systemd-udevd[458]: Using default interface naming scheme 'v255'. Feb 13 15:34:49.547399 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. 
Feb 13 15:34:49.558037 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Feb 13 15:34:49.575574 dracut-pre-trigger[468]: rd.md=0: removing MD RAID activation Feb 13 15:34:49.612063 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Feb 13 15:34:49.618863 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Feb 13 15:34:49.675566 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 15:34:49.686995 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Feb 13 15:34:49.714335 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Feb 13 15:34:49.718398 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Feb 13 15:34:49.720289 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 15:34:49.721818 systemd[1]: Reached target remote-fs.target - Remote File Systems. Feb 13 15:34:49.730799 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Feb 13 15:34:49.760675 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Feb 13 15:34:49.787643 kernel: scsi host0: Virtio SCSI HBA Feb 13 15:34:49.804640 kernel: scsi 0:0:0:0: CD-ROM QEMU QEMU CD-ROM 2.5+ PQ: 0 ANSI: 5 Feb 13 15:34:49.806650 kernel: scsi 0:0:0:1: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5 Feb 13 15:34:49.817811 kernel: ACPI: bus type USB registered Feb 13 15:34:49.817862 kernel: usbcore: registered new interface driver usbfs Feb 13 15:34:49.818615 kernel: usbcore: registered new interface driver hub Feb 13 15:34:49.819630 kernel: usbcore: registered new device driver usb Feb 13 15:34:49.826451 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 13 15:34:49.827998 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Feb 13 15:34:49.829775 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Feb 13 15:34:49.830432 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 13 15:34:49.830663 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 15:34:49.834388 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 15:34:49.842923 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 15:34:49.863133 kernel: sr 0:0:0:0: Power-on or device reset occurred Feb 13 15:34:49.869906 kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 16x/50x cd/rw xa/form2 cdda tray Feb 13 15:34:49.870247 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Feb 13 15:34:49.870263 kernel: sr 0:0:0:0: Attached scsi CD-ROM sr0 Feb 13 15:34:49.865845 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 15:34:49.875825 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Feb 13 15:34:49.879251 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller Feb 13 15:34:49.894193 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 1 Feb 13 15:34:49.894313 kernel: xhci_hcd 0000:02:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010 Feb 13 15:34:49.894393 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller Feb 13 15:34:49.894507 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 2 Feb 13 15:34:49.894589 kernel: xhci_hcd 0000:02:00.0: Host supports USB 3.0 SuperSpeed Feb 13 15:34:49.894692 kernel: hub 1-0:1.0: USB hub found Feb 13 15:34:49.894873 kernel: hub 1-0:1.0: 4 ports detected Feb 13 15:34:49.894976 kernel: sd 0:0:0:1: Power-on or device reset occurred Feb 13 15:34:49.899650 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. 
Feb 13 15:34:49.899806 kernel: sd 0:0:0:1: [sda] 80003072 512-byte logical blocks: (41.0 GB/38.1 GiB) Feb 13 15:34:49.899920 kernel: sd 0:0:0:1: [sda] Write Protect is off Feb 13 15:34:49.899999 kernel: sd 0:0:0:1: [sda] Mode Sense: 63 00 00 08 Feb 13 15:34:49.900077 kernel: sd 0:0:0:1: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Feb 13 15:34:49.900164 kernel: hub 2-0:1.0: USB hub found Feb 13 15:34:49.900262 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Feb 13 15:34:49.900272 kernel: hub 2-0:1.0: 4 ports detected Feb 13 15:34:49.900351 kernel: GPT:17805311 != 80003071 Feb 13 15:34:49.900360 kernel: GPT:Alternate GPT header not at the end of the disk. Feb 13 15:34:49.900369 kernel: GPT:17805311 != 80003071 Feb 13 15:34:49.900377 kernel: GPT: Use GNU Parted to correct GPT errors. Feb 13 15:34:49.900386 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Feb 13 15:34:49.900395 kernel: sd 0:0:0:1: [sda] Attached SCSI disk Feb 13 15:34:49.914249 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 15:34:49.949633 kernel: BTRFS: device fsid dbbe73f5-49db-4e16-b023-d47ce63b488f devid 1 transid 41 /dev/sda3 scanned by (udev-worker) (503) Feb 13 15:34:49.960300 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM. Feb 13 15:34:49.962811 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by (udev-worker) (517) Feb 13 15:34:49.967430 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT. Feb 13 15:34:49.983357 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Feb 13 15:34:49.987710 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A. Feb 13 15:34:49.988428 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A. 
Feb 13 15:34:49.995803 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Feb 13 15:34:50.005429 disk-uuid[576]: Primary Header is updated. Feb 13 15:34:50.005429 disk-uuid[576]: Secondary Entries is updated. Feb 13 15:34:50.005429 disk-uuid[576]: Secondary Header is updated. Feb 13 15:34:50.021672 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Feb 13 15:34:50.122671 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd Feb 13 15:34:50.367646 kernel: usb 1-2: new high-speed USB device number 3 using xhci_hcd Feb 13 15:34:50.503620 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input1 Feb 13 15:34:50.504720 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:02:00.0-1/input0 Feb 13 15:34:50.506644 kernel: input: QEMU QEMU USB Keyboard as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-2/1-2:1.0/0003:0627:0001.0002/input/input2 Feb 13 15:34:50.560644 kernel: hid-generic 0003:0627:0001.0002: input,hidraw1: USB HID v1.11 Keyboard [QEMU QEMU USB Keyboard] on usb-0000:02:00.0-2/input0 Feb 13 15:34:50.561021 kernel: usbcore: registered new interface driver usbhid Feb 13 15:34:50.562672 kernel: usbhid: USB HID core driver Feb 13 15:34:51.022652 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Feb 13 15:34:51.022710 disk-uuid[577]: The operation has completed successfully. Feb 13 15:34:51.089615 systemd[1]: disk-uuid.service: Deactivated successfully. Feb 13 15:34:51.089728 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Feb 13 15:34:51.103816 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Feb 13 15:34:51.121721 sh[588]: Success Feb 13 15:34:51.134626 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Feb 13 15:34:51.197124 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. 
Feb 13 15:34:51.200680 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Feb 13 15:34:51.202932 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Feb 13 15:34:51.235949 kernel: BTRFS info (device dm-0): first mount of filesystem dbbe73f5-49db-4e16-b023-d47ce63b488f Feb 13 15:34:51.236113 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Feb 13 15:34:51.236137 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Feb 13 15:34:51.236156 kernel: BTRFS info (device dm-0): disabling log replay at mount time Feb 13 15:34:51.236694 kernel: BTRFS info (device dm-0): using free space tree Feb 13 15:34:51.242619 kernel: BTRFS info (device dm-0): enabling ssd optimizations Feb 13 15:34:51.245156 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Feb 13 15:34:51.245881 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Feb 13 15:34:51.250821 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Feb 13 15:34:51.252158 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Feb 13 15:34:51.275686 kernel: BTRFS info (device sda6): first mount of filesystem f03a17c4-6ca2-4f02-a9a3-5e771d63df74 Feb 13 15:34:51.275749 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Feb 13 15:34:51.275772 kernel: BTRFS info (device sda6): using free space tree Feb 13 15:34:51.279632 kernel: BTRFS info (device sda6): enabling ssd optimizations Feb 13 15:34:51.279699 kernel: BTRFS info (device sda6): auto enabling async discard Feb 13 15:34:51.289042 systemd[1]: mnt-oem.mount: Deactivated successfully. Feb 13 15:34:51.290004 kernel: BTRFS info (device sda6): last unmount of filesystem f03a17c4-6ca2-4f02-a9a3-5e771d63df74 Feb 13 15:34:51.295073 systemd[1]: Finished ignition-setup.service - Ignition (setup). 
Feb 13 15:34:51.303854 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Feb 13 15:34:51.385878 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Feb 13 15:34:51.393864 systemd[1]: Starting systemd-networkd.service - Network Configuration... Feb 13 15:34:51.420952 ignition[672]: Ignition 2.20.0 Feb 13 15:34:51.420964 ignition[672]: Stage: fetch-offline Feb 13 15:34:51.421031 ignition[672]: no configs at "/usr/lib/ignition/base.d" Feb 13 15:34:51.421045 ignition[672]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Feb 13 15:34:51.421221 ignition[672]: parsed url from cmdline: "" Feb 13 15:34:51.426053 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Feb 13 15:34:51.421225 ignition[672]: no config URL provided Feb 13 15:34:51.428446 systemd-networkd[774]: lo: Link UP Feb 13 15:34:51.421230 ignition[672]: reading system config file "/usr/lib/ignition/user.ign" Feb 13 15:34:51.428451 systemd-networkd[774]: lo: Gained carrier Feb 13 15:34:51.421239 ignition[672]: no config at "/usr/lib/ignition/user.ign" Feb 13 15:34:51.430204 systemd-networkd[774]: Enumeration completed Feb 13 15:34:51.421245 ignition[672]: failed to fetch config: resource requires networking Feb 13 15:34:51.431188 systemd[1]: Started systemd-networkd.service - Network Configuration. Feb 13 15:34:51.421479 ignition[672]: Ignition finished successfully Feb 13 15:34:51.431234 systemd-networkd[774]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 15:34:51.431237 systemd-networkd[774]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 13 15:34:51.432853 systemd-networkd[774]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Feb 13 15:34:51.432856 systemd-networkd[774]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 13 15:34:51.432970 systemd[1]: Reached target network.target - Network. Feb 13 15:34:51.434063 systemd-networkd[774]: eth0: Link UP Feb 13 15:34:51.434067 systemd-networkd[774]: eth0: Gained carrier Feb 13 15:34:51.434076 systemd-networkd[774]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 15:34:51.437372 systemd-networkd[774]: eth1: Link UP Feb 13 15:34:51.437376 systemd-networkd[774]: eth1: Gained carrier Feb 13 15:34:51.437384 systemd-networkd[774]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 15:34:51.438779 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Feb 13 15:34:51.458544 ignition[777]: Ignition 2.20.0 Feb 13 15:34:51.458555 ignition[777]: Stage: fetch Feb 13 15:34:51.459511 ignition[777]: no configs at "/usr/lib/ignition/base.d" Feb 13 15:34:51.459529 ignition[777]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Feb 13 15:34:51.459678 ignition[777]: parsed url from cmdline: "" Feb 13 15:34:51.459683 ignition[777]: no config URL provided Feb 13 15:34:51.459689 ignition[777]: reading system config file "/usr/lib/ignition/user.ign" Feb 13 15:34:51.459699 ignition[777]: no config at "/usr/lib/ignition/user.ign" Feb 13 15:34:51.463906 systemd-networkd[774]: eth1: DHCPv4 address 10.0.0.3/32, gateway 10.0.0.1 acquired from 10.0.0.1 Feb 13 15:34:51.459787 ignition[777]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #1 Feb 13 15:34:51.462588 ignition[777]: GET error: Get "http://169.254.169.254/hetzner/v1/userdata": dial tcp 169.254.169.254:80: connect: network is unreachable Feb 13 15:34:51.493722 systemd-networkd[774]: eth0: DHCPv4 address 49.13.212.147/32, gateway 172.31.1.1 acquired from 172.31.1.1
Feb 13 15:34:51.663055 ignition[777]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #2 Feb 13 15:34:51.672059 ignition[777]: GET result: OK Feb 13 15:34:51.672207 ignition[777]: parsing config with SHA512: 8d13abe072c52a1763b346a66357d3894f4bcf2d480db7460788e4204f322cb3646fa64db9bcbaa9b86848ccae8ecb1d34ed1ea47b0767ce500ea1081ff629cf Feb 13 15:34:51.679979 unknown[777]: fetched base config from "system" Feb 13 15:34:51.680777 ignition[777]: fetch: fetch complete Feb 13 15:34:51.679992 unknown[777]: fetched base config from "system" Feb 13 15:34:51.680785 ignition[777]: fetch: fetch passed Feb 13 15:34:51.680003 unknown[777]: fetched user config from "hetzner" Feb 13 15:34:51.680854 ignition[777]: Ignition finished successfully Feb 13 15:34:51.683403 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Feb 13 15:34:51.688801 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Feb 13 15:34:51.701635 ignition[785]: Ignition 2.20.0 Feb 13 15:34:51.701645 ignition[785]: Stage: kargs Feb 13 15:34:51.701827 ignition[785]: no configs at "/usr/lib/ignition/base.d" Feb 13 15:34:51.701837 ignition[785]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Feb 13 15:34:51.702807 ignition[785]: kargs: kargs passed Feb 13 15:34:51.702864 ignition[785]: Ignition finished successfully Feb 13 15:34:51.705947 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Feb 13 15:34:51.710819 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Feb 13 15:34:51.723579 ignition[792]: Ignition 2.20.0 Feb 13 15:34:51.723605 ignition[792]: Stage: disks Feb 13 15:34:51.723811 ignition[792]: no configs at "/usr/lib/ignition/base.d" Feb 13 15:34:51.723822 ignition[792]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Feb 13 15:34:51.724870 ignition[792]: disks: disks passed Feb 13 15:34:51.724928 ignition[792]: Ignition finished successfully Feb 13 15:34:51.729685 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Feb 13 15:34:51.731085 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Feb 13 15:34:51.731725 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Feb 13 15:34:51.732355 systemd[1]: Reached target local-fs.target - Local File Systems.
Feb 13 15:34:51.733094 systemd[1]: Reached target sysinit.target - System Initialization.
Feb 13 15:34:51.733796 systemd[1]: Reached target basic.target - Basic System.
Feb 13 15:34:51.740818 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Feb 13 15:34:51.756585 systemd-fsck[801]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks
Feb 13 15:34:51.762516 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Feb 13 15:34:51.768809 systemd[1]: Mounting sysroot.mount - /sysroot...
Feb 13 15:34:51.819922 kernel: EXT4-fs (sda9): mounted filesystem 469d244b-00c1-45f4-bce0-c1d88e98a895 r/w with ordered data mode. Quota mode: none.
Feb 13 15:34:51.821387 systemd[1]: Mounted sysroot.mount - /sysroot.
Feb 13 15:34:51.823643 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Feb 13 15:34:51.832812 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Feb 13 15:34:51.836323 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Feb 13 15:34:51.841845 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Feb 13 15:34:51.844806 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Feb 13 15:34:51.844848 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Feb 13 15:34:51.847286 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Feb 13 15:34:51.853070 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (809)
Feb 13 15:34:51.855153 kernel: BTRFS info (device sda6): first mount of filesystem f03a17c4-6ca2-4f02-a9a3-5e771d63df74
Feb 13 15:34:51.855211 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Feb 13 15:34:51.855222 kernel: BTRFS info (device sda6): using free space tree
Feb 13 15:34:51.856937 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Feb 13 15:34:51.866233 kernel: BTRFS info (device sda6): enabling ssd optimizations
Feb 13 15:34:51.866296 kernel: BTRFS info (device sda6): auto enabling async discard
Feb 13 15:34:51.872079 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Feb 13 15:34:51.927539 initrd-setup-root[836]: cut: /sysroot/etc/passwd: No such file or directory
Feb 13 15:34:51.934691 coreos-metadata[811]: Feb 13 15:34:51.934 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/hostname: Attempt #1
Feb 13 15:34:51.937125 coreos-metadata[811]: Feb 13 15:34:51.937 INFO Fetch successful
Feb 13 15:34:51.937976 initrd-setup-root[843]: cut: /sysroot/etc/group: No such file or directory
Feb 13 15:34:51.939227 coreos-metadata[811]: Feb 13 15:34:51.938 INFO wrote hostname ci-4186-1-1-3-ffab21d6e1 to /sysroot/etc/hostname
Feb 13 15:34:51.940606 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Feb 13 15:34:51.947290 initrd-setup-root[851]: cut: /sysroot/etc/shadow: No such file or directory
Feb 13 15:34:51.953311 initrd-setup-root[858]: cut: /sysroot/etc/gshadow: No such file or directory
Feb 13 15:34:52.061844 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Feb 13 15:34:52.079454 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Feb 13 15:34:52.084798 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Feb 13 15:34:52.089718 kernel: BTRFS info (device sda6): last unmount of filesystem f03a17c4-6ca2-4f02-a9a3-5e771d63df74
Feb 13 15:34:52.114445 ignition[926]: INFO : Ignition 2.20.0
Feb 13 15:34:52.114445 ignition[926]: INFO : Stage: mount
Feb 13 15:34:52.116205 ignition[926]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 15:34:52.116205 ignition[926]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Feb 13 15:34:52.118297 ignition[926]: INFO : mount: mount passed
Feb 13 15:34:52.118297 ignition[926]: INFO : Ignition finished successfully
Feb 13 15:34:52.117941 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Feb 13 15:34:52.119166 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Feb 13 15:34:52.123805 systemd[1]: Starting ignition-files.service - Ignition (files)...
Feb 13 15:34:52.235498 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Feb 13 15:34:52.242829 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Feb 13 15:34:52.253652 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sda6 scanned by mount (937)
Feb 13 15:34:52.255719 kernel: BTRFS info (device sda6): first mount of filesystem f03a17c4-6ca2-4f02-a9a3-5e771d63df74
Feb 13 15:34:52.255759 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Feb 13 15:34:52.255782 kernel: BTRFS info (device sda6): using free space tree
Feb 13 15:34:52.259627 kernel: BTRFS info (device sda6): enabling ssd optimizations
Feb 13 15:34:52.259690 kernel: BTRFS info (device sda6): auto enabling async discard
Feb 13 15:34:52.263077 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Feb 13 15:34:52.285156 ignition[954]: INFO : Ignition 2.20.0
Feb 13 15:34:52.285944 ignition[954]: INFO : Stage: files
Feb 13 15:34:52.286626 ignition[954]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 15:34:52.287675 ignition[954]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Feb 13 15:34:52.289060 ignition[954]: DEBUG : files: compiled without relabeling support, skipping
Feb 13 15:34:52.291302 ignition[954]: INFO : files: ensureUsers: op(1): [started]  creating or modifying user "core"
Feb 13 15:34:52.292224 ignition[954]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Feb 13 15:34:52.296381 ignition[954]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Feb 13 15:34:52.297844 ignition[954]: INFO : files: ensureUsers: op(2): [started]  adding ssh keys to user "core"
Feb 13 15:34:52.299512 unknown[954]: wrote ssh authorized keys file for user: core
Feb 13 15:34:52.300870 ignition[954]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Feb 13 15:34:52.304177 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started]  writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Feb 13 15:34:52.306730 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Feb 13 15:34:52.357658 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Feb 13 15:34:52.572701 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Feb 13 15:34:52.572701 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started]  writing file "/sysroot/opt/bin/cilium.tar.gz"
Feb 13 15:34:52.575291 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1
Feb 13 15:34:52.776885 systemd-networkd[774]: eth1: Gained IPv6LL
Feb 13 15:34:53.030873 systemd-networkd[774]: eth0: Gained IPv6LL
Feb 13 15:34:53.144111 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Feb 13 15:34:53.230489 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Feb 13 15:34:53.230489 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started]  writing file "/sysroot/home/core/install.sh"
Feb 13 15:34:53.230489 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Feb 13 15:34:53.230489 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started]  writing file "/sysroot/home/core/nginx.yaml"
Feb 13 15:34:53.230489 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Feb 13 15:34:53.230489 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started]  writing file "/sysroot/home/core/nfs-pod.yaml"
Feb 13 15:34:53.230489 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Feb 13 15:34:53.230489 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started]  writing file "/sysroot/home/core/nfs-pvc.yaml"
Feb 13 15:34:53.230489 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Feb 13 15:34:53.243567 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started]  writing file "/sysroot/etc/flatcar/update.conf"
Feb 13 15:34:53.243567 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Feb 13 15:34:53.243567 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started]  writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Feb 13 15:34:53.243567 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Feb 13 15:34:53.243567 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started]  writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Feb 13 15:34:53.243567 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-arm64.raw: attempt #1
Feb 13 15:34:53.742082 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Feb 13 15:34:54.076891 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Feb 13 15:34:54.076891 ignition[954]: INFO : files: op(c): [started]  processing unit "prepare-helm.service"
Feb 13 15:34:54.081853 ignition[954]: INFO : files: op(c): op(d): [started]  writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Feb 13 15:34:54.081853 ignition[954]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Feb 13 15:34:54.081853 ignition[954]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Feb 13 15:34:54.081853 ignition[954]: INFO : files: op(e): [started]  processing unit "coreos-metadata.service"
Feb 13 15:34:54.081853 ignition[954]: INFO : files: op(e): op(f): [started]  writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Feb 13 15:34:54.081853 ignition[954]: INFO : files: op(e): op(f): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Feb 13 15:34:54.081853 ignition[954]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
Feb 13 15:34:54.081853 ignition[954]: INFO : files: op(10): [started]  setting preset to enabled for "prepare-helm.service"
Feb 13 15:34:54.081853 ignition[954]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service"
Feb 13 15:34:54.081853 ignition[954]: INFO : files: createResultFile: createFiles: op(11): [started]  writing file "/sysroot/etc/.ignition-result.json"
Feb 13 15:34:54.081853 ignition[954]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json"
Feb 13 15:34:54.081853 ignition[954]: INFO : files: files passed
Feb 13 15:34:54.081853 ignition[954]: INFO : Ignition finished successfully
Feb 13 15:34:54.082778 systemd[1]: Finished ignition-files.service - Ignition (files).
Feb 13 15:34:54.092794 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Feb 13 15:34:54.095821 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Feb 13 15:34:54.100501 systemd[1]: ignition-quench.service: Deactivated successfully.
Feb 13 15:34:54.101687 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Feb 13 15:34:54.119397 initrd-setup-root-after-ignition[982]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 15:34:54.119397 initrd-setup-root-after-ignition[982]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 15:34:54.121937 initrd-setup-root-after-ignition[986]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 15:34:54.123218 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Feb 13 15:34:54.125632 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Feb 13 15:34:54.131790 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Feb 13 15:34:54.172946 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Feb 13 15:34:54.173111 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Feb 13 15:34:54.177392 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Feb 13 15:34:54.178379 systemd[1]: Reached target initrd.target - Initrd Default Target.
Feb 13 15:34:54.179735 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Feb 13 15:34:54.184908 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Feb 13 15:34:54.202253 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Feb 13 15:34:54.210870 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Feb 13 15:34:54.225510 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Feb 13 15:34:54.226455 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 15:34:54.228084 systemd[1]: Stopped target timers.target - Timer Units.
Feb 13 15:34:54.229543 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Feb 13 15:34:54.229750 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Feb 13 15:34:54.231280 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Feb 13 15:34:54.232502 systemd[1]: Stopped target basic.target - Basic System.
Feb 13 15:34:54.233494 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Feb 13 15:34:54.234516 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Feb 13 15:34:54.235755 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Feb 13 15:34:54.236882 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Feb 13 15:34:54.237931 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Feb 13 15:34:54.239181 systemd[1]: Stopped target sysinit.target - System Initialization.
Feb 13 15:34:54.240366 systemd[1]: Stopped target local-fs.target - Local File Systems.
Feb 13 15:34:54.241402 systemd[1]: Stopped target swap.target - Swaps.
Feb 13 15:34:54.242208 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Feb 13 15:34:54.242372 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Feb 13 15:34:54.243681 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Feb 13 15:34:54.244799 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 15:34:54.245932 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Feb 13 15:34:54.246426 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 15:34:54.247363 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Feb 13 15:34:54.247553 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Feb 13 15:34:54.249014 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Feb 13 15:34:54.249261 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Feb 13 15:34:54.250345 systemd[1]: ignition-files.service: Deactivated successfully.
Feb 13 15:34:54.250513 systemd[1]: Stopped ignition-files.service - Ignition (files).
Feb 13 15:34:54.251354 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Feb 13 15:34:54.251527 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Feb 13 15:34:54.266821 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Feb 13 15:34:54.268342 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Feb 13 15:34:54.268813 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 15:34:54.275857 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Feb 13 15:34:54.276834 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Feb 13 15:34:54.276996 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 15:34:54.279881 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Feb 13 15:34:54.279988 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Feb 13 15:34:54.288860 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Feb 13 15:34:54.290358 ignition[1006]: INFO : Ignition 2.20.0
Feb 13 15:34:54.290358 ignition[1006]: INFO : Stage: umount
Feb 13 15:34:54.290358 ignition[1006]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 15:34:54.290358 ignition[1006]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Feb 13 15:34:54.288969 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Feb 13 15:34:54.295941 ignition[1006]: INFO : umount: umount passed
Feb 13 15:34:54.295941 ignition[1006]: INFO : Ignition finished successfully
Feb 13 15:34:54.295981 systemd[1]: ignition-mount.service: Deactivated successfully.
Feb 13 15:34:54.296117 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Feb 13 15:34:54.300163 systemd[1]: ignition-disks.service: Deactivated successfully.
Feb 13 15:34:54.300279 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Feb 13 15:34:54.302146 systemd[1]: ignition-kargs.service: Deactivated successfully.
Feb 13 15:34:54.302743 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Feb 13 15:34:54.304888 systemd[1]: ignition-fetch.service: Deactivated successfully.
Feb 13 15:34:54.304982 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Feb 13 15:34:54.305760 systemd[1]: Stopped target network.target - Network.
Feb 13 15:34:54.307761 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Feb 13 15:34:54.307833 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Feb 13 15:34:54.309962 systemd[1]: Stopped target paths.target - Path Units.
Feb 13 15:34:54.311721 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Feb 13 15:34:54.315762 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 15:34:54.318303 systemd[1]: Stopped target slices.target - Slice Units.
Feb 13 15:34:54.319623 systemd[1]: Stopped target sockets.target - Socket Units.
Feb 13 15:34:54.321524 systemd[1]: iscsid.socket: Deactivated successfully.
Feb 13 15:34:54.321608 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Feb 13 15:34:54.323270 systemd[1]: iscsiuio.socket: Deactivated successfully.
Feb 13 15:34:54.323332 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Feb 13 15:34:54.324638 systemd[1]: ignition-setup.service: Deactivated successfully.
Feb 13 15:34:54.324718 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Feb 13 15:34:54.326025 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Feb 13 15:34:54.326095 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Feb 13 15:34:54.327903 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Feb 13 15:34:54.329426 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Feb 13 15:34:54.331675 systemd-networkd[774]: eth1: DHCPv6 lease lost
Feb 13 15:34:54.332579 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Feb 13 15:34:54.333384 systemd[1]: sysroot-boot.service: Deactivated successfully.
Feb 13 15:34:54.333547 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Feb 13 15:34:54.335336 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Feb 13 15:34:54.335778 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Feb 13 15:34:54.337692 systemd-networkd[774]: eth0: DHCPv6 lease lost
Feb 13 15:34:54.339989 systemd[1]: systemd-networkd.service: Deactivated successfully.
Feb 13 15:34:54.340118 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Feb 13 15:34:54.341493 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Feb 13 15:34:54.341536 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 15:34:54.346800 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Feb 13 15:34:54.347315 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Feb 13 15:34:54.347384 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Feb 13 15:34:54.352424 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 15:34:54.353872 systemd[1]: systemd-resolved.service: Deactivated successfully.
Feb 13 15:34:54.354450 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Feb 13 15:34:54.365078 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Feb 13 15:34:54.365238 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Feb 13 15:34:54.368949 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Feb 13 15:34:54.369046 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Feb 13 15:34:54.371745 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Feb 13 15:34:54.371806 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 15:34:54.373541 systemd[1]: network-cleanup.service: Deactivated successfully.
Feb 13 15:34:54.373673 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Feb 13 15:34:54.375334 systemd[1]: systemd-udevd.service: Deactivated successfully.
Feb 13 15:34:54.375500 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 15:34:54.377422 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Feb 13 15:34:54.377495 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Feb 13 15:34:54.379131 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Feb 13 15:34:54.379165 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 15:34:54.381119 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Feb 13 15:34:54.381167 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Feb 13 15:34:54.383618 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Feb 13 15:34:54.383667 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Feb 13 15:34:54.385427 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Feb 13 15:34:54.385474 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 15:34:54.392810 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Feb 13 15:34:54.393422 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Feb 13 15:34:54.393486 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 15:34:54.395992 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 13 15:34:54.396046 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 15:34:54.402607 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Feb 13 15:34:54.402743 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Feb 13 15:34:54.404328 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Feb 13 15:34:54.408784 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Feb 13 15:34:54.419067 systemd[1]: Switching root.
Feb 13 15:34:54.445791 systemd-journald[237]: Journal stopped
Feb 13 15:34:55.445820 systemd-journald[237]: Received SIGTERM from PID 1 (systemd).
Feb 13 15:34:55.445898 kernel: SELinux:  policy capability network_peer_controls=1
Feb 13 15:34:55.445915 kernel: SELinux:  policy capability open_perms=1
Feb 13 15:34:55.445929 kernel: SELinux:  policy capability extended_socket_class=1
Feb 13 15:34:55.445938 kernel: SELinux:  policy capability always_check_network=0
Feb 13 15:34:55.445948 kernel: SELinux:  policy capability cgroup_seclabel=1
Feb 13 15:34:55.445961 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Feb 13 15:34:55.445971 kernel: SELinux:  policy capability genfs_seclabel_symlinks=0
Feb 13 15:34:55.445983 kernel: SELinux:  policy capability ioctl_skip_cloexec=0
Feb 13 15:34:55.445993 kernel: audit: type=1403 audit(1739460894.638:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Feb 13 15:34:55.446007 systemd[1]: Successfully loaded SELinux policy in 36.670ms.
Feb 13 15:34:55.446027 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 11.157ms.
Feb 13 15:34:55.446040 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Feb 13 15:34:55.446051 systemd[1]: Detected virtualization kvm.
Feb 13 15:34:55.446062 systemd[1]: Detected architecture arm64.
Feb 13 15:34:55.446073 systemd[1]: Detected first boot.
Feb 13 15:34:55.446084 systemd[1]: Hostname set to .
Feb 13 15:34:55.446095 systemd[1]: Initializing machine ID from VM UUID.
Feb 13 15:34:55.446106 zram_generator::config[1049]: No configuration found.
Feb 13 15:34:55.446119 systemd[1]: Populated /etc with preset unit settings.
Feb 13 15:34:55.446130 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Feb 13 15:34:55.446140 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Feb 13 15:34:55.446151 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Feb 13 15:34:55.446162 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Feb 13 15:34:55.446176 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Feb 13 15:34:55.446187 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Feb 13 15:34:55.446197 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Feb 13 15:34:55.446210 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Feb 13 15:34:55.446222 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Feb 13 15:34:55.446233 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Feb 13 15:34:55.446243 systemd[1]: Created slice user.slice - User and Session Slice.
Feb 13 15:34:55.446254 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 15:34:55.446265 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 15:34:55.446275 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Feb 13 15:34:55.446286 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Feb 13 15:34:55.446297 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Feb 13 15:34:55.446312 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Feb 13 15:34:55.446323 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Feb 13 15:34:55.446333 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 15:34:55.446344 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Feb 13 15:34:55.446355 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Feb 13 15:34:55.446365 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Feb 13 15:34:55.446377 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Feb 13 15:34:55.446389 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 15:34:55.446411 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Feb 13 15:34:55.446426 systemd[1]: Reached target slices.target - Slice Units.
Feb 13 15:34:55.446437 systemd[1]: Reached target swap.target - Swaps.
Feb 13 15:34:55.446448 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Feb 13 15:34:55.446459 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Feb 13 15:34:55.446469 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 15:34:55.446480 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Feb 13 15:34:55.446490 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 15:34:55.446503 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Feb 13 15:34:55.446514 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Feb 13 15:34:55.446524 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Feb 13 15:34:55.446535 systemd[1]: Mounting media.mount - External Media Directory...
Feb 13 15:34:55.446545 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Feb 13 15:34:55.446559 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Feb 13 15:34:55.446573 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Feb 13 15:34:55.446584 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Feb 13 15:34:55.449656 systemd[1]: Reached target machines.target - Containers.
Feb 13 15:34:55.449693 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Feb 13 15:34:55.449705 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Feb 13 15:34:55.449724 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Feb 13 15:34:55.449737 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Feb 13 15:34:55.449748 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Feb 13 15:34:55.449765 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Feb 13 15:34:55.449776 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Feb 13 15:34:55.449787 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Feb 13 15:34:55.449797 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Feb 13 15:34:55.449809 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Feb 13 15:34:55.449820 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Feb 13 15:34:55.449831 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Feb 13 15:34:55.449841 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Feb 13 15:34:55.449853 systemd[1]: Stopped systemd-fsck-usr.service.
Feb 13 15:34:55.449864 systemd[1]: Starting systemd-journald.service - Journal Service...
Feb 13 15:34:55.449876 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Feb 13 15:34:55.449887 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Feb 13 15:34:55.449897 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Feb 13 15:34:55.449908 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Feb 13 15:34:55.449919 systemd[1]: verity-setup.service: Deactivated successfully.
Feb 13 15:34:55.449930 systemd[1]: Stopped verity-setup.service.
Feb 13 15:34:55.449943 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Feb 13 15:34:55.449954 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Feb 13 15:34:55.449965 systemd[1]: Mounted media.mount - External Media Directory.
Feb 13 15:34:55.449976 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Feb 13 15:34:55.449986 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Feb 13 15:34:55.449997 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Feb 13 15:34:55.450009 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 15:34:55.450021 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Feb 13 15:34:55.450031 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Feb 13 15:34:55.450042 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb 13 15:34:55.450052 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Feb 13 15:34:55.450063 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 13 15:34:55.450074 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Feb 13 15:34:55.450085 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Feb 13 15:34:55.450096 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Feb 13 15:34:55.450109 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Feb 13 15:34:55.450121 kernel: loop: module loaded
Feb 13 15:34:55.450167 systemd-journald[1119]: Collecting audit messages is disabled.
Feb 13 15:34:55.450194 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Feb 13 15:34:55.450207 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Feb 13 15:34:55.450218 systemd[1]: modprobe@loop.service: Deactivated successfully.
Feb 13 15:34:55.450228 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Feb 13 15:34:55.450240 systemd-journald[1119]: Journal started
Feb 13 15:34:55.450267 systemd-journald[1119]: Runtime Journal (/run/log/journal/90ba851ac4694578983151c1b5c7910a) is 8.0M, max 76.6M, 68.6M free.
Feb 13 15:34:55.171053 systemd[1]: Queued start job for default target multi-user.target.
Feb 13 15:34:55.192724 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Feb 13 15:34:55.193204 systemd[1]: systemd-journald.service: Deactivated successfully.
Feb 13 15:34:55.451718 systemd[1]: Started systemd-journald.service - Journal Service.
Feb 13 15:34:55.462121 kernel: fuse: init (API version 7.39)
Feb 13 15:34:55.462816 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Feb 13 15:34:55.465063 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Feb 13 15:34:55.465801 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Feb 13 15:34:55.473362 systemd[1]: Reached target network-pre.target - Preparation for Network.
Feb 13 15:34:55.476627 kernel: ACPI: bus type drm_connector registered
Feb 13 15:34:55.481794 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Feb 13 15:34:55.484714 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Feb 13 15:34:55.484760 systemd[1]: Reached target local-fs.target - Local File Systems.
Feb 13 15:34:55.487324 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Feb 13 15:34:55.493900 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Feb 13 15:34:55.497836 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Feb 13 15:34:55.500845 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 13 15:34:55.511919 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Feb 13 15:34:55.518467 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Feb 13 15:34:55.520725 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Feb 13 15:34:55.528817 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Feb 13 15:34:55.529992 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Feb 13 15:34:55.532880 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Feb 13 15:34:55.536866 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Feb 13 15:34:55.541776 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Feb 13 15:34:55.547927 systemd[1]: modprobe@drm.service: Deactivated successfully.
Feb 13 15:34:55.548194 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Feb 13 15:34:55.551156 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Feb 13 15:34:55.554172 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Feb 13 15:34:55.575262 systemd-journald[1119]: Time spent on flushing to /var/log/journal/90ba851ac4694578983151c1b5c7910a is 78.149ms for 1132 entries.
Feb 13 15:34:55.575262 systemd-journald[1119]: System Journal (/var/log/journal/90ba851ac4694578983151c1b5c7910a) is 8.0M, max 584.8M, 576.8M free.
Feb 13 15:34:55.669045 systemd-journald[1119]: Received client request to flush runtime journal.
Feb 13 15:34:55.669104 kernel: loop0: detected capacity change from 0 to 194096
Feb 13 15:34:55.669245 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Feb 13 15:34:55.577471 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Feb 13 15:34:55.581221 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Feb 13 15:34:55.586857 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Feb 13 15:34:55.591454 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 15:34:55.601255 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Feb 13 15:34:55.645078 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Feb 13 15:34:55.660641 udevadm[1173]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Feb 13 15:34:55.671336 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Feb 13 15:34:55.674888 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Feb 13 15:34:55.676388 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Feb 13 15:34:55.681376 kernel: loop1: detected capacity change from 0 to 8
Feb 13 15:34:55.697643 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Feb 13 15:34:55.703715 kernel: loop2: detected capacity change from 0 to 113552
Feb 13 15:34:55.707782 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Feb 13 15:34:55.739659 kernel: loop3: detected capacity change from 0 to 116784
Feb 13 15:34:55.745689 systemd-tmpfiles[1184]: ACLs are not supported, ignoring.
Feb 13 15:34:55.746287 systemd-tmpfiles[1184]: ACLs are not supported, ignoring.
Feb 13 15:34:55.759750 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 15:34:55.792661 kernel: loop4: detected capacity change from 0 to 194096
Feb 13 15:34:55.821641 kernel: loop5: detected capacity change from 0 to 8
Feb 13 15:34:55.823627 kernel: loop6: detected capacity change from 0 to 113552
Feb 13 15:34:55.839649 kernel: loop7: detected capacity change from 0 to 116784
Feb 13 15:34:55.852360 (sd-merge)[1190]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-hetzner'.
Feb 13 15:34:55.854124 (sd-merge)[1190]: Merged extensions into '/usr'.
Feb 13 15:34:55.860803 systemd[1]: Reloading requested from client PID 1163 ('systemd-sysext') (unit systemd-sysext.service)...
Feb 13 15:34:55.861774 systemd[1]: Reloading...
Feb 13 15:34:55.932625 zram_generator::config[1216]: No configuration found.
Feb 13 15:34:56.144368 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 13 15:34:56.171846 ldconfig[1158]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Feb 13 15:34:56.199528 systemd[1]: Reloading finished in 337 ms.
Feb 13 15:34:56.227765 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Feb 13 15:34:56.229011 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Feb 13 15:34:56.242994 systemd[1]: Starting ensure-sysext.service...
Feb 13 15:34:56.246024 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Feb 13 15:34:56.259808 systemd[1]: Reloading requested from client PID 1253 ('systemctl') (unit ensure-sysext.service)...
Feb 13 15:34:56.259836 systemd[1]: Reloading...
Feb 13 15:34:56.286682 systemd-tmpfiles[1254]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Feb 13 15:34:56.286980 systemd-tmpfiles[1254]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Feb 13 15:34:56.288807 systemd-tmpfiles[1254]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Feb 13 15:34:56.289083 systemd-tmpfiles[1254]: ACLs are not supported, ignoring.
Feb 13 15:34:56.289137 systemd-tmpfiles[1254]: ACLs are not supported, ignoring.
Feb 13 15:34:56.293023 systemd-tmpfiles[1254]: Detected autofs mount point /boot during canonicalization of boot.
Feb 13 15:34:56.293168 systemd-tmpfiles[1254]: Skipping /boot
Feb 13 15:34:56.305176 systemd-tmpfiles[1254]: Detected autofs mount point /boot during canonicalization of boot.
Feb 13 15:34:56.305348 systemd-tmpfiles[1254]: Skipping /boot
Feb 13 15:34:56.341626 zram_generator::config[1281]: No configuration found.
Feb 13 15:34:56.448316 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 13 15:34:56.498772 systemd[1]: Reloading finished in 238 ms.
Feb 13 15:34:56.520209 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Feb 13 15:34:56.521562 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 15:34:56.535953 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Feb 13 15:34:56.550045 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Feb 13 15:34:56.555817 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Feb 13 15:34:56.566243 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Feb 13 15:34:56.572680 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 15:34:56.583797 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Feb 13 15:34:56.601209 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Feb 13 15:34:56.607237 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Feb 13 15:34:56.624965 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Feb 13 15:34:56.631740 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Feb 13 15:34:56.635505 systemd-udevd[1330]: Using default interface naming scheme 'v255'.
Feb 13 15:34:56.641289 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Feb 13 15:34:56.643810 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 13 15:34:56.646723 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Feb 13 15:34:56.649086 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb 13 15:34:56.649912 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Feb 13 15:34:56.662679 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Feb 13 15:34:56.664533 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Feb 13 15:34:56.669188 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 13 15:34:56.670675 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Feb 13 15:34:56.672162 systemd[1]: modprobe@loop.service: Deactivated successfully.
Feb 13 15:34:56.672709 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Feb 13 15:34:56.683967 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 15:34:56.693667 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Feb 13 15:34:56.703875 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Feb 13 15:34:56.719992 augenrules[1373]: No rules
Feb 13 15:34:56.728945 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Feb 13 15:34:56.734902 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Feb 13 15:34:56.736735 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 13 15:34:56.740628 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Feb 13 15:34:56.743361 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Feb 13 15:34:56.744559 systemd[1]: audit-rules.service: Deactivated successfully.
Feb 13 15:34:56.746834 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Feb 13 15:34:56.747892 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Feb 13 15:34:56.751162 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Feb 13 15:34:56.752243 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 13 15:34:56.752377 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Feb 13 15:34:56.762894 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb 13 15:34:56.763066 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Feb 13 15:34:56.781859 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Feb 13 15:34:56.782663 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Feb 13 15:34:56.787798 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Feb 13 15:34:56.793773 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Feb 13 15:34:56.799787 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Feb 13 15:34:56.800796 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 13 15:34:56.800870 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Feb 13 15:34:56.802005 systemd[1]: Finished ensure-sysext.service.
Feb 13 15:34:56.803220 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb 13 15:34:56.803366 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Feb 13 15:34:56.820350 systemd[1]: modprobe@loop.service: Deactivated successfully.
Feb 13 15:34:56.820567 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Feb 13 15:34:56.826630 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Feb 13 15:34:56.841156 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Feb 13 15:34:56.855572 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 13 15:34:56.855752 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Feb 13 15:34:56.858986 systemd[1]: modprobe@drm.service: Deactivated successfully.
Feb 13 15:34:56.859137 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Feb 13 15:34:56.870148 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Feb 13 15:34:56.908557 augenrules[1395]: /sbin/augenrules: No change
Feb 13 15:34:56.910967 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped.
Feb 13 15:34:56.934900 augenrules[1427]: No rules
Feb 13 15:34:56.939660 systemd[1]: audit-rules.service: Deactivated successfully.
Feb 13 15:34:56.939859 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Feb 13 15:34:56.973947 systemd-resolved[1324]: Positive Trust Anchors:
Feb 13 15:34:56.974349 systemd-resolved[1324]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 13 15:34:56.974401 systemd-resolved[1324]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Feb 13 15:34:56.977040 systemd-networkd[1387]: lo: Link UP
Feb 13 15:34:56.977049 systemd-networkd[1387]: lo: Gained carrier
Feb 13 15:34:56.980928 systemd-networkd[1387]: Enumeration completed
Feb 13 15:34:56.981060 systemd[1]: Started systemd-networkd.service - Network Configuration.
Feb 13 15:34:56.983863 systemd-resolved[1324]: Using system hostname 'ci-4186-1-1-3-ffab21d6e1'.
Feb 13 15:34:56.986758 systemd-networkd[1387]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 15:34:56.986771 systemd-networkd[1387]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 13 15:34:56.990887 systemd-networkd[1387]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 15:34:56.990898 systemd-networkd[1387]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 13 15:34:56.992023 systemd-networkd[1387]: eth0: Link UP
Feb 13 15:34:56.992033 systemd-networkd[1387]: eth0: Gained carrier
Feb 13 15:34:56.992052 systemd-networkd[1387]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 15:34:57.000233 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Feb 13 15:34:57.001030 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Feb 13 15:34:57.001776 systemd[1]: Reached target network.target - Network.
Feb 13 15:34:57.002560 systemd-networkd[1387]: eth1: Link UP
Feb 13 15:34:57.002568 systemd-networkd[1387]: eth1: Gained carrier
Feb 13 15:34:57.002586 systemd-networkd[1387]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 15:34:57.002851 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Feb 13 15:34:57.020720 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Feb 13 15:34:57.023750 systemd[1]: Reached target time-set.target - System Time Set.
Feb 13 15:34:57.026735 systemd-networkd[1387]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 15:34:57.037686 systemd-networkd[1387]: eth1: DHCPv4 address 10.0.0.3/32, gateway 10.0.0.1 acquired from 10.0.0.1
Feb 13 15:34:57.039087 systemd-timesyncd[1408]: Network configuration changed, trying to establish connection.
Feb 13 15:34:57.064491 kernel: mousedev: PS/2 mouse device common for all mice
Feb 13 15:34:57.064628 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 41 scanned by (udev-worker) (1356)
Feb 13 15:34:57.071803 systemd-networkd[1387]: eth0: DHCPv4 address 49.13.212.147/32, gateway 172.31.1.1 acquired from 172.31.1.1
Feb 13 15:34:57.072697 systemd-timesyncd[1408]: Network configuration changed, trying to establish connection.
Feb 13 15:34:57.086380 systemd-networkd[1387]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 15:34:57.095987 systemd[1]: Condition check resulted in dev-virtio\x2dports-org.qemu.guest_agent.0.device - /dev/virtio-ports/org.qemu.guest_agent.0 being skipped.
Feb 13 15:34:57.101031 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Feb 13 15:34:57.107670 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Feb 13 15:34:57.110862 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Feb 13 15:34:57.123803 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Feb 13 15:34:57.124994 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 13 15:34:57.125342 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Feb 13 15:34:57.140079 systemd[1]: modprobe@loop.service: Deactivated successfully.
Feb 13 15:34:57.140290 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Feb 13 15:34:57.145588 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 13 15:34:57.147684 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Feb 13 15:34:57.148704 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Feb 13 15:34:57.158753 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb 13 15:34:57.159668 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Feb 13 15:34:57.162876 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
Feb 13 15:34:57.170517 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Feb 13 15:34:57.171284 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Feb 13 15:34:57.178831 kernel: [drm] pci: virtio-gpu-pci detected at 0000:00:01.0
Feb 13 15:34:57.178908 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Feb 13 15:34:57.178947 kernel: [drm] features: -context_init
Feb 13 15:34:57.196248 kernel: [drm] number of scanouts: 1
Feb 13 15:34:57.196346 kernel: [drm] number of cap sets: 0
Feb 13 15:34:57.197625 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:01.0 on minor 0
Feb 13 15:34:57.200134 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 15:34:57.202024 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Feb 13 15:34:57.214655 kernel: Console: switching to colour frame buffer device 160x50
Feb 13 15:34:57.231204 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 13 15:34:57.231478 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 15:34:57.232188 kernel: virtio-pci 0000:00:01.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Feb 13 15:34:57.237986 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 15:34:57.309698 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 15:34:57.369651 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Feb 13 15:34:57.374798 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Feb 13 15:34:57.391660 lvm[1470]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Feb 13 15:34:57.417955 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Feb 13 15:34:57.421206 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Feb 13 15:34:57.422232 systemd[1]: Reached target sysinit.target - System Initialization.
Feb 13 15:34:57.423199 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Feb 13 15:34:57.424100 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Feb 13 15:34:57.425042 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Feb 13 15:34:57.425758 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Feb 13 15:34:57.426489 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Feb 13 15:34:57.427250 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Feb 13 15:34:57.427281 systemd[1]: Reached target paths.target - Path Units.
Feb 13 15:34:57.427807 systemd[1]: Reached target timers.target - Timer Units.
Feb 13 15:34:57.429552 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Feb 13 15:34:57.431857 systemd[1]: Starting docker.socket - Docker Socket for the API...
Feb 13 15:34:57.436677 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Feb 13 15:34:57.439014 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Feb 13 15:34:57.441732 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Feb 13 15:34:57.443308 systemd[1]: Reached target sockets.target - Socket Units.
Feb 13 15:34:57.444657 systemd[1]: Reached target basic.target - Basic System.
Feb 13 15:34:57.445806 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Feb 13 15:34:57.445839 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Feb 13 15:34:57.448781 systemd[1]: Starting containerd.service - containerd container runtime...
Feb 13 15:34:57.449884 lvm[1474]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Feb 13 15:34:57.458897 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Feb 13 15:34:57.462948 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Feb 13 15:34:57.465927 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Feb 13 15:34:57.468836 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Feb 13 15:34:57.470727 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Feb 13 15:34:57.472831 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Feb 13 15:34:57.476759 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Feb 13 15:34:57.482803 systemd[1]: Started qemu-guest-agent.service - QEMU Guest Agent.
Feb 13 15:34:57.487880 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Feb 13 15:34:57.492525 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Feb 13 15:34:57.497787 systemd[1]: Starting systemd-logind.service - User Login Management...
Feb 13 15:34:57.500927 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Feb 13 15:34:57.501453 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Feb 13 15:34:57.503898 systemd[1]: Starting update-engine.service - Update Engine...
Feb 13 15:34:57.506104 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Feb 13 15:34:57.508483 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Feb 13 15:34:57.527018 jq[1478]: false Feb 13 15:34:57.531349 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Feb 13 15:34:57.531660 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Feb 13 15:34:57.550609 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Feb 13 15:34:57.550820 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Feb 13 15:34:57.563214 jq[1490]: true Feb 13 15:34:57.567929 (ntainerd)[1505]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Feb 13 15:34:57.580140 jq[1509]: true Feb 13 15:34:57.584676 update_engine[1489]: I20250213 15:34:57.583165 1489 main.cc:92] Flatcar Update Engine starting Feb 13 15:34:57.590684 dbus-daemon[1477]: [system] SELinux support is enabled Feb 13 15:34:57.591917 systemd[1]: Started dbus.service - D-Bus System Message Bus. Feb 13 15:34:57.595762 systemd[1]: motdgen.service: Deactivated successfully. Feb 13 15:34:57.597066 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Feb 13 15:34:57.601056 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Feb 13 15:34:57.603748 tar[1497]: linux-arm64/helm Feb 13 15:34:57.601121 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Feb 13 15:34:57.602244 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). 
Feb 13 15:34:57.602267 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Feb 13 15:34:57.612748 extend-filesystems[1481]: Found loop4 Feb 13 15:34:57.612748 extend-filesystems[1481]: Found loop5 Feb 13 15:34:57.612748 extend-filesystems[1481]: Found loop6 Feb 13 15:34:57.612748 extend-filesystems[1481]: Found loop7 Feb 13 15:34:57.612748 extend-filesystems[1481]: Found sda Feb 13 15:34:57.612748 extend-filesystems[1481]: Found sda1 Feb 13 15:34:57.612748 extend-filesystems[1481]: Found sda2 Feb 13 15:34:57.612748 extend-filesystems[1481]: Found sda3 Feb 13 15:34:57.612748 extend-filesystems[1481]: Found usr Feb 13 15:34:57.612748 extend-filesystems[1481]: Found sda4 Feb 13 15:34:57.612748 extend-filesystems[1481]: Found sda6 Feb 13 15:34:57.612748 extend-filesystems[1481]: Found sda7 Feb 13 15:34:57.612748 extend-filesystems[1481]: Found sda9 Feb 13 15:34:57.612748 extend-filesystems[1481]: Checking size of /dev/sda9 Feb 13 15:34:57.613782 systemd[1]: Started update-engine.service - Update Engine. Feb 13 15:34:57.665281 coreos-metadata[1476]: Feb 13 15:34:57.616 INFO Fetching http://169.254.169.254/hetzner/v1/metadata: Attempt #1 Feb 13 15:34:57.665281 coreos-metadata[1476]: Feb 13 15:34:57.625 INFO Fetch successful Feb 13 15:34:57.665281 coreos-metadata[1476]: Feb 13 15:34:57.625 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/private-networks: Attempt #1 Feb 13 15:34:57.665281 coreos-metadata[1476]: Feb 13 15:34:57.626 INFO Fetch successful Feb 13 15:34:57.665668 update_engine[1489]: I20250213 15:34:57.615851 1489 update_check_scheduler.cc:74] Next update check in 3m56s Feb 13 15:34:57.637917 systemd[1]: Started locksmithd.service - Cluster reboot manager. 
Feb 13 15:34:57.674717 extend-filesystems[1481]: Resized partition /dev/sda9 Feb 13 15:34:57.683915 extend-filesystems[1534]: resize2fs 1.47.1 (20-May-2024) Feb 13 15:34:57.696836 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 9393147 blocks Feb 13 15:34:57.739679 systemd-logind[1487]: New seat seat0. Feb 13 15:34:57.761342 systemd-logind[1487]: Watching system buttons on /dev/input/event0 (Power Button) Feb 13 15:34:57.761366 systemd-logind[1487]: Watching system buttons on /dev/input/event2 (QEMU QEMU USB Keyboard) Feb 13 15:34:57.761846 systemd[1]: Started systemd-logind.service - User Login Management. Feb 13 15:34:57.766442 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Feb 13 15:34:57.771547 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Feb 13 15:34:57.802448 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 41 scanned by (udev-worker) (1356) Feb 13 15:34:57.802510 bash[1541]: Updated "/home/core/.ssh/authorized_keys" Feb 13 15:34:57.803686 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Feb 13 15:34:57.837124 systemd[1]: Starting sshkeys.service... Feb 13 15:34:57.864005 locksmithd[1517]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Feb 13 15:34:57.868080 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Feb 13 15:34:57.878033 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... 
Feb 13 15:34:57.885658 kernel: EXT4-fs (sda9): resized filesystem to 9393147 Feb 13 15:34:57.898101 coreos-metadata[1559]: Feb 13 15:34:57.897 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/public-keys: Attempt #1 Feb 13 15:34:57.908172 coreos-metadata[1559]: Feb 13 15:34:57.899 INFO Fetch successful Feb 13 15:34:57.909497 unknown[1559]: wrote ssh authorized keys file for user: core Feb 13 15:34:57.911462 systemd[1]: extend-filesystems.service: Deactivated successfully. Feb 13 15:34:57.912613 extend-filesystems[1534]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Feb 13 15:34:57.912613 extend-filesystems[1534]: old_desc_blocks = 1, new_desc_blocks = 5 Feb 13 15:34:57.912613 extend-filesystems[1534]: The filesystem on /dev/sda9 is now 9393147 (4k) blocks long. Feb 13 15:34:57.911682 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Feb 13 15:34:57.921844 extend-filesystems[1481]: Resized filesystem in /dev/sda9 Feb 13 15:34:57.921844 extend-filesystems[1481]: Found sr0 Feb 13 15:34:57.951094 containerd[1505]: time="2025-02-13T15:34:57.951004400Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Feb 13 15:34:57.964569 update-ssh-keys[1565]: Updated "/home/core/.ssh/authorized_keys" Feb 13 15:34:57.964931 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Feb 13 15:34:57.969091 systemd[1]: Finished sshkeys.service. Feb 13 15:34:58.002709 containerd[1505]: time="2025-02-13T15:34:58.002654520Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Feb 13 15:34:58.006790 containerd[1505]: time="2025-02-13T15:34:58.006573120Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.71-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Feb 13 15:34:58.008725 containerd[1505]: time="2025-02-13T15:34:58.006995960Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Feb 13 15:34:58.008725 containerd[1505]: time="2025-02-13T15:34:58.007048280Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Feb 13 15:34:58.008725 containerd[1505]: time="2025-02-13T15:34:58.007362880Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Feb 13 15:34:58.008725 containerd[1505]: time="2025-02-13T15:34:58.007424520Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Feb 13 15:34:58.008725 containerd[1505]: time="2025-02-13T15:34:58.007561360Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 15:34:58.008725 containerd[1505]: time="2025-02-13T15:34:58.007589840Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Feb 13 15:34:58.008725 containerd[1505]: time="2025-02-13T15:34:58.007984120Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 15:34:58.008725 containerd[1505]: time="2025-02-13T15:34:58.008019200Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." 
type=io.containerd.snapshotter.v1 Feb 13 15:34:58.008725 containerd[1505]: time="2025-02-13T15:34:58.008051120Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 15:34:58.008725 containerd[1505]: time="2025-02-13T15:34:58.008076560Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Feb 13 15:34:58.008725 containerd[1505]: time="2025-02-13T15:34:58.008253160Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Feb 13 15:34:58.009941 containerd[1505]: time="2025-02-13T15:34:58.009827120Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Feb 13 15:34:58.010379 containerd[1505]: time="2025-02-13T15:34:58.010334680Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 15:34:58.010577 containerd[1505]: time="2025-02-13T15:34:58.010538680Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Feb 13 15:34:58.010952 containerd[1505]: time="2025-02-13T15:34:58.010913880Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Feb 13 15:34:58.011319 containerd[1505]: time="2025-02-13T15:34:58.011158320Z" level=info msg="metadata content store policy set" policy=shared Feb 13 15:34:58.018860 containerd[1505]: time="2025-02-13T15:34:58.017352440Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Feb 13 15:34:58.018860 containerd[1505]: time="2025-02-13T15:34:58.017425840Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." 
type=io.containerd.differ.v1 Feb 13 15:34:58.018860 containerd[1505]: time="2025-02-13T15:34:58.017445840Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Feb 13 15:34:58.018860 containerd[1505]: time="2025-02-13T15:34:58.017465640Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Feb 13 15:34:58.018860 containerd[1505]: time="2025-02-13T15:34:58.017482960Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Feb 13 15:34:58.018860 containerd[1505]: time="2025-02-13T15:34:58.018717640Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Feb 13 15:34:58.019331 containerd[1505]: time="2025-02-13T15:34:58.019307280Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Feb 13 15:34:58.019563 containerd[1505]: time="2025-02-13T15:34:58.019540200Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Feb 13 15:34:58.019669 containerd[1505]: time="2025-02-13T15:34:58.019653560Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Feb 13 15:34:58.024521 containerd[1505]: time="2025-02-13T15:34:58.021628960Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Feb 13 15:34:58.024521 containerd[1505]: time="2025-02-13T15:34:58.021656840Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Feb 13 15:34:58.024521 containerd[1505]: time="2025-02-13T15:34:58.021674880Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." 
type=io.containerd.service.v1 Feb 13 15:34:58.024521 containerd[1505]: time="2025-02-13T15:34:58.021689360Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Feb 13 15:34:58.024521 containerd[1505]: time="2025-02-13T15:34:58.021705800Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Feb 13 15:34:58.024521 containerd[1505]: time="2025-02-13T15:34:58.021724200Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Feb 13 15:34:58.024521 containerd[1505]: time="2025-02-13T15:34:58.021738480Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Feb 13 15:34:58.024521 containerd[1505]: time="2025-02-13T15:34:58.021752080Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Feb 13 15:34:58.024521 containerd[1505]: time="2025-02-13T15:34:58.021764760Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Feb 13 15:34:58.024521 containerd[1505]: time="2025-02-13T15:34:58.021789320Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Feb 13 15:34:58.024521 containerd[1505]: time="2025-02-13T15:34:58.021807120Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Feb 13 15:34:58.024521 containerd[1505]: time="2025-02-13T15:34:58.021833760Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Feb 13 15:34:58.024521 containerd[1505]: time="2025-02-13T15:34:58.021850160Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." 
type=io.containerd.grpc.v1 Feb 13 15:34:58.024521 containerd[1505]: time="2025-02-13T15:34:58.021863120Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Feb 13 15:34:58.024861 containerd[1505]: time="2025-02-13T15:34:58.021878800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Feb 13 15:34:58.024861 containerd[1505]: time="2025-02-13T15:34:58.021892880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Feb 13 15:34:58.024861 containerd[1505]: time="2025-02-13T15:34:58.021907160Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Feb 13 15:34:58.024861 containerd[1505]: time="2025-02-13T15:34:58.021921040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Feb 13 15:34:58.024861 containerd[1505]: time="2025-02-13T15:34:58.021935640Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Feb 13 15:34:58.024861 containerd[1505]: time="2025-02-13T15:34:58.021953160Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Feb 13 15:34:58.024861 containerd[1505]: time="2025-02-13T15:34:58.021967560Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Feb 13 15:34:58.024861 containerd[1505]: time="2025-02-13T15:34:58.021982000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Feb 13 15:34:58.024861 containerd[1505]: time="2025-02-13T15:34:58.021998320Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Feb 13 15:34:58.024861 containerd[1505]: time="2025-02-13T15:34:58.022023400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." 
type=io.containerd.grpc.v1 Feb 13 15:34:58.024861 containerd[1505]: time="2025-02-13T15:34:58.022037960Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Feb 13 15:34:58.024861 containerd[1505]: time="2025-02-13T15:34:58.022049960Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Feb 13 15:34:58.024861 containerd[1505]: time="2025-02-13T15:34:58.022278280Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Feb 13 15:34:58.024861 containerd[1505]: time="2025-02-13T15:34:58.022299200Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Feb 13 15:34:58.025103 containerd[1505]: time="2025-02-13T15:34:58.022312120Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Feb 13 15:34:58.025103 containerd[1505]: time="2025-02-13T15:34:58.022433360Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Feb 13 15:34:58.025103 containerd[1505]: time="2025-02-13T15:34:58.022450640Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Feb 13 15:34:58.025103 containerd[1505]: time="2025-02-13T15:34:58.022467200Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Feb 13 15:34:58.025103 containerd[1505]: time="2025-02-13T15:34:58.022478600Z" level=info msg="NRI interface is disabled by configuration." Feb 13 15:34:58.025103 containerd[1505]: time="2025-02-13T15:34:58.022507600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Feb 13 15:34:58.025215 containerd[1505]: time="2025-02-13T15:34:58.022919600Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false 
UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Feb 13 15:34:58.025215 containerd[1505]: time="2025-02-13T15:34:58.022977840Z" level=info msg="Connect containerd service" Feb 13 15:34:58.025215 containerd[1505]: time="2025-02-13T15:34:58.023015680Z" level=info msg="using legacy CRI server" Feb 13 15:34:58.025215 containerd[1505]: time="2025-02-13T15:34:58.023022560Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Feb 13 15:34:58.025215 containerd[1505]: time="2025-02-13T15:34:58.023294440Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Feb 13 15:34:58.025215 containerd[1505]: time="2025-02-13T15:34:58.024016560Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 13 15:34:58.025575 containerd[1505]: time="2025-02-13T15:34:58.025538800Z" level=info msg="Start subscribing containerd event" Feb 13 15:34:58.025676 containerd[1505]: time="2025-02-13T15:34:58.025663320Z" level=info msg="Start recovering state" Feb 13 15:34:58.025786 containerd[1505]: time="2025-02-13T15:34:58.025773520Z" level=info msg="Start event monitor" Feb 13 15:34:58.025839 containerd[1505]: time="2025-02-13T15:34:58.025828000Z" level=info msg="Start 
snapshots syncer" Feb 13 15:34:58.025898 containerd[1505]: time="2025-02-13T15:34:58.025885120Z" level=info msg="Start cni network conf syncer for default" Feb 13 15:34:58.025942 containerd[1505]: time="2025-02-13T15:34:58.025931800Z" level=info msg="Start streaming server" Feb 13 15:34:58.029891 containerd[1505]: time="2025-02-13T15:34:58.029864200Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Feb 13 15:34:58.030131 containerd[1505]: time="2025-02-13T15:34:58.030113960Z" level=info msg=serving... address=/run/containerd/containerd.sock Feb 13 15:34:58.030441 systemd[1]: Started containerd.service - containerd container runtime. Feb 13 15:34:58.033805 containerd[1505]: time="2025-02-13T15:34:58.033780720Z" level=info msg="containerd successfully booted in 0.085193s" Feb 13 15:34:58.305158 tar[1497]: linux-arm64/LICENSE Feb 13 15:34:58.305245 tar[1497]: linux-arm64/README.md Feb 13 15:34:58.324522 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Feb 13 15:34:58.342749 systemd-networkd[1387]: eth1: Gained IPv6LL Feb 13 15:34:58.344138 systemd-timesyncd[1408]: Network configuration changed, trying to establish connection. Feb 13 15:34:58.347215 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Feb 13 15:34:58.350861 systemd[1]: Reached target network-online.target - Network is Online. Feb 13 15:34:58.360747 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:34:58.364178 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Feb 13 15:34:58.393735 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Feb 13 15:34:58.442027 sshd_keygen[1514]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Feb 13 15:34:58.465796 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Feb 13 15:34:58.473050 systemd[1]: Starting issuegen.service - Generate /run/issue... 
Feb 13 15:34:58.483110 systemd[1]: issuegen.service: Deactivated successfully. Feb 13 15:34:58.483326 systemd[1]: Finished issuegen.service - Generate /run/issue. Feb 13 15:34:58.491426 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Feb 13 15:34:58.501309 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Feb 13 15:34:58.513226 systemd[1]: Started getty@tty1.service - Getty on tty1. Feb 13 15:34:58.518002 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Feb 13 15:34:58.520080 systemd[1]: Reached target getty.target - Login Prompts. Feb 13 15:34:58.535781 systemd-networkd[1387]: eth0: Gained IPv6LL Feb 13 15:34:58.537440 systemd-timesyncd[1408]: Network configuration changed, trying to establish connection. Feb 13 15:34:59.116721 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:34:59.118319 systemd[1]: Reached target multi-user.target - Multi-User System. Feb 13 15:34:59.123574 systemd[1]: Startup finished in 845ms (kernel) + 5.942s (initrd) + 4.521s (userspace) = 11.308s. Feb 13 15:34:59.127285 (kubelet)[1607]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 15:34:59.137800 agetty[1601]: failed to open credentials directory Feb 13 15:34:59.142046 agetty[1600]: failed to open credentials directory Feb 13 15:34:59.743869 kubelet[1607]: E0213 15:34:59.743799 1607 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 15:34:59.746865 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 15:34:59.747010 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Feb 13 15:35:09.997921 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Feb 13 15:35:10.007958 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:35:10.131690 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:35:10.137030 (kubelet)[1627]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 15:35:10.197401 kubelet[1627]: E0213 15:35:10.197294 1627 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 15:35:10.201047 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 15:35:10.201236 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 15:35:20.452085 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Feb 13 15:35:20.461002 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:35:20.580417 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Feb 13 15:35:20.595155 (kubelet)[1642]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 15:35:20.652638 kubelet[1642]: E0213 15:35:20.652581 1642 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 15:35:20.656339 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 15:35:20.656678 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 15:35:28.899242 systemd-timesyncd[1408]: Contacted time server 5.9.145.2:123 (2.flatcar.pool.ntp.org). Feb 13 15:35:28.899352 systemd-timesyncd[1408]: Initial clock synchronization to Thu 2025-02-13 15:35:28.601683 UTC. Feb 13 15:35:30.907376 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Feb 13 15:35:30.915935 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:35:31.025058 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Feb 13 15:35:31.039725 (kubelet)[1659]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 15:35:31.089708 kubelet[1659]: E0213 15:35:31.089541 1659 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 15:35:31.092920 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 15:35:31.093161 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 15:35:41.241117 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Feb 13 15:35:41.249931 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:35:41.363582 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:35:41.377335 (kubelet)[1675]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 15:35:41.436975 kubelet[1675]: E0213 15:35:41.436912 1675 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 15:35:41.440892 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 15:35:41.441259 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 15:35:42.831502 update_engine[1489]: I20250213 15:35:42.831375 1489 update_attempter.cc:509] Updating boot flags... 
Feb 13 15:35:42.881700 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 41 scanned by (udev-worker) (1692) Feb 13 15:35:42.938681 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 41 scanned by (udev-worker) (1691) Feb 13 15:35:51.491266 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Feb 13 15:35:51.506056 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:35:51.620858 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:35:51.623523 (kubelet)[1709]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 15:35:51.678040 kubelet[1709]: E0213 15:35:51.677932 1709 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 15:35:51.680749 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 15:35:51.680958 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 15:36:01.740863 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6. Feb 13 15:36:01.748030 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:36:01.869441 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Feb 13 15:36:01.884643 (kubelet)[1725]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 15:36:01.939789 kubelet[1725]: E0213 15:36:01.939667 1725 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 15:36:01.942892 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 15:36:01.943089 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 15:36:11.990994 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7. Feb 13 15:36:11.998019 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:36:12.114447 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:36:12.131129 (kubelet)[1741]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 15:36:12.180742 kubelet[1741]: E0213 15:36:12.180660 1741 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 15:36:12.184480 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 15:36:12.184651 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 15:36:22.241102 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 8. Feb 13 15:36:22.250901 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Feb 13 15:36:22.356120 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:36:22.367101 (kubelet)[1756]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 15:36:22.424752 kubelet[1756]: E0213 15:36:22.424687 1756 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 15:36:22.427906 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 15:36:22.428065 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 15:36:32.491174 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 9. Feb 13 15:36:32.503912 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:36:32.611921 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:36:32.624735 (kubelet)[1772]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 15:36:32.677306 kubelet[1772]: E0213 15:36:32.677233 1772 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 15:36:32.680274 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 15:36:32.680423 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 15:36:42.740794 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 10. 
Feb 13 15:36:42.749947 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:36:42.879096 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:36:42.885963 (kubelet)[1788]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 15:36:42.932216 kubelet[1788]: E0213 15:36:42.932169 1788 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 15:36:42.935385 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 15:36:42.935637 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 15:36:48.920010 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Feb 13 15:36:48.931081 systemd[1]: Started sshd@0-49.13.212.147:22-139.178.89.65:35556.service - OpenSSH per-connection server daemon (139.178.89.65:35556). Feb 13 15:36:49.921419 sshd[1799]: Accepted publickey for core from 139.178.89.65 port 35556 ssh2: RSA SHA256:Uozn9z6525dahd1u4B5WCCi8tKj4bLjcDsCj6OgO54I Feb 13 15:36:49.925440 sshd-session[1799]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:36:49.935568 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Feb 13 15:36:49.943914 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Feb 13 15:36:49.946070 systemd-logind[1487]: New session 1 of user core. Feb 13 15:36:49.959975 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Feb 13 15:36:49.966010 systemd[1]: Starting user@500.service - User Manager for UID 500... 
Feb 13 15:36:49.985792 (systemd)[1803]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 13 15:36:50.094837 systemd[1803]: Queued start job for default target default.target. Feb 13 15:36:50.104646 systemd[1803]: Created slice app.slice - User Application Slice. Feb 13 15:36:50.104859 systemd[1803]: Reached target paths.target - Paths. Feb 13 15:36:50.104953 systemd[1803]: Reached target timers.target - Timers. Feb 13 15:36:50.106674 systemd[1803]: Starting dbus.socket - D-Bus User Message Bus Socket... Feb 13 15:36:50.123349 systemd[1803]: Listening on dbus.socket - D-Bus User Message Bus Socket. Feb 13 15:36:50.123749 systemd[1803]: Reached target sockets.target - Sockets. Feb 13 15:36:50.123774 systemd[1803]: Reached target basic.target - Basic System. Feb 13 15:36:50.123875 systemd[1803]: Reached target default.target - Main User Target. Feb 13 15:36:50.123911 systemd[1803]: Startup finished in 129ms. Feb 13 15:36:50.124494 systemd[1]: Started user@500.service - User Manager for UID 500. Feb 13 15:36:50.135987 systemd[1]: Started session-1.scope - Session 1 of User core. Feb 13 15:36:50.830616 systemd[1]: Started sshd@1-49.13.212.147:22-139.178.89.65:35568.service - OpenSSH per-connection server daemon (139.178.89.65:35568). Feb 13 15:36:51.820291 sshd[1814]: Accepted publickey for core from 139.178.89.65 port 35568 ssh2: RSA SHA256:Uozn9z6525dahd1u4B5WCCi8tKj4bLjcDsCj6OgO54I Feb 13 15:36:51.822388 sshd-session[1814]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:36:51.828393 systemd-logind[1487]: New session 2 of user core. Feb 13 15:36:51.834864 systemd[1]: Started session-2.scope - Session 2 of User core. 
Feb 13 15:36:52.500631 sshd[1816]: Connection closed by 139.178.89.65 port 35568 Feb 13 15:36:52.501680 sshd-session[1814]: pam_unix(sshd:session): session closed for user core Feb 13 15:36:52.506535 systemd[1]: sshd@1-49.13.212.147:22-139.178.89.65:35568.service: Deactivated successfully. Feb 13 15:36:52.508340 systemd[1]: session-2.scope: Deactivated successfully. Feb 13 15:36:52.510454 systemd-logind[1487]: Session 2 logged out. Waiting for processes to exit. Feb 13 15:36:52.512080 systemd-logind[1487]: Removed session 2. Feb 13 15:36:52.678570 systemd[1]: Started sshd@2-49.13.212.147:22-139.178.89.65:35570.service - OpenSSH per-connection server daemon (139.178.89.65:35570). Feb 13 15:36:52.990584 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 11. Feb 13 15:36:52.999963 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:36:53.139803 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:36:53.153162 (kubelet)[1831]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 15:36:53.206040 kubelet[1831]: E0213 15:36:53.205903 1831 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 15:36:53.209213 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 15:36:53.209451 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Feb 13 15:36:53.675265 sshd[1821]: Accepted publickey for core from 139.178.89.65 port 35570 ssh2: RSA SHA256:Uozn9z6525dahd1u4B5WCCi8tKj4bLjcDsCj6OgO54I Feb 13 15:36:53.677720 sshd-session[1821]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:36:53.686056 systemd-logind[1487]: New session 3 of user core. Feb 13 15:36:53.697959 systemd[1]: Started session-3.scope - Session 3 of User core. Feb 13 15:36:54.359953 sshd[1839]: Connection closed by 139.178.89.65 port 35570 Feb 13 15:36:54.359839 sshd-session[1821]: pam_unix(sshd:session): session closed for user core Feb 13 15:36:54.364550 systemd-logind[1487]: Session 3 logged out. Waiting for processes to exit. Feb 13 15:36:54.364809 systemd[1]: sshd@2-49.13.212.147:22-139.178.89.65:35570.service: Deactivated successfully. Feb 13 15:36:54.366980 systemd[1]: session-3.scope: Deactivated successfully. Feb 13 15:36:54.370431 systemd-logind[1487]: Removed session 3. Feb 13 15:36:54.530507 systemd[1]: Started sshd@3-49.13.212.147:22-139.178.89.65:35572.service - OpenSSH per-connection server daemon (139.178.89.65:35572). Feb 13 15:36:55.521869 sshd[1844]: Accepted publickey for core from 139.178.89.65 port 35572 ssh2: RSA SHA256:Uozn9z6525dahd1u4B5WCCi8tKj4bLjcDsCj6OgO54I Feb 13 15:36:55.523797 sshd-session[1844]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:36:55.529329 systemd-logind[1487]: New session 4 of user core. Feb 13 15:36:55.537860 systemd[1]: Started session-4.scope - Session 4 of User core. Feb 13 15:36:56.206720 sshd[1846]: Connection closed by 139.178.89.65 port 35572 Feb 13 15:36:56.207759 sshd-session[1844]: pam_unix(sshd:session): session closed for user core Feb 13 15:36:56.212801 systemd[1]: sshd@3-49.13.212.147:22-139.178.89.65:35572.service: Deactivated successfully. Feb 13 15:36:56.215805 systemd[1]: session-4.scope: Deactivated successfully. Feb 13 15:36:56.216881 systemd-logind[1487]: Session 4 logged out. 
Waiting for processes to exit. Feb 13 15:36:56.219033 systemd-logind[1487]: Removed session 4. Feb 13 15:36:56.384114 systemd[1]: Started sshd@4-49.13.212.147:22-139.178.89.65:57014.service - OpenSSH per-connection server daemon (139.178.89.65:57014). Feb 13 15:36:57.380081 sshd[1851]: Accepted publickey for core from 139.178.89.65 port 57014 ssh2: RSA SHA256:Uozn9z6525dahd1u4B5WCCi8tKj4bLjcDsCj6OgO54I Feb 13 15:36:57.382345 sshd-session[1851]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:36:57.390725 systemd-logind[1487]: New session 5 of user core. Feb 13 15:36:57.403924 systemd[1]: Started session-5.scope - Session 5 of User core. Feb 13 15:36:57.915966 sudo[1854]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Feb 13 15:36:57.916313 sudo[1854]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 15:36:57.930633 sudo[1854]: pam_unix(sudo:session): session closed for user root Feb 13 15:36:58.093632 sshd[1853]: Connection closed by 139.178.89.65 port 57014 Feb 13 15:36:58.092437 sshd-session[1851]: pam_unix(sshd:session): session closed for user core Feb 13 15:36:58.096838 systemd-logind[1487]: Session 5 logged out. Waiting for processes to exit. Feb 13 15:36:58.097425 systemd[1]: sshd@4-49.13.212.147:22-139.178.89.65:57014.service: Deactivated successfully. Feb 13 15:36:58.100018 systemd[1]: session-5.scope: Deactivated successfully. Feb 13 15:36:58.102855 systemd-logind[1487]: Removed session 5. Feb 13 15:36:58.271436 systemd[1]: Started sshd@5-49.13.212.147:22-139.178.89.65:57026.service - OpenSSH per-connection server daemon (139.178.89.65:57026). 
Feb 13 15:36:59.259937 sshd[1859]: Accepted publickey for core from 139.178.89.65 port 57026 ssh2: RSA SHA256:Uozn9z6525dahd1u4B5WCCi8tKj4bLjcDsCj6OgO54I Feb 13 15:36:59.262072 sshd-session[1859]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:36:59.267819 systemd-logind[1487]: New session 6 of user core. Feb 13 15:36:59.278922 systemd[1]: Started session-6.scope - Session 6 of User core. Feb 13 15:36:59.785218 sudo[1863]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Feb 13 15:36:59.785715 sudo[1863]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 15:36:59.789845 sudo[1863]: pam_unix(sudo:session): session closed for user root Feb 13 15:36:59.797095 sudo[1862]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Feb 13 15:36:59.797528 sudo[1862]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 15:36:59.813062 systemd[1]: Starting audit-rules.service - Load Audit Rules... Feb 13 15:36:59.848169 augenrules[1885]: No rules Feb 13 15:36:59.849695 systemd[1]: audit-rules.service: Deactivated successfully. Feb 13 15:36:59.849944 systemd[1]: Finished audit-rules.service - Load Audit Rules. Feb 13 15:36:59.851811 sudo[1862]: pam_unix(sudo:session): session closed for user root Feb 13 15:37:00.012506 sshd[1861]: Connection closed by 139.178.89.65 port 57026 Feb 13 15:37:00.013174 sshd-session[1859]: pam_unix(sshd:session): session closed for user core Feb 13 15:37:00.018317 systemd-logind[1487]: Session 6 logged out. Waiting for processes to exit. Feb 13 15:37:00.018526 systemd[1]: sshd@5-49.13.212.147:22-139.178.89.65:57026.service: Deactivated successfully. Feb 13 15:37:00.020269 systemd[1]: session-6.scope: Deactivated successfully. Feb 13 15:37:00.021735 systemd-logind[1487]: Removed session 6. 
Feb 13 15:37:00.194290 systemd[1]: Started sshd@6-49.13.212.147:22-139.178.89.65:57030.service - OpenSSH per-connection server daemon (139.178.89.65:57030). Feb 13 15:37:01.180917 sshd[1893]: Accepted publickey for core from 139.178.89.65 port 57030 ssh2: RSA SHA256:Uozn9z6525dahd1u4B5WCCi8tKj4bLjcDsCj6OgO54I Feb 13 15:37:01.183095 sshd-session[1893]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:37:01.191099 systemd-logind[1487]: New session 7 of user core. Feb 13 15:37:01.198231 systemd[1]: Started session-7.scope - Session 7 of User core. Feb 13 15:37:01.705468 sudo[1896]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Feb 13 15:37:01.705868 sudo[1896]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 15:37:02.032179 systemd[1]: Starting docker.service - Docker Application Container Engine... Feb 13 15:37:02.034962 (dockerd)[1914]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Feb 13 15:37:02.291390 dockerd[1914]: time="2025-02-13T15:37:02.291213783Z" level=info msg="Starting up" Feb 13 15:37:02.374442 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport867926903-merged.mount: Deactivated successfully. Feb 13 15:37:02.396008 dockerd[1914]: time="2025-02-13T15:37:02.395951570Z" level=info msg="Loading containers: start." Feb 13 15:37:02.578678 kernel: Initializing XFRM netlink socket Feb 13 15:37:02.674324 systemd-networkd[1387]: docker0: Link UP Feb 13 15:37:02.717753 dockerd[1914]: time="2025-02-13T15:37:02.717333626Z" level=info msg="Loading containers: done." Feb 13 15:37:02.733803 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2237946873-merged.mount: Deactivated successfully. 
Feb 13 15:37:02.737548 dockerd[1914]: time="2025-02-13T15:37:02.737455448Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Feb 13 15:37:02.737717 dockerd[1914]: time="2025-02-13T15:37:02.737581290Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1 Feb 13 15:37:02.737855 dockerd[1914]: time="2025-02-13T15:37:02.737810415Z" level=info msg="Daemon has completed initialization" Feb 13 15:37:02.781034 dockerd[1914]: time="2025-02-13T15:37:02.780556825Z" level=info msg="API listen on /run/docker.sock" Feb 13 15:37:02.781672 systemd[1]: Started docker.service - Docker Application Container Engine. Feb 13 15:37:03.241129 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 12. Feb 13 15:37:03.251048 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:37:03.360584 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:37:03.366855 (kubelet)[2110]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 15:37:03.420731 kubelet[2110]: E0213 15:37:03.420621 2110 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 15:37:03.424752 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 15:37:03.424957 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Feb 13 15:37:03.896383 containerd[1505]: time="2025-02-13T15:37:03.896333447Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.10\"" Feb 13 15:37:04.524830 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3771265779.mount: Deactivated successfully. Feb 13 15:37:06.245948 containerd[1505]: time="2025-02-13T15:37:06.245863013Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:37:06.248428 containerd[1505]: time="2025-02-13T15:37:06.247747561Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.10: active requests=0, bytes read=29865299" Feb 13 15:37:06.249788 containerd[1505]: time="2025-02-13T15:37:06.249744669Z" level=info msg="ImageCreate event name:\"sha256:deaeae5e8513d8c5921aee5b515f0fc2ac63b71dfe965318f71eb49468e74a4f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:37:06.255566 containerd[1505]: time="2025-02-13T15:37:06.255503392Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:63b2b4b4e9b5dcb5b1b6cec9d5f5f538791a40cd8cb273ef530e6d6535aa0b43\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:37:06.257803 containerd[1505]: time="2025-02-13T15:37:06.257570059Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.10\" with image id \"sha256:deaeae5e8513d8c5921aee5b515f0fc2ac63b71dfe965318f71eb49468e74a4f\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.10\", repo digest \"registry.k8s.io/kube-apiserver@sha256:63b2b4b4e9b5dcb5b1b6cec9d5f5f538791a40cd8cb273ef530e6d6535aa0b43\", size \"29862007\" in 2.361180291s" Feb 13 15:37:06.257803 containerd[1505]: time="2025-02-13T15:37:06.257644578Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.10\" returns image reference \"sha256:deaeae5e8513d8c5921aee5b515f0fc2ac63b71dfe965318f71eb49468e74a4f\"" Feb 13 15:37:06.284499 containerd[1505]: 
time="2025-02-13T15:37:06.284454488Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.10\"" Feb 13 15:37:08.468756 containerd[1505]: time="2025-02-13T15:37:08.468691088Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.10: active requests=0, bytes read=26898614" Feb 13 15:37:08.470815 containerd[1505]: time="2025-02-13T15:37:08.469553563Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:37:08.473228 containerd[1505]: time="2025-02-13T15:37:08.473158585Z" level=info msg="ImageCreate event name:\"sha256:e31753dd49b05da8fcb7deb26f2a5942a6747a0e6d4492f3dc8544123b97a3a2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:37:08.474484 containerd[1505]: time="2025-02-13T15:37:08.474291699Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.10\" with image id \"sha256:e31753dd49b05da8fcb7deb26f2a5942a6747a0e6d4492f3dc8544123b97a3a2\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.10\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:99b3336343ea48be24f1e64774825e9f8d5170bd2ed482ff336548eb824f5f58\", size \"28302323\" in 2.189792051s" Feb 13 15:37:08.474484 containerd[1505]: time="2025-02-13T15:37:08.474373259Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.10\" returns image reference \"sha256:e31753dd49b05da8fcb7deb26f2a5942a6747a0e6d4492f3dc8544123b97a3a2\"" Feb 13 15:37:08.475506 containerd[1505]: time="2025-02-13T15:37:08.475263734Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:99b3336343ea48be24f1e64774825e9f8d5170bd2ed482ff336548eb824f5f58\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:37:08.500929 containerd[1505]: time="2025-02-13T15:37:08.500896603Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.10\"" Feb 13 
15:37:09.759609 containerd[1505]: time="2025-02-13T15:37:09.759504333Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:37:09.761633 containerd[1505]: time="2025-02-13T15:37:09.761003647Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.10: active requests=0, bytes read=16164954" Feb 13 15:37:09.762846 containerd[1505]: time="2025-02-13T15:37:09.762784039Z" level=info msg="ImageCreate event name:\"sha256:ea60c047fad7c01bf50f1f0259a4aeea2cc4401850d5a95802cc1d07d9021eb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:37:09.766033 containerd[1505]: time="2025-02-13T15:37:09.765976464Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:cf7eb256192f1f51093fe278c209a9368f0675eb61ed01b148af47d2f21c002d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:37:09.767117 containerd[1505]: time="2025-02-13T15:37:09.767069739Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.10\" with image id \"sha256:ea60c047fad7c01bf50f1f0259a4aeea2cc4401850d5a95802cc1d07d9021eb4\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.10\", repo digest \"registry.k8s.io/kube-scheduler@sha256:cf7eb256192f1f51093fe278c209a9368f0675eb61ed01b148af47d2f21c002d\", size \"17568681\" in 1.265963337s" Feb 13 15:37:09.767117 containerd[1505]: time="2025-02-13T15:37:09.767112219Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.10\" returns image reference \"sha256:ea60c047fad7c01bf50f1f0259a4aeea2cc4401850d5a95802cc1d07d9021eb4\"" Feb 13 15:37:09.790979 containerd[1505]: time="2025-02-13T15:37:09.790913671Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.10\"" Feb 13 15:37:10.803860 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2058672586.mount: Deactivated successfully. 
Feb 13 15:37:11.170729 containerd[1505]: time="2025-02-13T15:37:11.169737965Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:37:11.173656 containerd[1505]: time="2025-02-13T15:37:11.173438192Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.10: active requests=0, bytes read=25663396" Feb 13 15:37:11.185627 containerd[1505]: time="2025-02-13T15:37:11.184384155Z" level=info msg="ImageCreate event name:\"sha256:fa8af75a6512774cc93242474a9841ace82a7d0646001149fc65d92a8bb0c00a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:37:11.194013 containerd[1505]: time="2025-02-13T15:37:11.193960763Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:d112e804e548fce28d9f1e3282c9ce54e374451e6a2c41b1ca9d7fca5d1fcc48\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:37:11.194822 containerd[1505]: time="2025-02-13T15:37:11.194780760Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.10\" with image id \"sha256:fa8af75a6512774cc93242474a9841ace82a7d0646001149fc65d92a8bb0c00a\", repo tag \"registry.k8s.io/kube-proxy:v1.30.10\", repo digest \"registry.k8s.io/kube-proxy@sha256:d112e804e548fce28d9f1e3282c9ce54e374451e6a2c41b1ca9d7fca5d1fcc48\", size \"25662389\" in 1.403814009s" Feb 13 15:37:11.194973 containerd[1505]: time="2025-02-13T15:37:11.194953879Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.10\" returns image reference \"sha256:fa8af75a6512774cc93242474a9841ace82a7d0646001149fc65d92a8bb0c00a\"" Feb 13 15:37:11.218465 containerd[1505]: time="2025-02-13T15:37:11.218423440Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Feb 13 15:37:11.799783 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1243079420.mount: Deactivated successfully. 
Feb 13 15:37:12.404247 containerd[1505]: time="2025-02-13T15:37:12.404169801Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:37:12.405949 containerd[1505]: time="2025-02-13T15:37:12.405873596Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485461" Feb 13 15:37:12.407750 containerd[1505]: time="2025-02-13T15:37:12.407677391Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:37:12.411617 containerd[1505]: time="2025-02-13T15:37:12.411415181Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:37:12.413132 containerd[1505]: time="2025-02-13T15:37:12.412534177Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.194062377s" Feb 13 15:37:12.413132 containerd[1505]: time="2025-02-13T15:37:12.412578857Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\"" Feb 13 15:37:12.436646 containerd[1505]: time="2025-02-13T15:37:12.436409109Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Feb 13 15:37:12.969512 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount870494474.mount: Deactivated successfully. 
Feb 13 15:37:12.976583 containerd[1505]: time="2025-02-13T15:37:12.976462892Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:37:12.977419 containerd[1505]: time="2025-02-13T15:37:12.977301169Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=268841" Feb 13 15:37:12.978751 containerd[1505]: time="2025-02-13T15:37:12.978678325Z" level=info msg="ImageCreate event name:\"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:37:12.982249 containerd[1505]: time="2025-02-13T15:37:12.982180315Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:37:12.983755 containerd[1505]: time="2025-02-13T15:37:12.983199153Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"268051\" in 546.746924ms" Feb 13 15:37:12.983755 containerd[1505]: time="2025-02-13T15:37:12.983239272Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\"" Feb 13 15:37:13.005846 containerd[1505]: time="2025-02-13T15:37:13.005804410Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" Feb 13 15:37:13.491154 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 13. Feb 13 15:37:13.501865 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Feb 13 15:37:13.616999 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4021338749.mount: Deactivated successfully. Feb 13 15:37:13.669839 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:37:13.680050 (kubelet)[2282]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 15:37:13.762948 kubelet[2282]: E0213 15:37:13.762672 2282 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 15:37:13.767070 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 15:37:13.767767 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 15:37:16.175901 containerd[1505]: time="2025-02-13T15:37:16.175830989Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:37:16.177806 containerd[1505]: time="2025-02-13T15:37:16.177300788Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=66191552" Feb 13 15:37:16.180462 containerd[1505]: time="2025-02-13T15:37:16.179031146Z" level=info msg="ImageCreate event name:\"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:37:16.183911 containerd[1505]: time="2025-02-13T15:37:16.183835622Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:37:16.190541 containerd[1505]: time="2025-02-13T15:37:16.190456056Z" level=info msg="Pulled 
image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"66189079\" in 3.184600686s" Feb 13 15:37:16.190541 containerd[1505]: time="2025-02-13T15:37:16.190532816Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\"" Feb 13 15:37:21.115993 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:37:21.127145 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:37:21.154818 systemd[1]: Reloading requested from client PID 2395 ('systemctl') (unit session-7.scope)... Feb 13 15:37:21.154835 systemd[1]: Reloading... Feb 13 15:37:21.269833 zram_generator::config[2435]: No configuration found. Feb 13 15:37:21.373847 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 15:37:21.443345 systemd[1]: Reloading finished in 288 ms. Feb 13 15:37:21.499811 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Feb 13 15:37:21.500230 systemd[1]: kubelet.service: Failed with result 'signal'. Feb 13 15:37:21.500634 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:37:21.507981 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:37:21.620904 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Feb 13 15:37:21.626532 (kubelet)[2483]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 13 15:37:21.679634 kubelet[2483]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 15:37:21.679634 kubelet[2483]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 13 15:37:21.679634 kubelet[2483]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 15:37:21.679634 kubelet[2483]: I0213 15:37:21.679394 2483 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 13 15:37:22.482748 kubelet[2483]: I0213 15:37:22.482678 2483 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Feb 13 15:37:22.482748 kubelet[2483]: I0213 15:37:22.482730 2483 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 15:37:22.483339 kubelet[2483]: I0213 15:37:22.483285 2483 server.go:927] "Client rotation is on, will bootstrap in background" Feb 13 15:37:22.503257 kubelet[2483]: I0213 15:37:22.502738 2483 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 15:37:22.503257 kubelet[2483]: E0213 15:37:22.503146 2483 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post 
"https://49.13.212.147:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 49.13.212.147:6443: connect: connection refused Feb 13 15:37:22.515054 kubelet[2483]: I0213 15:37:22.515022 2483 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Feb 13 15:37:22.515647 kubelet[2483]: I0213 15:37:22.515610 2483 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 15:37:22.515927 kubelet[2483]: I0213 15:37:22.515737 2483 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4186-1-1-3-ffab21d6e1","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemor
y":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Feb 13 15:37:22.516114 kubelet[2483]: I0213 15:37:22.516102 2483 topology_manager.go:138] "Creating topology manager with none policy" Feb 13 15:37:22.516168 kubelet[2483]: I0213 15:37:22.516161 2483 container_manager_linux.go:301] "Creating device plugin manager" Feb 13 15:37:22.516777 kubelet[2483]: I0213 15:37:22.516483 2483 state_mem.go:36] "Initialized new in-memory state store" Feb 13 15:37:22.518904 kubelet[2483]: I0213 15:37:22.517928 2483 kubelet.go:400] "Attempting to sync node with API server" Feb 13 15:37:22.518904 kubelet[2483]: I0213 15:37:22.517954 2483 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 13 15:37:22.518904 kubelet[2483]: I0213 15:37:22.518045 2483 kubelet.go:312] "Adding apiserver pod source" Feb 13 15:37:22.518904 kubelet[2483]: I0213 15:37:22.518152 2483 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 13 15:37:22.521403 kubelet[2483]: I0213 15:37:22.521375 2483 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Feb 13 15:37:22.521860 kubelet[2483]: I0213 15:37:22.521791 2483 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 13 15:37:22.521926 kubelet[2483]: W0213 15:37:22.521912 2483 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Feb 13 15:37:22.522810 kubelet[2483]: I0213 15:37:22.522781 2483 server.go:1264] "Started kubelet" Feb 13 15:37:22.522985 kubelet[2483]: W0213 15:37:22.522939 2483 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://49.13.212.147:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4186-1-1-3-ffab21d6e1&limit=500&resourceVersion=0": dial tcp 49.13.212.147:6443: connect: connection refused Feb 13 15:37:22.523020 kubelet[2483]: E0213 15:37:22.522998 2483 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://49.13.212.147:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4186-1-1-3-ffab21d6e1&limit=500&resourceVersion=0": dial tcp 49.13.212.147:6443: connect: connection refused Feb 13 15:37:22.526092 kubelet[2483]: I0213 15:37:22.526053 2483 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 15:37:22.527044 kubelet[2483]: W0213 15:37:22.527000 2483 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://49.13.212.147:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 49.13.212.147:6443: connect: connection refused Feb 13 15:37:22.527905 kubelet[2483]: E0213 15:37:22.527876 2483 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://49.13.212.147:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 49.13.212.147:6443: connect: connection refused Feb 13 15:37:22.531870 kubelet[2483]: I0213 15:37:22.531810 2483 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Feb 13 15:37:22.532986 kubelet[2483]: I0213 15:37:22.532944 2483 server.go:455] "Adding debug handlers to kubelet server" Feb 13 15:37:22.533951 kubelet[2483]: I0213 15:37:22.533889 2483 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 13 15:37:22.534070 kubelet[2483]: I0213 
15:37:22.534056 2483 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Feb 13 15:37:22.534155 kubelet[2483]: I0213 15:37:22.534133 2483 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 13 15:37:22.534215 kubelet[2483]: I0213 15:37:22.533922 2483 volume_manager.go:291] "Starting Kubelet Volume Manager" Feb 13 15:37:22.536706 kubelet[2483]: I0213 15:37:22.536643 2483 reconciler.go:26] "Reconciler: start to sync state" Feb 13 15:37:22.537452 kubelet[2483]: E0213 15:37:22.536749 2483 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://49.13.212.147:6443/api/v1/namespaces/default/events\": dial tcp 49.13.212.147:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4186-1-1-3-ffab21d6e1.1823ce9a243312bd default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4186-1-1-3-ffab21d6e1,UID:ci-4186-1-1-3-ffab21d6e1,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4186-1-1-3-ffab21d6e1,},FirstTimestamp:2025-02-13 15:37:22.522755773 +0000 UTC m=+0.891712413,LastTimestamp:2025-02-13 15:37:22.522755773 +0000 UTC m=+0.891712413,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4186-1-1-3-ffab21d6e1,}" Feb 13 15:37:22.537563 kubelet[2483]: E0213 15:37:22.537545 2483 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://49.13.212.147:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4186-1-1-3-ffab21d6e1?timeout=10s\": dial tcp 49.13.212.147:6443: connect: connection refused" interval="200ms" Feb 13 15:37:22.540657 kubelet[2483]: W0213 15:37:22.540548 2483 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get 
"https://49.13.212.147:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 49.13.212.147:6443: connect: connection refused Feb 13 15:37:22.540657 kubelet[2483]: E0213 15:37:22.540629 2483 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://49.13.212.147:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 49.13.212.147:6443: connect: connection refused Feb 13 15:37:22.540972 kubelet[2483]: I0213 15:37:22.540921 2483 factory.go:221] Registration of the containerd container factory successfully Feb 13 15:37:22.540972 kubelet[2483]: I0213 15:37:22.540947 2483 factory.go:221] Registration of the systemd container factory successfully Feb 13 15:37:22.541045 kubelet[2483]: I0213 15:37:22.541018 2483 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Feb 13 15:37:22.549371 kubelet[2483]: I0213 15:37:22.549228 2483 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 13 15:37:22.550317 kubelet[2483]: I0213 15:37:22.550293 2483 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Feb 13 15:37:22.550548 kubelet[2483]: I0213 15:37:22.550539 2483 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 13 15:37:22.550696 kubelet[2483]: I0213 15:37:22.550685 2483 kubelet.go:2337] "Starting kubelet main sync loop" Feb 13 15:37:22.550797 kubelet[2483]: E0213 15:37:22.550780 2483 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 13 15:37:22.557880 kubelet[2483]: W0213 15:37:22.557829 2483 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://49.13.212.147:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 49.13.212.147:6443: connect: connection refused Feb 13 15:37:22.558546 kubelet[2483]: E0213 15:37:22.558046 2483 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://49.13.212.147:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 49.13.212.147:6443: connect: connection refused Feb 13 15:37:22.558546 kubelet[2483]: E0213 15:37:22.558322 2483 kubelet.go:1467] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 13 15:37:22.569487 kubelet[2483]: I0213 15:37:22.569463 2483 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 13 15:37:22.569948 kubelet[2483]: I0213 15:37:22.569684 2483 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 13 15:37:22.569948 kubelet[2483]: I0213 15:37:22.569708 2483 state_mem.go:36] "Initialized new in-memory state store" Feb 13 15:37:22.572732 kubelet[2483]: I0213 15:37:22.572658 2483 policy_none.go:49] "None policy: Start" Feb 13 15:37:22.573470 kubelet[2483]: I0213 15:37:22.573439 2483 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 13 15:37:22.574005 kubelet[2483]: I0213 15:37:22.573672 2483 state_mem.go:35] "Initializing new in-memory state store" Feb 13 15:37:22.580061 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Feb 13 15:37:22.592976 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Feb 13 15:37:22.597342 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Feb 13 15:37:22.610720 kubelet[2483]: I0213 15:37:22.610401 2483 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 15:37:22.611099 kubelet[2483]: I0213 15:37:22.610741 2483 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 13 15:37:22.611099 kubelet[2483]: I0213 15:37:22.610912 2483 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 13 15:37:22.614141 kubelet[2483]: E0213 15:37:22.614056 2483 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4186-1-1-3-ffab21d6e1\" not found" Feb 13 15:37:22.638000 kubelet[2483]: I0213 15:37:22.637960 2483 kubelet_node_status.go:73] "Attempting to register node" node="ci-4186-1-1-3-ffab21d6e1" Feb 13 15:37:22.638527 kubelet[2483]: E0213 15:37:22.638482 2483 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://49.13.212.147:6443/api/v1/nodes\": dial tcp 49.13.212.147:6443: connect: connection refused" node="ci-4186-1-1-3-ffab21d6e1" Feb 13 15:37:22.651397 kubelet[2483]: I0213 15:37:22.651308 2483 topology_manager.go:215] "Topology Admit Handler" podUID="7c0558940e2b2733902601d47dc5a5dd" podNamespace="kube-system" podName="kube-scheduler-ci-4186-1-1-3-ffab21d6e1" Feb 13 15:37:22.654102 kubelet[2483]: I0213 15:37:22.654050 2483 topology_manager.go:215] "Topology Admit Handler" podUID="f91082d6a0cdefb7bf1bb55bee5c4813" podNamespace="kube-system" podName="kube-apiserver-ci-4186-1-1-3-ffab21d6e1" Feb 13 15:37:22.656618 kubelet[2483]: I0213 15:37:22.656453 2483 topology_manager.go:215] "Topology Admit Handler" podUID="7de18ec3dc3a70d7e199d216021c25c5" podNamespace="kube-system" podName="kube-controller-manager-ci-4186-1-1-3-ffab21d6e1" Feb 13 15:37:22.665076 systemd[1]: Created slice kubepods-burstable-podf91082d6a0cdefb7bf1bb55bee5c4813.slice - libcontainer container 
kubepods-burstable-podf91082d6a0cdefb7bf1bb55bee5c4813.slice. Feb 13 15:37:22.692947 systemd[1]: Created slice kubepods-burstable-pod7c0558940e2b2733902601d47dc5a5dd.slice - libcontainer container kubepods-burstable-pod7c0558940e2b2733902601d47dc5a5dd.slice. Feb 13 15:37:22.715558 systemd[1]: Created slice kubepods-burstable-pod7de18ec3dc3a70d7e199d216021c25c5.slice - libcontainer container kubepods-burstable-pod7de18ec3dc3a70d7e199d216021c25c5.slice. Feb 13 15:37:22.738304 kubelet[2483]: I0213 15:37:22.738157 2483 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7de18ec3dc3a70d7e199d216021c25c5-k8s-certs\") pod \"kube-controller-manager-ci-4186-1-1-3-ffab21d6e1\" (UID: \"7de18ec3dc3a70d7e199d216021c25c5\") " pod="kube-system/kube-controller-manager-ci-4186-1-1-3-ffab21d6e1" Feb 13 15:37:22.739726 kubelet[2483]: E0213 15:37:22.738743 2483 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://49.13.212.147:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4186-1-1-3-ffab21d6e1?timeout=10s\": dial tcp 49.13.212.147:6443: connect: connection refused" interval="400ms" Feb 13 15:37:22.739726 kubelet[2483]: I0213 15:37:22.739270 2483 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7c0558940e2b2733902601d47dc5a5dd-kubeconfig\") pod \"kube-scheduler-ci-4186-1-1-3-ffab21d6e1\" (UID: \"7c0558940e2b2733902601d47dc5a5dd\") " pod="kube-system/kube-scheduler-ci-4186-1-1-3-ffab21d6e1" Feb 13 15:37:22.739726 kubelet[2483]: I0213 15:37:22.739381 2483 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f91082d6a0cdefb7bf1bb55bee5c4813-k8s-certs\") pod \"kube-apiserver-ci-4186-1-1-3-ffab21d6e1\" (UID: \"f91082d6a0cdefb7bf1bb55bee5c4813\") " 
pod="kube-system/kube-apiserver-ci-4186-1-1-3-ffab21d6e1" Feb 13 15:37:22.739726 kubelet[2483]: I0213 15:37:22.739455 2483 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f91082d6a0cdefb7bf1bb55bee5c4813-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4186-1-1-3-ffab21d6e1\" (UID: \"f91082d6a0cdefb7bf1bb55bee5c4813\") " pod="kube-system/kube-apiserver-ci-4186-1-1-3-ffab21d6e1" Feb 13 15:37:22.739726 kubelet[2483]: I0213 15:37:22.739513 2483 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/7de18ec3dc3a70d7e199d216021c25c5-flexvolume-dir\") pod \"kube-controller-manager-ci-4186-1-1-3-ffab21d6e1\" (UID: \"7de18ec3dc3a70d7e199d216021c25c5\") " pod="kube-system/kube-controller-manager-ci-4186-1-1-3-ffab21d6e1" Feb 13 15:37:22.740040 kubelet[2483]: I0213 15:37:22.739546 2483 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7de18ec3dc3a70d7e199d216021c25c5-kubeconfig\") pod \"kube-controller-manager-ci-4186-1-1-3-ffab21d6e1\" (UID: \"7de18ec3dc3a70d7e199d216021c25c5\") " pod="kube-system/kube-controller-manager-ci-4186-1-1-3-ffab21d6e1" Feb 13 15:37:22.740040 kubelet[2483]: I0213 15:37:22.739574 2483 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7de18ec3dc3a70d7e199d216021c25c5-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4186-1-1-3-ffab21d6e1\" (UID: \"7de18ec3dc3a70d7e199d216021c25c5\") " pod="kube-system/kube-controller-manager-ci-4186-1-1-3-ffab21d6e1" Feb 13 15:37:22.740040 kubelet[2483]: I0213 15:37:22.739615 2483 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f91082d6a0cdefb7bf1bb55bee5c4813-ca-certs\") pod \"kube-apiserver-ci-4186-1-1-3-ffab21d6e1\" (UID: \"f91082d6a0cdefb7bf1bb55bee5c4813\") " pod="kube-system/kube-apiserver-ci-4186-1-1-3-ffab21d6e1" Feb 13 15:37:22.740040 kubelet[2483]: I0213 15:37:22.739641 2483 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7de18ec3dc3a70d7e199d216021c25c5-ca-certs\") pod \"kube-controller-manager-ci-4186-1-1-3-ffab21d6e1\" (UID: \"7de18ec3dc3a70d7e199d216021c25c5\") " pod="kube-system/kube-controller-manager-ci-4186-1-1-3-ffab21d6e1" Feb 13 15:37:22.841868 kubelet[2483]: I0213 15:37:22.841808 2483 kubelet_node_status.go:73] "Attempting to register node" node="ci-4186-1-1-3-ffab21d6e1" Feb 13 15:37:22.842307 kubelet[2483]: E0213 15:37:22.842265 2483 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://49.13.212.147:6443/api/v1/nodes\": dial tcp 49.13.212.147:6443: connect: connection refused" node="ci-4186-1-1-3-ffab21d6e1" Feb 13 15:37:22.990116 containerd[1505]: time="2025-02-13T15:37:22.989529608Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4186-1-1-3-ffab21d6e1,Uid:f91082d6a0cdefb7bf1bb55bee5c4813,Namespace:kube-system,Attempt:0,}" Feb 13 15:37:22.997409 containerd[1505]: time="2025-02-13T15:37:22.997329501Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4186-1-1-3-ffab21d6e1,Uid:7c0558940e2b2733902601d47dc5a5dd,Namespace:kube-system,Attempt:0,}" Feb 13 15:37:23.021221 containerd[1505]: time="2025-02-13T15:37:23.021083589Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4186-1-1-3-ffab21d6e1,Uid:7de18ec3dc3a70d7e199d216021c25c5,Namespace:kube-system,Attempt:0,}" Feb 13 15:37:23.140288 kubelet[2483]: E0213 15:37:23.140218 2483 controller.go:145] "Failed to ensure lease exists, will retry" 
err="Get \"https://49.13.212.147:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4186-1-1-3-ffab21d6e1?timeout=10s\": dial tcp 49.13.212.147:6443: connect: connection refused" interval="800ms" Feb 13 15:37:23.246536 kubelet[2483]: I0213 15:37:23.245812 2483 kubelet_node_status.go:73] "Attempting to register node" node="ci-4186-1-1-3-ffab21d6e1" Feb 13 15:37:23.246536 kubelet[2483]: E0213 15:37:23.246346 2483 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://49.13.212.147:6443/api/v1/nodes\": dial tcp 49.13.212.147:6443: connect: connection refused" node="ci-4186-1-1-3-ffab21d6e1" Feb 13 15:37:23.522067 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount531757024.mount: Deactivated successfully. Feb 13 15:37:23.529639 containerd[1505]: time="2025-02-13T15:37:23.529513448Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 15:37:23.532256 containerd[1505]: time="2025-02-13T15:37:23.532173294Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269193" Feb 13 15:37:23.535902 containerd[1505]: time="2025-02-13T15:37:23.534811619Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 15:37:23.542087 containerd[1505]: time="2025-02-13T15:37:23.542009474Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 15:37:23.542538 containerd[1505]: time="2025-02-13T15:37:23.542481195Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 13 15:37:23.544513 
containerd[1505]: time="2025-02-13T15:37:23.544223719Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 15:37:23.544980 containerd[1505]: time="2025-02-13T15:37:23.544915400Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 13 15:37:23.546326 containerd[1505]: time="2025-02-13T15:37:23.546229443Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 15:37:23.553133 containerd[1505]: time="2025-02-13T15:37:23.552163335Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 554.684993ms" Feb 13 15:37:23.555314 containerd[1505]: time="2025-02-13T15:37:23.555098702Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 565.413094ms" Feb 13 15:37:23.556619 containerd[1505]: time="2025-02-13T15:37:23.556507905Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 535.331235ms" Feb 13 
15:37:23.709034 containerd[1505]: time="2025-02-13T15:37:23.708135100Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:37:23.709034 containerd[1505]: time="2025-02-13T15:37:23.708235341Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:37:23.709034 containerd[1505]: time="2025-02-13T15:37:23.708252021Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:37:23.709034 containerd[1505]: time="2025-02-13T15:37:23.708332901Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:37:23.709034 containerd[1505]: time="2025-02-13T15:37:23.708411501Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:37:23.709034 containerd[1505]: time="2025-02-13T15:37:23.708457501Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:37:23.709034 containerd[1505]: time="2025-02-13T15:37:23.708473381Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:37:23.709034 containerd[1505]: time="2025-02-13T15:37:23.708938262Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:37:23.713401 containerd[1505]: time="2025-02-13T15:37:23.712945390Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:37:23.713589 containerd[1505]: time="2025-02-13T15:37:23.713429111Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:37:23.713589 containerd[1505]: time="2025-02-13T15:37:23.713506032Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:37:23.716371 containerd[1505]: time="2025-02-13T15:37:23.716299677Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:37:23.740893 systemd[1]: Started cri-containerd-bc48c3deae1daf8903d8abf4c8a766ec269b96144a599c6bd74566280bfddf85.scope - libcontainer container bc48c3deae1daf8903d8abf4c8a766ec269b96144a599c6bd74566280bfddf85. Feb 13 15:37:23.744069 systemd[1]: Started cri-containerd-e8a5bfab6f2dd031865b531922fffd98221b1461ad79b802de706975dde18d52.scope - libcontainer container e8a5bfab6f2dd031865b531922fffd98221b1461ad79b802de706975dde18d52. Feb 13 15:37:23.749774 systemd[1]: Started cri-containerd-deca2bfa31389047a461af2b99f89a1ce616bc7d246c05bf144ccb50fdde166e.scope - libcontainer container deca2bfa31389047a461af2b99f89a1ce616bc7d246c05bf144ccb50fdde166e. 
Feb 13 15:37:23.757584 kubelet[2483]: W0213 15:37:23.757527 2483 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://49.13.212.147:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4186-1-1-3-ffab21d6e1&limit=500&resourceVersion=0": dial tcp 49.13.212.147:6443: connect: connection refused Feb 13 15:37:23.758033 kubelet[2483]: E0213 15:37:23.758001 2483 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://49.13.212.147:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4186-1-1-3-ffab21d6e1&limit=500&resourceVersion=0": dial tcp 49.13.212.147:6443: connect: connection refused Feb 13 15:37:23.812487 containerd[1505]: time="2025-02-13T15:37:23.812373197Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4186-1-1-3-ffab21d6e1,Uid:f91082d6a0cdefb7bf1bb55bee5c4813,Namespace:kube-system,Attempt:0,} returns sandbox id \"deca2bfa31389047a461af2b99f89a1ce616bc7d246c05bf144ccb50fdde166e\"" Feb 13 15:37:23.813049 containerd[1505]: time="2025-02-13T15:37:23.813020999Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4186-1-1-3-ffab21d6e1,Uid:7c0558940e2b2733902601d47dc5a5dd,Namespace:kube-system,Attempt:0,} returns sandbox id \"bc48c3deae1daf8903d8abf4c8a766ec269b96144a599c6bd74566280bfddf85\"" Feb 13 15:37:23.820621 containerd[1505]: time="2025-02-13T15:37:23.818075729Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4186-1-1-3-ffab21d6e1,Uid:7de18ec3dc3a70d7e199d216021c25c5,Namespace:kube-system,Attempt:0,} returns sandbox id \"e8a5bfab6f2dd031865b531922fffd98221b1461ad79b802de706975dde18d52\"" Feb 13 15:37:23.822509 containerd[1505]: time="2025-02-13T15:37:23.822383658Z" level=info msg="CreateContainer within sandbox \"deca2bfa31389047a461af2b99f89a1ce616bc7d246c05bf144ccb50fdde166e\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Feb 13 
15:37:23.822727 containerd[1505]: time="2025-02-13T15:37:23.822701459Z" level=info msg="CreateContainer within sandbox \"bc48c3deae1daf8903d8abf4c8a766ec269b96144a599c6bd74566280bfddf85\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Feb 13 15:37:23.823239 kubelet[2483]: W0213 15:37:23.823166 2483 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://49.13.212.147:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 49.13.212.147:6443: connect: connection refused Feb 13 15:37:23.823703 kubelet[2483]: E0213 15:37:23.823553 2483 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://49.13.212.147:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 49.13.212.147:6443: connect: connection refused Feb 13 15:37:23.830878 containerd[1505]: time="2025-02-13T15:37:23.830840196Z" level=info msg="CreateContainer within sandbox \"e8a5bfab6f2dd031865b531922fffd98221b1461ad79b802de706975dde18d52\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Feb 13 15:37:23.847133 containerd[1505]: time="2025-02-13T15:37:23.847083270Z" level=info msg="CreateContainer within sandbox \"deca2bfa31389047a461af2b99f89a1ce616bc7d246c05bf144ccb50fdde166e\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"0c83dff5297c1253f68467900909daf42e440629c8fdf062ceeef7be2e43bb6d\"" Feb 13 15:37:23.847849 containerd[1505]: time="2025-02-13T15:37:23.847808431Z" level=info msg="StartContainer for \"0c83dff5297c1253f68467900909daf42e440629c8fdf062ceeef7be2e43bb6d\"" Feb 13 15:37:23.848818 containerd[1505]: time="2025-02-13T15:37:23.848127592Z" level=info msg="CreateContainer within sandbox \"bc48c3deae1daf8903d8abf4c8a766ec269b96144a599c6bd74566280bfddf85\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"71ee31b9fcce5768673db679913cd9fd353805c1652d85f8a7f013e9cdc28aaf\"" 
Feb 13 15:37:23.849048 containerd[1505]: time="2025-02-13T15:37:23.848929834Z" level=info msg="StartContainer for \"71ee31b9fcce5768673db679913cd9fd353805c1652d85f8a7f013e9cdc28aaf\"" Feb 13 15:37:23.854382 containerd[1505]: time="2025-02-13T15:37:23.854255205Z" level=info msg="CreateContainer within sandbox \"e8a5bfab6f2dd031865b531922fffd98221b1461ad79b802de706975dde18d52\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"451ac3dcd4c391ddfe1a63b8c970446c43508e29824c06377fbfa47a83321248\"" Feb 13 15:37:23.854819 containerd[1505]: time="2025-02-13T15:37:23.854793366Z" level=info msg="StartContainer for \"451ac3dcd4c391ddfe1a63b8c970446c43508e29824c06377fbfa47a83321248\"" Feb 13 15:37:23.883822 kubelet[2483]: W0213 15:37:23.883735 2483 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://49.13.212.147:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 49.13.212.147:6443: connect: connection refused Feb 13 15:37:23.883939 kubelet[2483]: E0213 15:37:23.883892 2483 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://49.13.212.147:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 49.13.212.147:6443: connect: connection refused Feb 13 15:37:23.891803 systemd[1]: Started cri-containerd-71ee31b9fcce5768673db679913cd9fd353805c1652d85f8a7f013e9cdc28aaf.scope - libcontainer container 71ee31b9fcce5768673db679913cd9fd353805c1652d85f8a7f013e9cdc28aaf. Feb 13 15:37:23.899785 systemd[1]: Started cri-containerd-0c83dff5297c1253f68467900909daf42e440629c8fdf062ceeef7be2e43bb6d.scope - libcontainer container 0c83dff5297c1253f68467900909daf42e440629c8fdf062ceeef7be2e43bb6d. 
Feb 13 15:37:23.909803 systemd[1]: Started cri-containerd-451ac3dcd4c391ddfe1a63b8c970446c43508e29824c06377fbfa47a83321248.scope - libcontainer container 451ac3dcd4c391ddfe1a63b8c970446c43508e29824c06377fbfa47a83321248. Feb 13 15:37:23.917239 kubelet[2483]: W0213 15:37:23.917103 2483 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://49.13.212.147:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 49.13.212.147:6443: connect: connection refused Feb 13 15:37:23.917239 kubelet[2483]: E0213 15:37:23.917176 2483 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://49.13.212.147:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 49.13.212.147:6443: connect: connection refused Feb 13 15:37:23.941162 kubelet[2483]: E0213 15:37:23.941106 2483 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://49.13.212.147:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4186-1-1-3-ffab21d6e1?timeout=10s\": dial tcp 49.13.212.147:6443: connect: connection refused" interval="1.6s" Feb 13 15:37:23.974034 containerd[1505]: time="2025-02-13T15:37:23.973978014Z" level=info msg="StartContainer for \"71ee31b9fcce5768673db679913cd9fd353805c1652d85f8a7f013e9cdc28aaf\" returns successfully" Feb 13 15:37:23.974172 containerd[1505]: time="2025-02-13T15:37:23.974135774Z" level=info msg="StartContainer for \"0c83dff5297c1253f68467900909daf42e440629c8fdf062ceeef7be2e43bb6d\" returns successfully" Feb 13 15:37:23.979914 containerd[1505]: time="2025-02-13T15:37:23.979864106Z" level=info msg="StartContainer for \"451ac3dcd4c391ddfe1a63b8c970446c43508e29824c06377fbfa47a83321248\" returns successfully" Feb 13 15:37:24.049394 kubelet[2483]: I0213 15:37:24.049331 2483 kubelet_node_status.go:73] "Attempting to register node" node="ci-4186-1-1-3-ffab21d6e1" Feb 13 
15:37:24.051024 kubelet[2483]: E0213 15:37:24.050980 2483 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://49.13.212.147:6443/api/v1/nodes\": dial tcp 49.13.212.147:6443: connect: connection refused" node="ci-4186-1-1-3-ffab21d6e1" Feb 13 15:37:25.655627 kubelet[2483]: I0213 15:37:25.653655 2483 kubelet_node_status.go:73] "Attempting to register node" node="ci-4186-1-1-3-ffab21d6e1" Feb 13 15:37:26.814291 kubelet[2483]: E0213 15:37:26.814232 2483 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4186-1-1-3-ffab21d6e1\" not found" node="ci-4186-1-1-3-ffab21d6e1" Feb 13 15:37:26.956089 kubelet[2483]: I0213 15:37:26.955929 2483 kubelet_node_status.go:76] "Successfully registered node" node="ci-4186-1-1-3-ffab21d6e1" Feb 13 15:37:27.529395 kubelet[2483]: I0213 15:37:27.529106 2483 apiserver.go:52] "Watching apiserver" Feb 13 15:37:27.534256 kubelet[2483]: I0213 15:37:27.534219 2483 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Feb 13 15:37:29.074893 systemd[1]: Reloading requested from client PID 2756 ('systemctl') (unit session-7.scope)... Feb 13 15:37:29.074908 systemd[1]: Reloading... Feb 13 15:37:29.172617 zram_generator::config[2796]: No configuration found. Feb 13 15:37:29.273830 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 15:37:29.357082 systemd[1]: Reloading finished in 281 ms. 
Feb 13 15:37:29.404488 kubelet[2483]: E0213 15:37:29.403808 2483 event.go:319] "Unable to write event (broadcaster is shut down)" event="&Event{ObjectMeta:{ci-4186-1-1-3-ffab21d6e1.1823ce9a243312bd default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4186-1-1-3-ffab21d6e1,UID:ci-4186-1-1-3-ffab21d6e1,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4186-1-1-3-ffab21d6e1,},FirstTimestamp:2025-02-13 15:37:22.522755773 +0000 UTC m=+0.891712413,LastTimestamp:2025-02-13 15:37:22.522755773 +0000 UTC m=+0.891712413,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4186-1-1-3-ffab21d6e1,}" Feb 13 15:37:29.404985 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:37:29.420274 systemd[1]: kubelet.service: Deactivated successfully. Feb 13 15:37:29.420559 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:37:29.421770 systemd[1]: kubelet.service: Consumed 1.311s CPU time, 113.0M memory peak, 0B memory swap peak. Feb 13 15:37:29.434199 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:37:29.564800 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:37:29.580499 (kubelet)[2841]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 13 15:37:29.640565 kubelet[2841]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 15:37:29.640565 kubelet[2841]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. 
Image garbage collector will get sandbox image information from CRI. Feb 13 15:37:29.640565 kubelet[2841]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 15:37:29.640565 kubelet[2841]: I0213 15:37:29.637776 2841 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 13 15:37:29.646015 kubelet[2841]: I0213 15:37:29.643698 2841 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Feb 13 15:37:29.646015 kubelet[2841]: I0213 15:37:29.643724 2841 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 15:37:29.646015 kubelet[2841]: I0213 15:37:29.643985 2841 server.go:927] "Client rotation is on, will bootstrap in background" Feb 13 15:37:29.646015 kubelet[2841]: I0213 15:37:29.645540 2841 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Feb 13 15:37:29.648575 kubelet[2841]: I0213 15:37:29.648527 2841 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 15:37:29.655088 kubelet[2841]: I0213 15:37:29.655037 2841 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 13 15:37:29.655348 kubelet[2841]: I0213 15:37:29.655302 2841 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 15:37:29.655650 kubelet[2841]: I0213 15:37:29.655341 2841 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4186-1-1-3-ffab21d6e1","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Feb 13 15:37:29.655650 kubelet[2841]: I0213 15:37:29.655628 2841 topology_manager.go:138] "Creating topology manager with none policy" Feb 
13 15:37:29.655650 kubelet[2841]: I0213 15:37:29.655638 2841 container_manager_linux.go:301] "Creating device plugin manager" Feb 13 15:37:29.655819 kubelet[2841]: I0213 15:37:29.655687 2841 state_mem.go:36] "Initialized new in-memory state store" Feb 13 15:37:29.655819 kubelet[2841]: I0213 15:37:29.655811 2841 kubelet.go:400] "Attempting to sync node with API server" Feb 13 15:37:29.655865 kubelet[2841]: I0213 15:37:29.655824 2841 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 13 15:37:29.655865 kubelet[2841]: I0213 15:37:29.655855 2841 kubelet.go:312] "Adding apiserver pod source" Feb 13 15:37:29.655910 kubelet[2841]: I0213 15:37:29.655865 2841 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 13 15:37:29.661375 kubelet[2841]: I0213 15:37:29.661345 2841 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Feb 13 15:37:29.661731 kubelet[2841]: I0213 15:37:29.661716 2841 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 13 15:37:29.662987 kubelet[2841]: I0213 15:37:29.662739 2841 server.go:1264] "Started kubelet" Feb 13 15:37:29.663979 kubelet[2841]: I0213 15:37:29.663823 2841 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Feb 13 15:37:29.664806 kubelet[2841]: I0213 15:37:29.664529 2841 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 13 15:37:29.665657 kubelet[2841]: I0213 15:37:29.665405 2841 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 13 15:37:29.665921 kubelet[2841]: I0213 15:37:29.664941 2841 server.go:455] "Adding debug handlers to kubelet server" Feb 13 15:37:29.666211 kubelet[2841]: I0213 15:37:29.666108 2841 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 15:37:29.680378 kubelet[2841]: I0213 15:37:29.679745 2841 
volume_manager.go:291] "Starting Kubelet Volume Manager" Feb 13 15:37:29.682245 kubelet[2841]: I0213 15:37:29.682204 2841 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Feb 13 15:37:29.682397 kubelet[2841]: I0213 15:37:29.682379 2841 reconciler.go:26] "Reconciler: start to sync state" Feb 13 15:37:29.688610 kubelet[2841]: I0213 15:37:29.686664 2841 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 13 15:37:29.688610 kubelet[2841]: I0213 15:37:29.687860 2841 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Feb 13 15:37:29.688610 kubelet[2841]: I0213 15:37:29.687897 2841 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 13 15:37:29.688610 kubelet[2841]: I0213 15:37:29.687914 2841 kubelet.go:2337] "Starting kubelet main sync loop" Feb 13 15:37:29.688610 kubelet[2841]: E0213 15:37:29.687954 2841 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 13 15:37:29.699989 kubelet[2841]: I0213 15:37:29.699940 2841 factory.go:221] Registration of the systemd container factory successfully Feb 13 15:37:29.700119 kubelet[2841]: I0213 15:37:29.700063 2841 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Feb 13 15:37:29.704673 kubelet[2841]: I0213 15:37:29.704646 2841 factory.go:221] Registration of the containerd container factory successfully Feb 13 15:37:29.765522 kubelet[2841]: I0213 15:37:29.765493 2841 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 13 15:37:29.765743 kubelet[2841]: I0213 15:37:29.765726 2841 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 13 15:37:29.765889 kubelet[2841]: I0213 15:37:29.765879 2841 state_mem.go:36] "Initialized new in-memory state store" Feb 13 
15:37:29.766132 kubelet[2841]: I0213 15:37:29.766115 2841 state_mem.go:88] "Updated default CPUSet" cpuSet="" Feb 13 15:37:29.766243 kubelet[2841]: I0213 15:37:29.766215 2841 state_mem.go:96] "Updated CPUSet assignments" assignments={} Feb 13 15:37:29.766294 kubelet[2841]: I0213 15:37:29.766287 2841 policy_none.go:49] "None policy: Start" Feb 13 15:37:29.767685 kubelet[2841]: I0213 15:37:29.767213 2841 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 13 15:37:29.767685 kubelet[2841]: I0213 15:37:29.767243 2841 state_mem.go:35] "Initializing new in-memory state store" Feb 13 15:37:29.767685 kubelet[2841]: I0213 15:37:29.767401 2841 state_mem.go:75] "Updated machine memory state" Feb 13 15:37:29.772696 kubelet[2841]: I0213 15:37:29.772652 2841 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 15:37:29.772903 kubelet[2841]: I0213 15:37:29.772865 2841 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 13 15:37:29.773019 kubelet[2841]: I0213 15:37:29.772988 2841 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 13 15:37:29.787615 kubelet[2841]: I0213 15:37:29.787563 2841 kubelet_node_status.go:73] "Attempting to register node" node="ci-4186-1-1-3-ffab21d6e1" Feb 13 15:37:29.790705 kubelet[2841]: I0213 15:37:29.788599 2841 topology_manager.go:215] "Topology Admit Handler" podUID="f91082d6a0cdefb7bf1bb55bee5c4813" podNamespace="kube-system" podName="kube-apiserver-ci-4186-1-1-3-ffab21d6e1" Feb 13 15:37:29.790705 kubelet[2841]: I0213 15:37:29.788859 2841 topology_manager.go:215] "Topology Admit Handler" podUID="7de18ec3dc3a70d7e199d216021c25c5" podNamespace="kube-system" podName="kube-controller-manager-ci-4186-1-1-3-ffab21d6e1" Feb 13 15:37:29.790705 kubelet[2841]: I0213 15:37:29.788903 2841 topology_manager.go:215] "Topology Admit Handler" podUID="7c0558940e2b2733902601d47dc5a5dd" podNamespace="kube-system" 
podName="kube-scheduler-ci-4186-1-1-3-ffab21d6e1" Feb 13 15:37:29.801276 kubelet[2841]: E0213 15:37:29.800042 2841 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4186-1-1-3-ffab21d6e1\" already exists" pod="kube-system/kube-apiserver-ci-4186-1-1-3-ffab21d6e1" Feb 13 15:37:29.804117 kubelet[2841]: I0213 15:37:29.804088 2841 kubelet_node_status.go:112] "Node was previously registered" node="ci-4186-1-1-3-ffab21d6e1" Feb 13 15:37:29.804993 kubelet[2841]: I0213 15:37:29.804972 2841 kubelet_node_status.go:76] "Successfully registered node" node="ci-4186-1-1-3-ffab21d6e1" Feb 13 15:37:29.806684 kubelet[2841]: E0213 15:37:29.806308 2841 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ci-4186-1-1-3-ffab21d6e1\" already exists" pod="kube-system/kube-scheduler-ci-4186-1-1-3-ffab21d6e1" Feb 13 15:37:29.983742 kubelet[2841]: I0213 15:37:29.983514 2841 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7de18ec3dc3a70d7e199d216021c25c5-ca-certs\") pod \"kube-controller-manager-ci-4186-1-1-3-ffab21d6e1\" (UID: \"7de18ec3dc3a70d7e199d216021c25c5\") " pod="kube-system/kube-controller-manager-ci-4186-1-1-3-ffab21d6e1" Feb 13 15:37:29.983742 kubelet[2841]: I0213 15:37:29.983585 2841 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/7de18ec3dc3a70d7e199d216021c25c5-flexvolume-dir\") pod \"kube-controller-manager-ci-4186-1-1-3-ffab21d6e1\" (UID: \"7de18ec3dc3a70d7e199d216021c25c5\") " pod="kube-system/kube-controller-manager-ci-4186-1-1-3-ffab21d6e1" Feb 13 15:37:29.983742 kubelet[2841]: I0213 15:37:29.983629 2841 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7de18ec3dc3a70d7e199d216021c25c5-k8s-certs\") pod 
\"kube-controller-manager-ci-4186-1-1-3-ffab21d6e1\" (UID: \"7de18ec3dc3a70d7e199d216021c25c5\") " pod="kube-system/kube-controller-manager-ci-4186-1-1-3-ffab21d6e1" Feb 13 15:37:29.983742 kubelet[2841]: I0213 15:37:29.983653 2841 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7de18ec3dc3a70d7e199d216021c25c5-kubeconfig\") pod \"kube-controller-manager-ci-4186-1-1-3-ffab21d6e1\" (UID: \"7de18ec3dc3a70d7e199d216021c25c5\") " pod="kube-system/kube-controller-manager-ci-4186-1-1-3-ffab21d6e1" Feb 13 15:37:29.983742 kubelet[2841]: I0213 15:37:29.983677 2841 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7de18ec3dc3a70d7e199d216021c25c5-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4186-1-1-3-ffab21d6e1\" (UID: \"7de18ec3dc3a70d7e199d216021c25c5\") " pod="kube-system/kube-controller-manager-ci-4186-1-1-3-ffab21d6e1" Feb 13 15:37:29.984812 kubelet[2841]: I0213 15:37:29.983703 2841 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f91082d6a0cdefb7bf1bb55bee5c4813-ca-certs\") pod \"kube-apiserver-ci-4186-1-1-3-ffab21d6e1\" (UID: \"f91082d6a0cdefb7bf1bb55bee5c4813\") " pod="kube-system/kube-apiserver-ci-4186-1-1-3-ffab21d6e1" Feb 13 15:37:29.984812 kubelet[2841]: I0213 15:37:29.983735 2841 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f91082d6a0cdefb7bf1bb55bee5c4813-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4186-1-1-3-ffab21d6e1\" (UID: \"f91082d6a0cdefb7bf1bb55bee5c4813\") " pod="kube-system/kube-apiserver-ci-4186-1-1-3-ffab21d6e1" Feb 13 15:37:29.984812 kubelet[2841]: I0213 15:37:29.983755 2841 
reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f91082d6a0cdefb7bf1bb55bee5c4813-k8s-certs\") pod \"kube-apiserver-ci-4186-1-1-3-ffab21d6e1\" (UID: \"f91082d6a0cdefb7bf1bb55bee5c4813\") " pod="kube-system/kube-apiserver-ci-4186-1-1-3-ffab21d6e1" Feb 13 15:37:29.984812 kubelet[2841]: I0213 15:37:29.983787 2841 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7c0558940e2b2733902601d47dc5a5dd-kubeconfig\") pod \"kube-scheduler-ci-4186-1-1-3-ffab21d6e1\" (UID: \"7c0558940e2b2733902601d47dc5a5dd\") " pod="kube-system/kube-scheduler-ci-4186-1-1-3-ffab21d6e1" Feb 13 15:37:30.087853 sudo[2874]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Feb 13 15:37:30.088247 sudo[2874]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Feb 13 15:37:30.555001 sudo[2874]: pam_unix(sudo:session): session closed for user root Feb 13 15:37:30.661657 kubelet[2841]: I0213 15:37:30.661437 2841 apiserver.go:52] "Watching apiserver" Feb 13 15:37:30.682999 kubelet[2841]: I0213 15:37:30.682894 2841 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Feb 13 15:37:30.754993 kubelet[2841]: E0213 15:37:30.754893 2841 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4186-1-1-3-ffab21d6e1\" already exists" pod="kube-system/kube-apiserver-ci-4186-1-1-3-ffab21d6e1" Feb 13 15:37:30.782841 kubelet[2841]: I0213 15:37:30.782173 2841 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4186-1-1-3-ffab21d6e1" podStartSLOduration=1.7821219 podStartE2EDuration="1.7821219s" podCreationTimestamp="2025-02-13 15:37:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 
+0000 UTC" observedRunningTime="2025-02-13 15:37:30.780765054 +0000 UTC m=+1.195511574" watchObservedRunningTime="2025-02-13 15:37:30.7821219 +0000 UTC m=+1.196868420" Feb 13 15:37:30.803403 kubelet[2841]: I0213 15:37:30.803269 2841 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4186-1-1-3-ffab21d6e1" podStartSLOduration=1.8032413539999999 podStartE2EDuration="1.803241354s" podCreationTimestamp="2025-02-13 15:37:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:37:30.791806303 +0000 UTC m=+1.206552823" watchObservedRunningTime="2025-02-13 15:37:30.803241354 +0000 UTC m=+1.217987914" Feb 13 15:37:30.817940 kubelet[2841]: I0213 15:37:30.817326 2841 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4186-1-1-3-ffab21d6e1" podStartSLOduration=1.817308616 podStartE2EDuration="1.817308616s" podCreationTimestamp="2025-02-13 15:37:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:37:30.80448624 +0000 UTC m=+1.219232800" watchObservedRunningTime="2025-02-13 15:37:30.817308616 +0000 UTC m=+1.232055096" Feb 13 15:37:32.543408 sudo[1896]: pam_unix(sudo:session): session closed for user root Feb 13 15:37:32.702343 sshd[1895]: Connection closed by 139.178.89.65 port 57030 Feb 13 15:37:32.703689 sshd-session[1893]: pam_unix(sshd:session): session closed for user core Feb 13 15:37:32.710029 systemd[1]: sshd@6-49.13.212.147:22-139.178.89.65:57030.service: Deactivated successfully. Feb 13 15:37:32.714063 systemd[1]: session-7.scope: Deactivated successfully. Feb 13 15:37:32.715768 systemd[1]: session-7.scope: Consumed 7.243s CPU time, 187.6M memory peak, 0B memory swap peak. Feb 13 15:37:32.716805 systemd-logind[1487]: Session 7 logged out. 
Waiting for processes to exit. Feb 13 15:37:32.718393 systemd-logind[1487]: Removed session 7. Feb 13 15:37:43.705623 kubelet[2841]: I0213 15:37:43.704945 2841 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Feb 13 15:37:43.706086 containerd[1505]: time="2025-02-13T15:37:43.705787347Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Feb 13 15:37:43.706716 kubelet[2841]: I0213 15:37:43.706577 2841 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Feb 13 15:37:44.700761 kubelet[2841]: I0213 15:37:44.700702 2841 topology_manager.go:215] "Topology Admit Handler" podUID="ee97801b-24fc-457d-8328-9429373f5463" podNamespace="kube-system" podName="kube-proxy-ttsd2" Feb 13 15:37:44.711113 systemd[1]: Created slice kubepods-besteffort-podee97801b_24fc_457d_8328_9429373f5463.slice - libcontainer container kubepods-besteffort-podee97801b_24fc_457d_8328_9429373f5463.slice. Feb 13 15:37:44.723537 kubelet[2841]: I0213 15:37:44.722705 2841 topology_manager.go:215] "Topology Admit Handler" podUID="954e233d-23e5-4ebe-8a22-329fae13f492" podNamespace="kube-system" podName="cilium-wptrj" Feb 13 15:37:44.731683 systemd[1]: Created slice kubepods-burstable-pod954e233d_23e5_4ebe_8a22_329fae13f492.slice - libcontainer container kubepods-burstable-pod954e233d_23e5_4ebe_8a22_329fae13f492.slice. 
Feb 13 15:37:44.785214 kubelet[2841]: I0213 15:37:44.785149 2841 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/954e233d-23e5-4ebe-8a22-329fae13f492-xtables-lock\") pod \"cilium-wptrj\" (UID: \"954e233d-23e5-4ebe-8a22-329fae13f492\") " pod="kube-system/cilium-wptrj" Feb 13 15:37:44.785214 kubelet[2841]: I0213 15:37:44.785208 2841 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/954e233d-23e5-4ebe-8a22-329fae13f492-host-proc-sys-kernel\") pod \"cilium-wptrj\" (UID: \"954e233d-23e5-4ebe-8a22-329fae13f492\") " pod="kube-system/cilium-wptrj" Feb 13 15:37:44.785374 kubelet[2841]: I0213 15:37:44.785228 2841 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/954e233d-23e5-4ebe-8a22-329fae13f492-hostproc\") pod \"cilium-wptrj\" (UID: \"954e233d-23e5-4ebe-8a22-329fae13f492\") " pod="kube-system/cilium-wptrj" Feb 13 15:37:44.785374 kubelet[2841]: I0213 15:37:44.785255 2841 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/954e233d-23e5-4ebe-8a22-329fae13f492-cni-path\") pod \"cilium-wptrj\" (UID: \"954e233d-23e5-4ebe-8a22-329fae13f492\") " pod="kube-system/cilium-wptrj" Feb 13 15:37:44.785374 kubelet[2841]: I0213 15:37:44.785271 2841 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/954e233d-23e5-4ebe-8a22-329fae13f492-host-proc-sys-net\") pod \"cilium-wptrj\" (UID: \"954e233d-23e5-4ebe-8a22-329fae13f492\") " pod="kube-system/cilium-wptrj" Feb 13 15:37:44.785374 kubelet[2841]: I0213 15:37:44.785285 2841 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/954e233d-23e5-4ebe-8a22-329fae13f492-cilium-run\") pod \"cilium-wptrj\" (UID: \"954e233d-23e5-4ebe-8a22-329fae13f492\") " pod="kube-system/cilium-wptrj" Feb 13 15:37:44.785374 kubelet[2841]: I0213 15:37:44.785301 2841 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/954e233d-23e5-4ebe-8a22-329fae13f492-lib-modules\") pod \"cilium-wptrj\" (UID: \"954e233d-23e5-4ebe-8a22-329fae13f492\") " pod="kube-system/cilium-wptrj" Feb 13 15:37:44.785374 kubelet[2841]: I0213 15:37:44.785315 2841 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/954e233d-23e5-4ebe-8a22-329fae13f492-clustermesh-secrets\") pod \"cilium-wptrj\" (UID: \"954e233d-23e5-4ebe-8a22-329fae13f492\") " pod="kube-system/cilium-wptrj" Feb 13 15:37:44.785565 kubelet[2841]: I0213 15:37:44.785339 2841 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6b4f2\" (UniqueName: \"kubernetes.io/projected/ee97801b-24fc-457d-8328-9429373f5463-kube-api-access-6b4f2\") pod \"kube-proxy-ttsd2\" (UID: \"ee97801b-24fc-457d-8328-9429373f5463\") " pod="kube-system/kube-proxy-ttsd2" Feb 13 15:37:44.785565 kubelet[2841]: I0213 15:37:44.785356 2841 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/954e233d-23e5-4ebe-8a22-329fae13f492-cilium-config-path\") pod \"cilium-wptrj\" (UID: \"954e233d-23e5-4ebe-8a22-329fae13f492\") " pod="kube-system/cilium-wptrj" Feb 13 15:37:44.785565 kubelet[2841]: I0213 15:37:44.785370 2841 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: 
\"kubernetes.io/configmap/ee97801b-24fc-457d-8328-9429373f5463-kube-proxy\") pod \"kube-proxy-ttsd2\" (UID: \"ee97801b-24fc-457d-8328-9429373f5463\") " pod="kube-system/kube-proxy-ttsd2" Feb 13 15:37:44.785565 kubelet[2841]: I0213 15:37:44.785385 2841 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/954e233d-23e5-4ebe-8a22-329fae13f492-hubble-tls\") pod \"cilium-wptrj\" (UID: \"954e233d-23e5-4ebe-8a22-329fae13f492\") " pod="kube-system/cilium-wptrj" Feb 13 15:37:44.785565 kubelet[2841]: I0213 15:37:44.785401 2841 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ee97801b-24fc-457d-8328-9429373f5463-xtables-lock\") pod \"kube-proxy-ttsd2\" (UID: \"ee97801b-24fc-457d-8328-9429373f5463\") " pod="kube-system/kube-proxy-ttsd2" Feb 13 15:37:44.785738 kubelet[2841]: I0213 15:37:44.785417 2841 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ee97801b-24fc-457d-8328-9429373f5463-lib-modules\") pod \"kube-proxy-ttsd2\" (UID: \"ee97801b-24fc-457d-8328-9429373f5463\") " pod="kube-system/kube-proxy-ttsd2" Feb 13 15:37:44.785738 kubelet[2841]: I0213 15:37:44.785430 2841 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/954e233d-23e5-4ebe-8a22-329fae13f492-bpf-maps\") pod \"cilium-wptrj\" (UID: \"954e233d-23e5-4ebe-8a22-329fae13f492\") " pod="kube-system/cilium-wptrj" Feb 13 15:37:44.785738 kubelet[2841]: I0213 15:37:44.785471 2841 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2r8pb\" (UniqueName: \"kubernetes.io/projected/954e233d-23e5-4ebe-8a22-329fae13f492-kube-api-access-2r8pb\") pod \"cilium-wptrj\" (UID: 
\"954e233d-23e5-4ebe-8a22-329fae13f492\") " pod="kube-system/cilium-wptrj" Feb 13 15:37:44.785738 kubelet[2841]: I0213 15:37:44.785487 2841 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/954e233d-23e5-4ebe-8a22-329fae13f492-cilium-cgroup\") pod \"cilium-wptrj\" (UID: \"954e233d-23e5-4ebe-8a22-329fae13f492\") " pod="kube-system/cilium-wptrj" Feb 13 15:37:44.785738 kubelet[2841]: I0213 15:37:44.785505 2841 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/954e233d-23e5-4ebe-8a22-329fae13f492-etc-cni-netd\") pod \"cilium-wptrj\" (UID: \"954e233d-23e5-4ebe-8a22-329fae13f492\") " pod="kube-system/cilium-wptrj" Feb 13 15:37:44.797922 kubelet[2841]: I0213 15:37:44.797872 2841 topology_manager.go:215] "Topology Admit Handler" podUID="63551fa6-a653-407e-b2ba-80bb5f391b3f" podNamespace="kube-system" podName="cilium-operator-599987898-xrf8g" Feb 13 15:37:44.806087 systemd[1]: Created slice kubepods-besteffort-pod63551fa6_a653_407e_b2ba_80bb5f391b3f.slice - libcontainer container kubepods-besteffort-pod63551fa6_a653_407e_b2ba_80bb5f391b3f.slice. 
Feb 13 15:37:44.887050 kubelet[2841]: I0213 15:37:44.885989 2841 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wwb29\" (UniqueName: \"kubernetes.io/projected/63551fa6-a653-407e-b2ba-80bb5f391b3f-kube-api-access-wwb29\") pod \"cilium-operator-599987898-xrf8g\" (UID: \"63551fa6-a653-407e-b2ba-80bb5f391b3f\") " pod="kube-system/cilium-operator-599987898-xrf8g" Feb 13 15:37:44.889385 kubelet[2841]: I0213 15:37:44.889321 2841 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/63551fa6-a653-407e-b2ba-80bb5f391b3f-cilium-config-path\") pod \"cilium-operator-599987898-xrf8g\" (UID: \"63551fa6-a653-407e-b2ba-80bb5f391b3f\") " pod="kube-system/cilium-operator-599987898-xrf8g" Feb 13 15:37:45.021728 containerd[1505]: time="2025-02-13T15:37:45.021590146Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-ttsd2,Uid:ee97801b-24fc-457d-8328-9429373f5463,Namespace:kube-system,Attempt:0,}" Feb 13 15:37:45.037779 containerd[1505]: time="2025-02-13T15:37:45.037107951Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-wptrj,Uid:954e233d-23e5-4ebe-8a22-329fae13f492,Namespace:kube-system,Attempt:0,}" Feb 13 15:37:45.052765 containerd[1505]: time="2025-02-13T15:37:45.051748668Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:37:45.052765 containerd[1505]: time="2025-02-13T15:37:45.052313872Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:37:45.052765 containerd[1505]: time="2025-02-13T15:37:45.052328312Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:37:45.052976 containerd[1505]: time="2025-02-13T15:37:45.052805436Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:37:45.074968 systemd[1]: Started cri-containerd-9316232f4b09b95470e5403e433d8f42f87acdce1f616058aced496b655a03dc.scope - libcontainer container 9316232f4b09b95470e5403e433d8f42f87acdce1f616058aced496b655a03dc. Feb 13 15:37:45.080869 containerd[1505]: time="2025-02-13T15:37:45.080757260Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:37:45.081001 containerd[1505]: time="2025-02-13T15:37:45.080887301Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:37:45.081001 containerd[1505]: time="2025-02-13T15:37:45.080921142Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:37:45.081079 containerd[1505]: time="2025-02-13T15:37:45.081042423Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:37:45.104831 systemd[1]: Started cri-containerd-bfeacc5a83d2bdf0af8af3e3699d3c969f8e0c05599c4eccd11d3eeeeb16807d.scope - libcontainer container bfeacc5a83d2bdf0af8af3e3699d3c969f8e0c05599c4eccd11d3eeeeb16807d. 
Feb 13 15:37:45.110187 containerd[1505]: time="2025-02-13T15:37:45.110123856Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-xrf8g,Uid:63551fa6-a653-407e-b2ba-80bb5f391b3f,Namespace:kube-system,Attempt:0,}" Feb 13 15:37:45.123619 containerd[1505]: time="2025-02-13T15:37:45.123563803Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-ttsd2,Uid:ee97801b-24fc-457d-8328-9429373f5463,Namespace:kube-system,Attempt:0,} returns sandbox id \"9316232f4b09b95470e5403e433d8f42f87acdce1f616058aced496b655a03dc\"" Feb 13 15:37:45.132788 containerd[1505]: time="2025-02-13T15:37:45.132739597Z" level=info msg="CreateContainer within sandbox \"9316232f4b09b95470e5403e433d8f42f87acdce1f616058aced496b655a03dc\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Feb 13 15:37:45.155985 containerd[1505]: time="2025-02-13T15:37:45.155947903Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-wptrj,Uid:954e233d-23e5-4ebe-8a22-329fae13f492,Namespace:kube-system,Attempt:0,} returns sandbox id \"bfeacc5a83d2bdf0af8af3e3699d3c969f8e0c05599c4eccd11d3eeeeb16807d\"" Feb 13 15:37:45.159004 containerd[1505]: time="2025-02-13T15:37:45.158628364Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:37:45.159004 containerd[1505]: time="2025-02-13T15:37:45.158760165Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:37:45.159004 containerd[1505]: time="2025-02-13T15:37:45.158804606Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:37:45.160584 containerd[1505]: time="2025-02-13T15:37:45.159412490Z" level=info msg="CreateContainer within sandbox \"9316232f4b09b95470e5403e433d8f42f87acdce1f616058aced496b655a03dc\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"3ed760805247051a84f25a6104262572651b711cce8ea39333382c03e542f8e9\"" Feb 13 15:37:45.160584 containerd[1505]: time="2025-02-13T15:37:45.160339538Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:37:45.161581 containerd[1505]: time="2025-02-13T15:37:45.161533587Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Feb 13 15:37:45.161886 containerd[1505]: time="2025-02-13T15:37:45.161853110Z" level=info msg="StartContainer for \"3ed760805247051a84f25a6104262572651b711cce8ea39333382c03e542f8e9\"" Feb 13 15:37:45.187908 systemd[1]: Started cri-containerd-20ffb6cc88fe2ec252d641032ce58f02fe673152d246de827c43c8d5197d9e9a.scope - libcontainer container 20ffb6cc88fe2ec252d641032ce58f02fe673152d246de827c43c8d5197d9e9a. Feb 13 15:37:45.213894 systemd[1]: Started cri-containerd-3ed760805247051a84f25a6104262572651b711cce8ea39333382c03e542f8e9.scope - libcontainer container 3ed760805247051a84f25a6104262572651b711cce8ea39333382c03e542f8e9. 
Feb 13 15:37:45.259478 containerd[1505]: time="2025-02-13T15:37:45.259210370Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-xrf8g,Uid:63551fa6-a653-407e-b2ba-80bb5f391b3f,Namespace:kube-system,Attempt:0,} returns sandbox id \"20ffb6cc88fe2ec252d641032ce58f02fe673152d246de827c43c8d5197d9e9a\"" Feb 13 15:37:45.277130 containerd[1505]: time="2025-02-13T15:37:45.276764591Z" level=info msg="StartContainer for \"3ed760805247051a84f25a6104262572651b711cce8ea39333382c03e542f8e9\" returns successfully" Feb 13 15:37:45.801469 kubelet[2841]: I0213 15:37:45.800879 2841 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-ttsd2" podStartSLOduration=1.80085519 podStartE2EDuration="1.80085519s" podCreationTimestamp="2025-02-13 15:37:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:37:45.800531627 +0000 UTC m=+16.215278147" watchObservedRunningTime="2025-02-13 15:37:45.80085519 +0000 UTC m=+16.215601710" Feb 13 15:37:53.265281 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1760290197.mount: Deactivated successfully. 
Feb 13 15:37:54.750991 containerd[1505]: time="2025-02-13T15:37:54.750930891Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:37:54.752735 containerd[1505]: time="2025-02-13T15:37:54.752575307Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710" Feb 13 15:37:54.753645 containerd[1505]: time="2025-02-13T15:37:54.753529156Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:37:54.756146 containerd[1505]: time="2025-02-13T15:37:54.756010899Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 9.594428871s" Feb 13 15:37:54.756146 containerd[1505]: time="2025-02-13T15:37:54.756051180Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Feb 13 15:37:54.760427 containerd[1505]: time="2025-02-13T15:37:54.760173419Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Feb 13 15:37:54.761329 containerd[1505]: time="2025-02-13T15:37:54.761292709Z" level=info msg="CreateContainer within sandbox \"bfeacc5a83d2bdf0af8af3e3699d3c969f8e0c05599c4eccd11d3eeeeb16807d\" for container 
&ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 13 15:37:54.781109 containerd[1505]: time="2025-02-13T15:37:54.780925695Z" level=info msg="CreateContainer within sandbox \"bfeacc5a83d2bdf0af8af3e3699d3c969f8e0c05599c4eccd11d3eeeeb16807d\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"d9b7a99406ebf725ec1f9812df4517da72beff6868a64054a6003798934d4be7\"" Feb 13 15:37:54.781551 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3567847490.mount: Deactivated successfully. Feb 13 15:37:54.784627 containerd[1505]: time="2025-02-13T15:37:54.784271487Z" level=info msg="StartContainer for \"d9b7a99406ebf725ec1f9812df4517da72beff6868a64054a6003798934d4be7\"" Feb 13 15:37:54.825848 systemd[1]: Started cri-containerd-d9b7a99406ebf725ec1f9812df4517da72beff6868a64054a6003798934d4be7.scope - libcontainer container d9b7a99406ebf725ec1f9812df4517da72beff6868a64054a6003798934d4be7. Feb 13 15:37:54.861069 containerd[1505]: time="2025-02-13T15:37:54.860531849Z" level=info msg="StartContainer for \"d9b7a99406ebf725ec1f9812df4517da72beff6868a64054a6003798934d4be7\" returns successfully" Feb 13 15:37:54.876318 systemd[1]: cri-containerd-d9b7a99406ebf725ec1f9812df4517da72beff6868a64054a6003798934d4be7.scope: Deactivated successfully. 
Feb 13 15:37:55.083908 containerd[1505]: time="2025-02-13T15:37:55.083103368Z" level=info msg="shim disconnected" id=d9b7a99406ebf725ec1f9812df4517da72beff6868a64054a6003798934d4be7 namespace=k8s.io Feb 13 15:37:55.083908 containerd[1505]: time="2025-02-13T15:37:55.083164009Z" level=warning msg="cleaning up after shim disconnected" id=d9b7a99406ebf725ec1f9812df4517da72beff6868a64054a6003798934d4be7 namespace=k8s.io Feb 13 15:37:55.083908 containerd[1505]: time="2025-02-13T15:37:55.083173649Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 15:37:55.773700 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d9b7a99406ebf725ec1f9812df4517da72beff6868a64054a6003798934d4be7-rootfs.mount: Deactivated successfully. Feb 13 15:37:55.831204 containerd[1505]: time="2025-02-13T15:37:55.830078785Z" level=info msg="CreateContainer within sandbox \"bfeacc5a83d2bdf0af8af3e3699d3c969f8e0c05599c4eccd11d3eeeeb16807d\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Feb 13 15:37:55.857386 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount242397052.mount: Deactivated successfully. Feb 13 15:37:55.866831 containerd[1505]: time="2025-02-13T15:37:55.866776338Z" level=info msg="CreateContainer within sandbox \"bfeacc5a83d2bdf0af8af3e3699d3c969f8e0c05599c4eccd11d3eeeeb16807d\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"681c2547599e96e6db2eb84b011619f813f4ac44a3865228df618457e23e53f9\"" Feb 13 15:37:55.867769 containerd[1505]: time="2025-02-13T15:37:55.867701266Z" level=info msg="StartContainer for \"681c2547599e96e6db2eb84b011619f813f4ac44a3865228df618457e23e53f9\"" Feb 13 15:37:55.925815 systemd[1]: Started cri-containerd-681c2547599e96e6db2eb84b011619f813f4ac44a3865228df618457e23e53f9.scope - libcontainer container 681c2547599e96e6db2eb84b011619f813f4ac44a3865228df618457e23e53f9. 
Feb 13 15:37:55.973762 containerd[1505]: time="2025-02-13T15:37:55.973302281Z" level=info msg="StartContainer for \"681c2547599e96e6db2eb84b011619f813f4ac44a3865228df618457e23e53f9\" returns successfully" Feb 13 15:37:55.983748 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 13 15:37:55.984071 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Feb 13 15:37:55.984158 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Feb 13 15:37:55.989966 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Feb 13 15:37:55.990151 systemd[1]: cri-containerd-681c2547599e96e6db2eb84b011619f813f4ac44a3865228df618457e23e53f9.scope: Deactivated successfully. Feb 13 15:37:56.018900 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Feb 13 15:37:56.029848 containerd[1505]: time="2025-02-13T15:37:56.029587066Z" level=info msg="shim disconnected" id=681c2547599e96e6db2eb84b011619f813f4ac44a3865228df618457e23e53f9 namespace=k8s.io Feb 13 15:37:56.029848 containerd[1505]: time="2025-02-13T15:37:56.029711907Z" level=warning msg="cleaning up after shim disconnected" id=681c2547599e96e6db2eb84b011619f813f4ac44a3865228df618457e23e53f9 namespace=k8s.io Feb 13 15:37:56.029848 containerd[1505]: time="2025-02-13T15:37:56.029720267Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 15:37:56.663305 containerd[1505]: time="2025-02-13T15:37:56.663249278Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:37:56.664358 containerd[1505]: time="2025-02-13T15:37:56.664036686Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306" Feb 13 15:37:56.665642 containerd[1505]: time="2025-02-13T15:37:56.665295858Z" 
level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:37:56.666998 containerd[1505]: time="2025-02-13T15:37:56.666954234Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 1.906726015s" Feb 13 15:37:56.666998 containerd[1505]: time="2025-02-13T15:37:56.666996555Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Feb 13 15:37:56.673714 containerd[1505]: time="2025-02-13T15:37:56.673638179Z" level=info msg="CreateContainer within sandbox \"20ffb6cc88fe2ec252d641032ce58f02fe673152d246de827c43c8d5197d9e9a\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Feb 13 15:37:56.692631 containerd[1505]: time="2025-02-13T15:37:56.692401362Z" level=info msg="CreateContainer within sandbox \"20ffb6cc88fe2ec252d641032ce58f02fe673152d246de827c43c8d5197d9e9a\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"1d7f2d34f591bea62b57c3e331a6fb4bf9eaca23fad542c9a64f40d5fdc7d2f8\"" Feb 13 15:37:56.693375 containerd[1505]: time="2025-02-13T15:37:56.693132449Z" level=info msg="StartContainer for \"1d7f2d34f591bea62b57c3e331a6fb4bf9eaca23fad542c9a64f40d5fdc7d2f8\"" Feb 13 15:37:56.717809 systemd[1]: Started cri-containerd-1d7f2d34f591bea62b57c3e331a6fb4bf9eaca23fad542c9a64f40d5fdc7d2f8.scope - libcontainer container 1d7f2d34f591bea62b57c3e331a6fb4bf9eaca23fad542c9a64f40d5fdc7d2f8. 
Feb 13 15:37:56.749890 containerd[1505]: time="2025-02-13T15:37:56.749752481Z" level=info msg="StartContainer for \"1d7f2d34f591bea62b57c3e331a6fb4bf9eaca23fad542c9a64f40d5fdc7d2f8\" returns successfully" Feb 13 15:37:56.777547 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-681c2547599e96e6db2eb84b011619f813f4ac44a3865228df618457e23e53f9-rootfs.mount: Deactivated successfully. Feb 13 15:37:56.834170 containerd[1505]: time="2025-02-13T15:37:56.833994822Z" level=info msg="CreateContainer within sandbox \"bfeacc5a83d2bdf0af8af3e3699d3c969f8e0c05599c4eccd11d3eeeeb16807d\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Feb 13 15:37:56.862730 containerd[1505]: time="2025-02-13T15:37:56.862363338Z" level=info msg="CreateContainer within sandbox \"bfeacc5a83d2bdf0af8af3e3699d3c969f8e0c05599c4eccd11d3eeeeb16807d\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"ca7e49730d9ab721f8d4a6e49743c029f6f5f8353235ed0794bb84e5e090f024\"" Feb 13 15:37:56.864895 containerd[1505]: time="2025-02-13T15:37:56.863670711Z" level=info msg="StartContainer for \"ca7e49730d9ab721f8d4a6e49743c029f6f5f8353235ed0794bb84e5e090f024\"" Feb 13 15:37:56.903729 kubelet[2841]: I0213 15:37:56.903666 2841 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-599987898-xrf8g" podStartSLOduration=1.496720001 podStartE2EDuration="12.90364478s" podCreationTimestamp="2025-02-13 15:37:44 +0000 UTC" firstStartedPulling="2025-02-13 15:37:45.261146666 +0000 UTC m=+15.675893186" lastFinishedPulling="2025-02-13 15:37:56.668071445 +0000 UTC m=+27.082817965" observedRunningTime="2025-02-13 15:37:56.852040917 +0000 UTC m=+27.266787437" watchObservedRunningTime="2025-02-13 15:37:56.90364478 +0000 UTC m=+27.318391340" Feb 13 15:37:56.922771 systemd[1]: Started cri-containerd-ca7e49730d9ab721f8d4a6e49743c029f6f5f8353235ed0794bb84e5e090f024.scope - libcontainer container 
ca7e49730d9ab721f8d4a6e49743c029f6f5f8353235ed0794bb84e5e090f024. Feb 13 15:37:56.990320 containerd[1505]: time="2025-02-13T15:37:56.989931621Z" level=info msg="StartContainer for \"ca7e49730d9ab721f8d4a6e49743c029f6f5f8353235ed0794bb84e5e090f024\" returns successfully" Feb 13 15:37:57.008785 systemd[1]: cri-containerd-ca7e49730d9ab721f8d4a6e49743c029f6f5f8353235ed0794bb84e5e090f024.scope: Deactivated successfully. Feb 13 15:37:57.091290 containerd[1505]: time="2025-02-13T15:37:57.091066297Z" level=info msg="shim disconnected" id=ca7e49730d9ab721f8d4a6e49743c029f6f5f8353235ed0794bb84e5e090f024 namespace=k8s.io Feb 13 15:37:57.091290 containerd[1505]: time="2025-02-13T15:37:57.091122138Z" level=warning msg="cleaning up after shim disconnected" id=ca7e49730d9ab721f8d4a6e49743c029f6f5f8353235ed0794bb84e5e090f024 namespace=k8s.io Feb 13 15:37:57.091290 containerd[1505]: time="2025-02-13T15:37:57.091131538Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 15:37:57.777884 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ca7e49730d9ab721f8d4a6e49743c029f6f5f8353235ed0794bb84e5e090f024-rootfs.mount: Deactivated successfully. 
Feb 13 15:37:57.843860 containerd[1505]: time="2025-02-13T15:37:57.843784207Z" level=info msg="CreateContainer within sandbox \"bfeacc5a83d2bdf0af8af3e3699d3c969f8e0c05599c4eccd11d3eeeeb16807d\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Feb 13 15:37:57.864653 containerd[1505]: time="2025-02-13T15:37:57.863950606Z" level=info msg="CreateContainer within sandbox \"bfeacc5a83d2bdf0af8af3e3699d3c969f8e0c05599c4eccd11d3eeeeb16807d\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"f83780da1a30d42c50172cd1912d6ca818608ac8074a9fa8b6c89e310f5a2a37\"" Feb 13 15:37:57.866994 containerd[1505]: time="2025-02-13T15:37:57.866229789Z" level=info msg="StartContainer for \"f83780da1a30d42c50172cd1912d6ca818608ac8074a9fa8b6c89e310f5a2a37\"" Feb 13 15:37:57.905859 systemd[1]: Started cri-containerd-f83780da1a30d42c50172cd1912d6ca818608ac8074a9fa8b6c89e310f5a2a37.scope - libcontainer container f83780da1a30d42c50172cd1912d6ca818608ac8074a9fa8b6c89e310f5a2a37. Feb 13 15:37:57.936079 systemd[1]: cri-containerd-f83780da1a30d42c50172cd1912d6ca818608ac8074a9fa8b6c89e310f5a2a37.scope: Deactivated successfully. 
Feb 13 15:37:57.940504 containerd[1505]: time="2025-02-13T15:37:57.939802755Z" level=info msg="StartContainer for \"f83780da1a30d42c50172cd1912d6ca818608ac8074a9fa8b6c89e310f5a2a37\" returns successfully" Feb 13 15:37:57.965528 containerd[1505]: time="2025-02-13T15:37:57.965416528Z" level=info msg="shim disconnected" id=f83780da1a30d42c50172cd1912d6ca818608ac8074a9fa8b6c89e310f5a2a37 namespace=k8s.io Feb 13 15:37:57.966131 containerd[1505]: time="2025-02-13T15:37:57.966003453Z" level=warning msg="cleaning up after shim disconnected" id=f83780da1a30d42c50172cd1912d6ca818608ac8074a9fa8b6c89e310f5a2a37 namespace=k8s.io Feb 13 15:37:57.966581 containerd[1505]: time="2025-02-13T15:37:57.966315217Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 15:37:58.777533 systemd[1]: run-containerd-runc-k8s.io-f83780da1a30d42c50172cd1912d6ca818608ac8074a9fa8b6c89e310f5a2a37-runc.jjm7z2.mount: Deactivated successfully. Feb 13 15:37:58.778753 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f83780da1a30d42c50172cd1912d6ca818608ac8074a9fa8b6c89e310f5a2a37-rootfs.mount: Deactivated successfully. 
Feb 13 15:37:58.853063 containerd[1505]: time="2025-02-13T15:37:58.852904874Z" level=info msg="CreateContainer within sandbox \"bfeacc5a83d2bdf0af8af3e3699d3c969f8e0c05599c4eccd11d3eeeeb16807d\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Feb 13 15:37:58.884513 containerd[1505]: time="2025-02-13T15:37:58.883137457Z" level=info msg="CreateContainer within sandbox \"bfeacc5a83d2bdf0af8af3e3699d3c969f8e0c05599c4eccd11d3eeeeb16807d\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"f57cb9d3a007d62ede304012058dd906c3c97c51d64e22fe7c26885075fca085\"" Feb 13 15:37:58.884513 containerd[1505]: time="2025-02-13T15:37:58.883768023Z" level=info msg="StartContainer for \"f57cb9d3a007d62ede304012058dd906c3c97c51d64e22fe7c26885075fca085\"" Feb 13 15:37:58.925809 systemd[1]: Started cri-containerd-f57cb9d3a007d62ede304012058dd906c3c97c51d64e22fe7c26885075fca085.scope - libcontainer container f57cb9d3a007d62ede304012058dd906c3c97c51d64e22fe7c26885075fca085. 
Feb 13 15:37:58.963498 containerd[1505]: time="2025-02-13T15:37:58.963367899Z" level=info msg="StartContainer for \"f57cb9d3a007d62ede304012058dd906c3c97c51d64e22fe7c26885075fca085\" returns successfully" Feb 13 15:37:59.092825 kubelet[2841]: I0213 15:37:59.092717 2841 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Feb 13 15:37:59.121535 kubelet[2841]: I0213 15:37:59.121484 2841 topology_manager.go:215] "Topology Admit Handler" podUID="09d2cdf9-1f64-4373-bb89-837d5cfeb94e" podNamespace="kube-system" podName="coredns-7db6d8ff4d-bscjt" Feb 13 15:37:59.127698 kubelet[2841]: I0213 15:37:59.126915 2841 topology_manager.go:215] "Topology Admit Handler" podUID="c714186a-d2d0-4a1e-aa06-71d7a065d6b8" podNamespace="kube-system" podName="coredns-7db6d8ff4d-9d2dj" Feb 13 15:37:59.132833 systemd[1]: Created slice kubepods-burstable-pod09d2cdf9_1f64_4373_bb89_837d5cfeb94e.slice - libcontainer container kubepods-burstable-pod09d2cdf9_1f64_4373_bb89_837d5cfeb94e.slice. Feb 13 15:37:59.142816 systemd[1]: Created slice kubepods-burstable-podc714186a_d2d0_4a1e_aa06_71d7a065d6b8.slice - libcontainer container kubepods-burstable-podc714186a_d2d0_4a1e_aa06_71d7a065d6b8.slice. 
Feb 13 15:37:59.195917 kubelet[2841]: I0213 15:37:59.195721 2841 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-25lw4\" (UniqueName: \"kubernetes.io/projected/c714186a-d2d0-4a1e-aa06-71d7a065d6b8-kube-api-access-25lw4\") pod \"coredns-7db6d8ff4d-9d2dj\" (UID: \"c714186a-d2d0-4a1e-aa06-71d7a065d6b8\") " pod="kube-system/coredns-7db6d8ff4d-9d2dj" Feb 13 15:37:59.195917 kubelet[2841]: I0213 15:37:59.195799 2841 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/09d2cdf9-1f64-4373-bb89-837d5cfeb94e-config-volume\") pod \"coredns-7db6d8ff4d-bscjt\" (UID: \"09d2cdf9-1f64-4373-bb89-837d5cfeb94e\") " pod="kube-system/coredns-7db6d8ff4d-bscjt" Feb 13 15:37:59.195917 kubelet[2841]: I0213 15:37:59.195836 2841 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c714186a-d2d0-4a1e-aa06-71d7a065d6b8-config-volume\") pod \"coredns-7db6d8ff4d-9d2dj\" (UID: \"c714186a-d2d0-4a1e-aa06-71d7a065d6b8\") " pod="kube-system/coredns-7db6d8ff4d-9d2dj" Feb 13 15:37:59.195917 kubelet[2841]: I0213 15:37:59.195859 2841 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gpmb8\" (UniqueName: \"kubernetes.io/projected/09d2cdf9-1f64-4373-bb89-837d5cfeb94e-kube-api-access-gpmb8\") pod \"coredns-7db6d8ff4d-bscjt\" (UID: \"09d2cdf9-1f64-4373-bb89-837d5cfeb94e\") " pod="kube-system/coredns-7db6d8ff4d-bscjt" Feb 13 15:37:59.438959 containerd[1505]: time="2025-02-13T15:37:59.438555542Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-bscjt,Uid:09d2cdf9-1f64-4373-bb89-837d5cfeb94e,Namespace:kube-system,Attempt:0,}" Feb 13 15:37:59.449618 containerd[1505]: time="2025-02-13T15:37:59.449459932Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-7db6d8ff4d-9d2dj,Uid:c714186a-d2d0-4a1e-aa06-71d7a065d6b8,Namespace:kube-system,Attempt:0,}" Feb 13 15:37:59.880656 kubelet[2841]: I0213 15:37:59.880549 2841 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-wptrj" podStartSLOduration=6.281341701 podStartE2EDuration="15.880525653s" podCreationTimestamp="2025-02-13 15:37:44 +0000 UTC" firstStartedPulling="2025-02-13 15:37:45.15813772 +0000 UTC m=+15.572884240" lastFinishedPulling="2025-02-13 15:37:54.757321672 +0000 UTC m=+25.172068192" observedRunningTime="2025-02-13 15:37:59.878733395 +0000 UTC m=+30.293479915" watchObservedRunningTime="2025-02-13 15:37:59.880525653 +0000 UTC m=+30.295272213" Feb 13 15:38:01.161542 systemd-networkd[1387]: cilium_host: Link UP Feb 13 15:38:01.163638 systemd-networkd[1387]: cilium_net: Link UP Feb 13 15:38:01.164475 systemd-networkd[1387]: cilium_net: Gained carrier Feb 13 15:38:01.165191 systemd-networkd[1387]: cilium_host: Gained carrier Feb 13 15:38:01.292862 systemd-networkd[1387]: cilium_vxlan: Link UP Feb 13 15:38:01.292868 systemd-networkd[1387]: cilium_vxlan: Gained carrier Feb 13 15:38:01.366922 systemd-networkd[1387]: cilium_host: Gained IPv6LL Feb 13 15:38:01.608789 kernel: NET: Registered PF_ALG protocol family Feb 13 15:38:02.214964 systemd-networkd[1387]: cilium_net: Gained IPv6LL Feb 13 15:38:02.373694 systemd-networkd[1387]: lxc_health: Link UP Feb 13 15:38:02.379359 systemd-networkd[1387]: lxc_health: Gained carrier Feb 13 15:38:02.526272 systemd-networkd[1387]: lxc0c45c3f072d7: Link UP Feb 13 15:38:02.532296 kernel: eth0: renamed from tmpd8cad Feb 13 15:38:02.534911 systemd-networkd[1387]: lxc0c45c3f072d7: Gained carrier Feb 13 15:38:03.003658 systemd-networkd[1387]: lxc11b40122f3d9: Link UP Feb 13 15:38:03.012636 kernel: eth0: renamed from tmp79a3a Feb 13 15:38:03.013508 systemd-networkd[1387]: lxc11b40122f3d9: Gained carrier Feb 13 15:38:03.046820 systemd-networkd[1387]: cilium_vxlan: 
Gained IPv6LL Feb 13 15:38:04.070900 systemd-networkd[1387]: lxc_health: Gained IPv6LL Feb 13 15:38:04.072729 systemd-networkd[1387]: lxc11b40122f3d9: Gained IPv6LL Feb 13 15:38:04.327497 systemd-networkd[1387]: lxc0c45c3f072d7: Gained IPv6LL Feb 13 15:38:06.717549 containerd[1505]: time="2025-02-13T15:38:06.717296063Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:38:06.719824 containerd[1505]: time="2025-02-13T15:38:06.719628488Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:38:06.719824 containerd[1505]: time="2025-02-13T15:38:06.719659568Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:38:06.719824 containerd[1505]: time="2025-02-13T15:38:06.719751969Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:38:06.752808 systemd[1]: run-containerd-runc-k8s.io-79a3ac9b01f0d1777c5ac5a0df3ed6cfb140244c552b6014631a0c3171796451-runc.W1e23x.mount: Deactivated successfully. Feb 13 15:38:06.768989 systemd[1]: Started cri-containerd-79a3ac9b01f0d1777c5ac5a0df3ed6cfb140244c552b6014631a0c3171796451.scope - libcontainer container 79a3ac9b01f0d1777c5ac5a0df3ed6cfb140244c552b6014631a0c3171796451. 
Feb 13 15:38:06.818723 containerd[1505]: time="2025-02-13T15:38:06.818654484Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-bscjt,Uid:09d2cdf9-1f64-4373-bb89-837d5cfeb94e,Namespace:kube-system,Attempt:0,} returns sandbox id \"79a3ac9b01f0d1777c5ac5a0df3ed6cfb140244c552b6014631a0c3171796451\"" Feb 13 15:38:06.826704 containerd[1505]: time="2025-02-13T15:38:06.826371248Z" level=info msg="CreateContainer within sandbox \"79a3ac9b01f0d1777c5ac5a0df3ed6cfb140244c552b6014631a0c3171796451\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 13 15:38:06.836398 containerd[1505]: time="2025-02-13T15:38:06.836182475Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:38:06.836398 containerd[1505]: time="2025-02-13T15:38:06.836347356Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:38:06.836605 containerd[1505]: time="2025-02-13T15:38:06.836379277Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:38:06.836605 containerd[1505]: time="2025-02-13T15:38:06.836549239Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:38:06.858215 systemd[1]: Started cri-containerd-d8cad613d92908c725a3399dd921296a8792e1ad674d14ab8bb5a3c36ac66ac4.scope - libcontainer container d8cad613d92908c725a3399dd921296a8792e1ad674d14ab8bb5a3c36ac66ac4. 
Feb 13 15:38:06.863741 containerd[1505]: time="2025-02-13T15:38:06.863625853Z" level=info msg="CreateContainer within sandbox \"79a3ac9b01f0d1777c5ac5a0df3ed6cfb140244c552b6014631a0c3171796451\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"fdaeea34f82f1cd61f292faf9e90c2c8220e3d333ba3a904dc3b87f7c46b9cae\"" Feb 13 15:38:06.866835 containerd[1505]: time="2025-02-13T15:38:06.866775807Z" level=info msg="StartContainer for \"fdaeea34f82f1cd61f292faf9e90c2c8220e3d333ba3a904dc3b87f7c46b9cae\"" Feb 13 15:38:06.914723 systemd[1]: Started cri-containerd-fdaeea34f82f1cd61f292faf9e90c2c8220e3d333ba3a904dc3b87f7c46b9cae.scope - libcontainer container fdaeea34f82f1cd61f292faf9e90c2c8220e3d333ba3a904dc3b87f7c46b9cae. Feb 13 15:38:06.935549 containerd[1505]: time="2025-02-13T15:38:06.935406793Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-9d2dj,Uid:c714186a-d2d0-4a1e-aa06-71d7a065d6b8,Namespace:kube-system,Attempt:0,} returns sandbox id \"d8cad613d92908c725a3399dd921296a8792e1ad674d14ab8bb5a3c36ac66ac4\"" Feb 13 15:38:06.946800 containerd[1505]: time="2025-02-13T15:38:06.945957227Z" level=info msg="CreateContainer within sandbox \"d8cad613d92908c725a3399dd921296a8792e1ad674d14ab8bb5a3c36ac66ac4\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 13 15:38:06.971266 containerd[1505]: time="2025-02-13T15:38:06.971027860Z" level=info msg="StartContainer for \"fdaeea34f82f1cd61f292faf9e90c2c8220e3d333ba3a904dc3b87f7c46b9cae\" returns successfully" Feb 13 15:38:06.976160 containerd[1505]: time="2025-02-13T15:38:06.976104755Z" level=info msg="CreateContainer within sandbox \"d8cad613d92908c725a3399dd921296a8792e1ad674d14ab8bb5a3c36ac66ac4\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"ba374cba65cd292b5845bffa756ea2eecc7e5e953d43a91848e552379ef65849\"" Feb 13 15:38:06.976840 containerd[1505]: time="2025-02-13T15:38:06.976810963Z" level=info msg="StartContainer for 
\"ba374cba65cd292b5845bffa756ea2eecc7e5e953d43a91848e552379ef65849\"" Feb 13 15:38:07.014810 systemd[1]: Started cri-containerd-ba374cba65cd292b5845bffa756ea2eecc7e5e953d43a91848e552379ef65849.scope - libcontainer container ba374cba65cd292b5845bffa756ea2eecc7e5e953d43a91848e552379ef65849. Feb 13 15:38:07.056546 containerd[1505]: time="2025-02-13T15:38:07.056503794Z" level=info msg="StartContainer for \"ba374cba65cd292b5845bffa756ea2eecc7e5e953d43a91848e552379ef65849\" returns successfully" Feb 13 15:38:07.417808 kubelet[2841]: I0213 15:38:07.417761 2841 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 13 15:38:07.900408 kubelet[2841]: I0213 15:38:07.899922 2841 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-9d2dj" podStartSLOduration=23.899904558 podStartE2EDuration="23.899904558s" podCreationTimestamp="2025-02-13 15:37:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:38:07.898912707 +0000 UTC m=+38.313659227" watchObservedRunningTime="2025-02-13 15:38:07.899904558 +0000 UTC m=+38.314651078" Feb 13 15:38:53.861773 update_engine[1489]: I20250213 15:38:53.861685 1489 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Feb 13 15:38:53.861773 update_engine[1489]: I20250213 15:38:53.861771 1489 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Feb 13 15:38:53.863793 update_engine[1489]: I20250213 15:38:53.862079 1489 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Feb 13 15:38:53.863793 update_engine[1489]: I20250213 15:38:53.862728 1489 omaha_request_params.cc:62] Current group set to beta Feb 13 15:38:53.863793 update_engine[1489]: I20250213 15:38:53.862872 1489 update_attempter.cc:499] Already updated boot flags. Skipping. 
Feb 13 15:38:53.863793 update_engine[1489]: I20250213 15:38:53.862890 1489 update_attempter.cc:643] Scheduling an action processor start. Feb 13 15:38:53.863793 update_engine[1489]: I20250213 15:38:53.862918 1489 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Feb 13 15:38:53.863793 update_engine[1489]: I20250213 15:38:53.862962 1489 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Feb 13 15:38:53.863793 update_engine[1489]: I20250213 15:38:53.863049 1489 omaha_request_action.cc:271] Posting an Omaha request to disabled Feb 13 15:38:53.863793 update_engine[1489]: I20250213 15:38:53.863065 1489 omaha_request_action.cc:272] Request: Feb 13 15:38:53.863793 update_engine[1489]: Feb 13 15:38:53.863793 update_engine[1489]: I20250213 15:38:53.863075 1489 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Feb 13 15:38:53.864566 locksmithd[1517]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Feb 13 15:38:53.866249 update_engine[1489]: I20250213 15:38:53.866185 1489 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Feb 13 15:38:53.866715 update_engine[1489]: I20250213 15:38:53.866668 1489 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Feb 13 15:38:53.867526 update_engine[1489]: E20250213 15:38:53.867456 1489 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Feb 13 15:38:53.867585 update_engine[1489]: I20250213 15:38:53.867558 1489 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Feb 13 15:39:03.825949 update_engine[1489]: I20250213 15:39:03.825446 1489 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Feb 13 15:39:03.825949 update_engine[1489]: I20250213 15:39:03.825858 1489 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Feb 13 15:39:03.826688 update_engine[1489]: I20250213 15:39:03.826241 1489 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Feb 13 15:39:03.827316 update_engine[1489]: E20250213 15:39:03.827174 1489 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Feb 13 15:39:03.827316 update_engine[1489]: I20250213 15:39:03.827306 1489 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Feb 13 15:39:13.824229 update_engine[1489]: I20250213 15:39:13.824033 1489 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Feb 13 15:39:13.825113 update_engine[1489]: I20250213 15:39:13.824527 1489 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Feb 13 15:39:13.825400 update_engine[1489]: I20250213 15:39:13.825273 1489 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Feb 13 15:39:13.826483 update_engine[1489]: E20250213 15:39:13.826400 1489 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Feb 13 15:39:13.826667 update_engine[1489]: I20250213 15:39:13.826491 1489 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Feb 13 15:39:23.825132 update_engine[1489]: I20250213 15:39:23.824970 1489 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Feb 13 15:39:23.825795 update_engine[1489]: I20250213 15:39:23.825410 1489 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Feb 13 15:39:23.826139 update_engine[1489]: I20250213 15:39:23.826081 1489 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Feb 13 15:39:23.826578 update_engine[1489]: E20250213 15:39:23.826517 1489 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Feb 13 15:39:23.826687 update_engine[1489]: I20250213 15:39:23.826624 1489 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Feb 13 15:39:23.826687 update_engine[1489]: I20250213 15:39:23.826644 1489 omaha_request_action.cc:617] Omaha request response: Feb 13 15:39:23.826825 update_engine[1489]: E20250213 15:39:23.826789 1489 omaha_request_action.cc:636] Omaha request network transfer failed. Feb 13 15:39:23.826865 update_engine[1489]: I20250213 15:39:23.826822 1489 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Feb 13 15:39:23.826865 update_engine[1489]: I20250213 15:39:23.826831 1489 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Feb 13 15:39:23.826865 update_engine[1489]: I20250213 15:39:23.826838 1489 update_attempter.cc:306] Processing Done. Feb 13 15:39:23.826865 update_engine[1489]: E20250213 15:39:23.826854 1489 update_attempter.cc:619] Update failed. 
Feb 13 15:39:23.826963 update_engine[1489]: I20250213 15:39:23.826916 1489 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Feb 13 15:39:23.826963 update_engine[1489]: I20250213 15:39:23.826930 1489 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Feb 13 15:39:23.826963 update_engine[1489]: I20250213 15:39:23.826938 1489 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. Feb 13 15:39:23.827044 update_engine[1489]: I20250213 15:39:23.827018 1489 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Feb 13 15:39:23.827071 update_engine[1489]: I20250213 15:39:23.827044 1489 omaha_request_action.cc:271] Posting an Omaha request to disabled Feb 13 15:39:23.827071 update_engine[1489]: I20250213 15:39:23.827052 1489 omaha_request_action.cc:272] Request: Feb 13 15:39:23.827071 update_engine[1489]: Feb 13 15:39:23.827071 update_engine[1489]: I20250213 15:39:23.827059 1489 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Feb 13 15:39:23.827391 update_engine[1489]: I20250213 15:39:23.827291 1489 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Feb 13 15:39:23.827616 update_engine[1489]: I20250213 15:39:23.827544 1489 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Feb 13 15:39:23.827727 locksmithd[1517]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Feb 13 15:39:23.828108 update_engine[1489]: E20250213 15:39:23.828032 1489 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Feb 13 15:39:23.828207 update_engine[1489]: I20250213 15:39:23.828130 1489 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Feb 13 15:39:23.828207 update_engine[1489]: I20250213 15:39:23.828146 1489 omaha_request_action.cc:617] Omaha request response: Feb 13 15:39:23.828207 update_engine[1489]: I20250213 15:39:23.828161 1489 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Feb 13 15:39:23.828207 update_engine[1489]: I20250213 15:39:23.828171 1489 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Feb 13 15:39:23.828207 update_engine[1489]: I20250213 15:39:23.828181 1489 update_attempter.cc:306] Processing Done. Feb 13 15:39:23.828207 update_engine[1489]: I20250213 15:39:23.828191 1489 update_attempter.cc:310] Error event sent. Feb 13 15:39:23.828542 update_engine[1489]: I20250213 15:39:23.828207 1489 update_check_scheduler.cc:74] Next update check in 45m42s Feb 13 15:39:23.828760 locksmithd[1517]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Feb 13 15:41:15.004810 kernel: hrtimer: interrupt took 17919426 ns Feb 13 15:42:18.569131 systemd[1]: Started sshd@7-49.13.212.147:22-139.178.89.65:58690.service - OpenSSH per-connection server daemon (139.178.89.65:58690). 
Feb 13 15:42:19.569030 sshd[4264]: Accepted publickey for core from 139.178.89.65 port 58690 ssh2: RSA SHA256:Uozn9z6525dahd1u4B5WCCi8tKj4bLjcDsCj6OgO54I Feb 13 15:42:19.571945 sshd-session[4264]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:42:19.579876 systemd-logind[1487]: New session 8 of user core. Feb 13 15:42:19.584901 systemd[1]: Started session-8.scope - Session 8 of User core. Feb 13 15:42:20.353014 sshd[4266]: Connection closed by 139.178.89.65 port 58690 Feb 13 15:42:20.354124 sshd-session[4264]: pam_unix(sshd:session): session closed for user core Feb 13 15:42:20.359695 systemd[1]: sshd@7-49.13.212.147:22-139.178.89.65:58690.service: Deactivated successfully. Feb 13 15:42:20.363547 systemd[1]: session-8.scope: Deactivated successfully. Feb 13 15:42:20.366016 systemd-logind[1487]: Session 8 logged out. Waiting for processes to exit. Feb 13 15:42:20.368234 systemd-logind[1487]: Removed session 8. Feb 13 15:42:25.531041 systemd[1]: Started sshd@8-49.13.212.147:22-139.178.89.65:49470.service - OpenSSH per-connection server daemon (139.178.89.65:49470). Feb 13 15:42:26.512715 sshd[4278]: Accepted publickey for core from 139.178.89.65 port 49470 ssh2: RSA SHA256:Uozn9z6525dahd1u4B5WCCi8tKj4bLjcDsCj6OgO54I Feb 13 15:42:26.514763 sshd-session[4278]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:42:26.520206 systemd-logind[1487]: New session 9 of user core. Feb 13 15:42:26.525930 systemd[1]: Started session-9.scope - Session 9 of User core. Feb 13 15:42:27.273757 sshd[4280]: Connection closed by 139.178.89.65 port 49470 Feb 13 15:42:27.274882 sshd-session[4278]: pam_unix(sshd:session): session closed for user core Feb 13 15:42:27.279931 systemd[1]: sshd@8-49.13.212.147:22-139.178.89.65:49470.service: Deactivated successfully. Feb 13 15:42:27.282424 systemd[1]: session-9.scope: Deactivated successfully. Feb 13 15:42:27.284913 systemd-logind[1487]: Session 9 logged out. 
Waiting for processes to exit. Feb 13 15:42:27.286687 systemd-logind[1487]: Removed session 9. Feb 13 15:42:32.456088 systemd[1]: Started sshd@9-49.13.212.147:22-139.178.89.65:49472.service - OpenSSH per-connection server daemon (139.178.89.65:49472). Feb 13 15:42:33.460347 sshd[4293]: Accepted publickey for core from 139.178.89.65 port 49472 ssh2: RSA SHA256:Uozn9z6525dahd1u4B5WCCi8tKj4bLjcDsCj6OgO54I Feb 13 15:42:33.462309 sshd-session[4293]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:42:33.470076 systemd-logind[1487]: New session 10 of user core. Feb 13 15:42:33.480913 systemd[1]: Started session-10.scope - Session 10 of User core. Feb 13 15:42:34.227499 sshd[4295]: Connection closed by 139.178.89.65 port 49472 Feb 13 15:42:34.228665 sshd-session[4293]: pam_unix(sshd:session): session closed for user core Feb 13 15:42:34.232791 systemd[1]: sshd@9-49.13.212.147:22-139.178.89.65:49472.service: Deactivated successfully. Feb 13 15:42:34.236410 systemd[1]: session-10.scope: Deactivated successfully. Feb 13 15:42:34.240049 systemd-logind[1487]: Session 10 logged out. Waiting for processes to exit. Feb 13 15:42:34.241782 systemd-logind[1487]: Removed session 10. Feb 13 15:42:34.409085 systemd[1]: Started sshd@10-49.13.212.147:22-139.178.89.65:49474.service - OpenSSH per-connection server daemon (139.178.89.65:49474). Feb 13 15:42:35.400185 sshd[4307]: Accepted publickey for core from 139.178.89.65 port 49474 ssh2: RSA SHA256:Uozn9z6525dahd1u4B5WCCi8tKj4bLjcDsCj6OgO54I Feb 13 15:42:35.402399 sshd-session[4307]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:42:35.409779 systemd-logind[1487]: New session 11 of user core. Feb 13 15:42:35.412885 systemd[1]: Started session-11.scope - Session 11 of User core. 
Feb 13 15:42:36.198984 sshd[4309]: Connection closed by 139.178.89.65 port 49474 Feb 13 15:42:36.201042 sshd-session[4307]: pam_unix(sshd:session): session closed for user core Feb 13 15:42:36.208304 systemd[1]: sshd@10-49.13.212.147:22-139.178.89.65:49474.service: Deactivated successfully. Feb 13 15:42:36.213201 systemd[1]: session-11.scope: Deactivated successfully. Feb 13 15:42:36.215612 systemd-logind[1487]: Session 11 logged out. Waiting for processes to exit. Feb 13 15:42:36.218731 systemd-logind[1487]: Removed session 11. Feb 13 15:42:36.377129 systemd[1]: Started sshd@11-49.13.212.147:22-139.178.89.65:58536.service - OpenSSH per-connection server daemon (139.178.89.65:58536). Feb 13 15:42:37.373979 sshd[4318]: Accepted publickey for core from 139.178.89.65 port 58536 ssh2: RSA SHA256:Uozn9z6525dahd1u4B5WCCi8tKj4bLjcDsCj6OgO54I Feb 13 15:42:37.376487 sshd-session[4318]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:42:37.382275 systemd-logind[1487]: New session 12 of user core. Feb 13 15:42:37.386888 systemd[1]: Started session-12.scope - Session 12 of User core. Feb 13 15:42:38.145021 sshd[4320]: Connection closed by 139.178.89.65 port 58536 Feb 13 15:42:38.144897 sshd-session[4318]: pam_unix(sshd:session): session closed for user core Feb 13 15:42:38.150154 systemd[1]: sshd@11-49.13.212.147:22-139.178.89.65:58536.service: Deactivated successfully. Feb 13 15:42:38.152144 systemd[1]: session-12.scope: Deactivated successfully. Feb 13 15:42:38.153512 systemd-logind[1487]: Session 12 logged out. Waiting for processes to exit. Feb 13 15:42:38.154973 systemd-logind[1487]: Removed session 12. Feb 13 15:42:43.316333 systemd[1]: Started sshd@12-49.13.212.147:22-139.178.89.65:58546.service - OpenSSH per-connection server daemon (139.178.89.65:58546). 
Feb 13 15:42:44.305644 sshd[4332]: Accepted publickey for core from 139.178.89.65 port 58546 ssh2: RSA SHA256:Uozn9z6525dahd1u4B5WCCi8tKj4bLjcDsCj6OgO54I Feb 13 15:42:44.308108 sshd-session[4332]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:42:44.314160 systemd-logind[1487]: New session 13 of user core. Feb 13 15:42:44.318971 systemd[1]: Started session-13.scope - Session 13 of User core. Feb 13 15:42:45.060700 sshd[4334]: Connection closed by 139.178.89.65 port 58546 Feb 13 15:42:45.061833 sshd-session[4332]: pam_unix(sshd:session): session closed for user core Feb 13 15:42:45.066509 systemd[1]: sshd@12-49.13.212.147:22-139.178.89.65:58546.service: Deactivated successfully. Feb 13 15:42:45.071116 systemd[1]: session-13.scope: Deactivated successfully. Feb 13 15:42:45.072394 systemd-logind[1487]: Session 13 logged out. Waiting for processes to exit. Feb 13 15:42:45.074076 systemd-logind[1487]: Removed session 13. Feb 13 15:42:45.231935 systemd[1]: Started sshd@13-49.13.212.147:22-139.178.89.65:53512.service - OpenSSH per-connection server daemon (139.178.89.65:53512). Feb 13 15:42:46.224342 sshd[4344]: Accepted publickey for core from 139.178.89.65 port 53512 ssh2: RSA SHA256:Uozn9z6525dahd1u4B5WCCi8tKj4bLjcDsCj6OgO54I Feb 13 15:42:46.226074 sshd-session[4344]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:42:46.231507 systemd-logind[1487]: New session 14 of user core. Feb 13 15:42:46.237183 systemd[1]: Started session-14.scope - Session 14 of User core. Feb 13 15:42:47.022387 sshd[4348]: Connection closed by 139.178.89.65 port 53512 Feb 13 15:42:47.022968 sshd-session[4344]: pam_unix(sshd:session): session closed for user core Feb 13 15:42:47.028428 systemd-logind[1487]: Session 14 logged out. Waiting for processes to exit. Feb 13 15:42:47.029093 systemd[1]: sshd@13-49.13.212.147:22-139.178.89.65:53512.service: Deactivated successfully. 
Feb 13 15:42:47.032540 systemd[1]: session-14.scope: Deactivated successfully. Feb 13 15:42:47.034227 systemd-logind[1487]: Removed session 14. Feb 13 15:42:47.201661 systemd[1]: Started sshd@14-49.13.212.147:22-139.178.89.65:53524.service - OpenSSH per-connection server daemon (139.178.89.65:53524). Feb 13 15:42:48.195814 sshd[4357]: Accepted publickey for core from 139.178.89.65 port 53524 ssh2: RSA SHA256:Uozn9z6525dahd1u4B5WCCi8tKj4bLjcDsCj6OgO54I Feb 13 15:42:48.198042 sshd-session[4357]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:42:48.204377 systemd-logind[1487]: New session 15 of user core. Feb 13 15:42:48.209870 systemd[1]: Started session-15.scope - Session 15 of User core. Feb 13 15:42:50.545949 sshd[4359]: Connection closed by 139.178.89.65 port 53524 Feb 13 15:42:50.546855 sshd-session[4357]: pam_unix(sshd:session): session closed for user core Feb 13 15:42:50.552587 systemd[1]: sshd@14-49.13.212.147:22-139.178.89.65:53524.service: Deactivated successfully. Feb 13 15:42:50.552926 systemd-logind[1487]: Session 15 logged out. Waiting for processes to exit. Feb 13 15:42:50.559240 systemd[1]: session-15.scope: Deactivated successfully. Feb 13 15:42:50.560836 systemd-logind[1487]: Removed session 15. Feb 13 15:42:50.725106 systemd[1]: Started sshd@15-49.13.212.147:22-139.178.89.65:53528.service - OpenSSH per-connection server daemon (139.178.89.65:53528). Feb 13 15:42:51.728698 sshd[4375]: Accepted publickey for core from 139.178.89.65 port 53528 ssh2: RSA SHA256:Uozn9z6525dahd1u4B5WCCi8tKj4bLjcDsCj6OgO54I Feb 13 15:42:51.730561 sshd-session[4375]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:42:51.737031 systemd-logind[1487]: New session 16 of user core. Feb 13 15:42:51.746109 systemd[1]: Started session-16.scope - Session 16 of User core. 
Feb 13 15:42:52.612474 sshd[4377]: Connection closed by 139.178.89.65 port 53528 Feb 13 15:42:52.613274 sshd-session[4375]: pam_unix(sshd:session): session closed for user core Feb 13 15:42:52.618548 systemd[1]: sshd@15-49.13.212.147:22-139.178.89.65:53528.service: Deactivated successfully. Feb 13 15:42:52.618694 systemd-logind[1487]: Session 16 logged out. Waiting for processes to exit. Feb 13 15:42:52.620995 systemd[1]: session-16.scope: Deactivated successfully. Feb 13 15:42:52.623589 systemd-logind[1487]: Removed session 16. Feb 13 15:42:52.788700 systemd[1]: Started sshd@16-49.13.212.147:22-139.178.89.65:53540.service - OpenSSH per-connection server daemon (139.178.89.65:53540). Feb 13 15:42:53.776860 sshd[4386]: Accepted publickey for core from 139.178.89.65 port 53540 ssh2: RSA SHA256:Uozn9z6525dahd1u4B5WCCi8tKj4bLjcDsCj6OgO54I Feb 13 15:42:53.778662 sshd-session[4386]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:42:53.784176 systemd-logind[1487]: New session 17 of user core. Feb 13 15:42:53.788893 systemd[1]: Started session-17.scope - Session 17 of User core. Feb 13 15:42:54.534550 sshd[4388]: Connection closed by 139.178.89.65 port 53540 Feb 13 15:42:54.533660 sshd-session[4386]: pam_unix(sshd:session): session closed for user core Feb 13 15:42:54.538425 systemd[1]: sshd@16-49.13.212.147:22-139.178.89.65:53540.service: Deactivated successfully. Feb 13 15:42:54.542091 systemd[1]: session-17.scope: Deactivated successfully. Feb 13 15:42:54.543665 systemd-logind[1487]: Session 17 logged out. Waiting for processes to exit. Feb 13 15:42:54.545458 systemd-logind[1487]: Removed session 17. Feb 13 15:42:59.711966 systemd[1]: Started sshd@17-49.13.212.147:22-139.178.89.65:58074.service - OpenSSH per-connection server daemon (139.178.89.65:58074). 
Feb 13 15:43:00.711034 sshd[4402]: Accepted publickey for core from 139.178.89.65 port 58074 ssh2: RSA SHA256:Uozn9z6525dahd1u4B5WCCi8tKj4bLjcDsCj6OgO54I Feb 13 15:43:00.713619 sshd-session[4402]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:43:00.718875 systemd-logind[1487]: New session 18 of user core. Feb 13 15:43:00.724933 systemd[1]: Started session-18.scope - Session 18 of User core. Feb 13 15:43:01.474730 sshd[4404]: Connection closed by 139.178.89.65 port 58074 Feb 13 15:43:01.475551 sshd-session[4402]: pam_unix(sshd:session): session closed for user core Feb 13 15:43:01.480761 systemd[1]: sshd@17-49.13.212.147:22-139.178.89.65:58074.service: Deactivated successfully. Feb 13 15:43:01.483157 systemd[1]: session-18.scope: Deactivated successfully. Feb 13 15:43:01.485915 systemd-logind[1487]: Session 18 logged out. Waiting for processes to exit. Feb 13 15:43:01.487292 systemd-logind[1487]: Removed session 18. Feb 13 15:43:06.645383 systemd[1]: Started sshd@18-49.13.212.147:22-139.178.89.65:42498.service - OpenSSH per-connection server daemon (139.178.89.65:42498). Feb 13 15:43:07.636072 sshd[4415]: Accepted publickey for core from 139.178.89.65 port 42498 ssh2: RSA SHA256:Uozn9z6525dahd1u4B5WCCi8tKj4bLjcDsCj6OgO54I Feb 13 15:43:07.637925 sshd-session[4415]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:43:07.643670 systemd-logind[1487]: New session 19 of user core. Feb 13 15:43:07.652972 systemd[1]: Started session-19.scope - Session 19 of User core. Feb 13 15:43:08.380338 sshd[4417]: Connection closed by 139.178.89.65 port 42498 Feb 13 15:43:08.381314 sshd-session[4415]: pam_unix(sshd:session): session closed for user core Feb 13 15:43:08.385984 systemd[1]: sshd@18-49.13.212.147:22-139.178.89.65:42498.service: Deactivated successfully. Feb 13 15:43:08.388326 systemd[1]: session-19.scope: Deactivated successfully. 
Feb 13 15:43:08.389223 systemd-logind[1487]: Session 19 logged out. Waiting for processes to exit. Feb 13 15:43:08.390221 systemd-logind[1487]: Removed session 19. Feb 13 15:43:08.556137 systemd[1]: Started sshd@19-49.13.212.147:22-139.178.89.65:42500.service - OpenSSH per-connection server daemon (139.178.89.65:42500). Feb 13 15:43:09.541851 sshd[4428]: Accepted publickey for core from 139.178.89.65 port 42500 ssh2: RSA SHA256:Uozn9z6525dahd1u4B5WCCi8tKj4bLjcDsCj6OgO54I Feb 13 15:43:09.544027 sshd-session[4428]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:43:09.548813 systemd-logind[1487]: New session 20 of user core. Feb 13 15:43:09.557853 systemd[1]: Started session-20.scope - Session 20 of User core. Feb 13 15:43:12.045659 kubelet[2841]: I0213 15:43:12.043339 2841 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-bscjt" podStartSLOduration=328.043322751 podStartE2EDuration="5m28.043322751s" podCreationTimestamp="2025-02-13 15:37:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:38:07.965842601 +0000 UTC m=+38.380589121" watchObservedRunningTime="2025-02-13 15:43:12.043322751 +0000 UTC m=+342.458069271" Feb 13 15:43:12.091759 containerd[1505]: time="2025-02-13T15:43:12.091699388Z" level=info msg="StopContainer for \"1d7f2d34f591bea62b57c3e331a6fb4bf9eaca23fad542c9a64f40d5fdc7d2f8\" with timeout 30 (s)" Feb 13 15:43:12.093146 containerd[1505]: time="2025-02-13T15:43:12.092750202Z" level=info msg="Stop container \"1d7f2d34f591bea62b57c3e331a6fb4bf9eaca23fad542c9a64f40d5fdc7d2f8\" with signal terminated" Feb 13 15:43:12.102639 containerd[1505]: time="2025-02-13T15:43:12.101977044Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in 
/etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 13 15:43:12.114037 containerd[1505]: time="2025-02-13T15:43:12.112715065Z" level=info msg="StopContainer for \"f57cb9d3a007d62ede304012058dd906c3c97c51d64e22fe7c26885075fca085\" with timeout 2 (s)" Feb 13 15:43:12.113073 systemd[1]: cri-containerd-1d7f2d34f591bea62b57c3e331a6fb4bf9eaca23fad542c9a64f40d5fdc7d2f8.scope: Deactivated successfully. Feb 13 15:43:12.115412 containerd[1505]: time="2025-02-13T15:43:12.115036176Z" level=info msg="Stop container \"f57cb9d3a007d62ede304012058dd906c3c97c51d64e22fe7c26885075fca085\" with signal terminated" Feb 13 15:43:12.127394 systemd-networkd[1387]: lxc_health: Link DOWN Feb 13 15:43:12.127401 systemd-networkd[1387]: lxc_health: Lost carrier Feb 13 15:43:12.148750 systemd[1]: cri-containerd-f57cb9d3a007d62ede304012058dd906c3c97c51d64e22fe7c26885075fca085.scope: Deactivated successfully. Feb 13 15:43:12.149095 systemd[1]: cri-containerd-f57cb9d3a007d62ede304012058dd906c3c97c51d64e22fe7c26885075fca085.scope: Consumed 8.310s CPU time. Feb 13 15:43:12.153443 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1d7f2d34f591bea62b57c3e331a6fb4bf9eaca23fad542c9a64f40d5fdc7d2f8-rootfs.mount: Deactivated successfully. Feb 13 15:43:12.167200 containerd[1505]: time="2025-02-13T15:43:12.166787738Z" level=info msg="shim disconnected" id=1d7f2d34f591bea62b57c3e331a6fb4bf9eaca23fad542c9a64f40d5fdc7d2f8 namespace=k8s.io Feb 13 15:43:12.167200 containerd[1505]: time="2025-02-13T15:43:12.167050381Z" level=warning msg="cleaning up after shim disconnected" id=1d7f2d34f591bea62b57c3e331a6fb4bf9eaca23fad542c9a64f40d5fdc7d2f8 namespace=k8s.io Feb 13 15:43:12.167648 containerd[1505]: time="2025-02-13T15:43:12.167064861Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 15:43:12.182950 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f57cb9d3a007d62ede304012058dd906c3c97c51d64e22fe7c26885075fca085-rootfs.mount: Deactivated successfully. 
Feb 13 15:43:12.192488 containerd[1505]: time="2025-02-13T15:43:12.192003950Z" level=info msg="shim disconnected" id=f57cb9d3a007d62ede304012058dd906c3c97c51d64e22fe7c26885075fca085 namespace=k8s.io Feb 13 15:43:12.192488 containerd[1505]: time="2025-02-13T15:43:12.192057751Z" level=warning msg="cleaning up after shim disconnected" id=f57cb9d3a007d62ede304012058dd906c3c97c51d64e22fe7c26885075fca085 namespace=k8s.io Feb 13 15:43:12.193153 containerd[1505]: time="2025-02-13T15:43:12.192065351Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 15:43:12.194481 containerd[1505]: time="2025-02-13T15:43:12.194184099Z" level=info msg="StopContainer for \"1d7f2d34f591bea62b57c3e331a6fb4bf9eaca23fad542c9a64f40d5fdc7d2f8\" returns successfully" Feb 13 15:43:12.195428 containerd[1505]: time="2025-02-13T15:43:12.194963549Z" level=info msg="StopPodSandbox for \"20ffb6cc88fe2ec252d641032ce58f02fe673152d246de827c43c8d5197d9e9a\"" Feb 13 15:43:12.195428 containerd[1505]: time="2025-02-13T15:43:12.195030910Z" level=info msg="Container to stop \"1d7f2d34f591bea62b57c3e331a6fb4bf9eaca23fad542c9a64f40d5fdc7d2f8\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 15:43:12.197336 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-20ffb6cc88fe2ec252d641032ce58f02fe673152d246de827c43c8d5197d9e9a-shm.mount: Deactivated successfully. Feb 13 15:43:12.209583 systemd[1]: cri-containerd-20ffb6cc88fe2ec252d641032ce58f02fe673152d246de827c43c8d5197d9e9a.scope: Deactivated successfully. 
Feb 13 15:43:12.227378 containerd[1505]: time="2025-02-13T15:43:12.226439724Z" level=info msg="StopContainer for \"f57cb9d3a007d62ede304012058dd906c3c97c51d64e22fe7c26885075fca085\" returns successfully" Feb 13 15:43:12.227773 containerd[1505]: time="2025-02-13T15:43:12.227746821Z" level=info msg="StopPodSandbox for \"bfeacc5a83d2bdf0af8af3e3699d3c969f8e0c05599c4eccd11d3eeeeb16807d\"" Feb 13 15:43:12.227895 containerd[1505]: time="2025-02-13T15:43:12.227879863Z" level=info msg="Container to stop \"ca7e49730d9ab721f8d4a6e49743c029f6f5f8353235ed0794bb84e5e090f024\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 15:43:12.227977 containerd[1505]: time="2025-02-13T15:43:12.227963144Z" level=info msg="Container to stop \"f83780da1a30d42c50172cd1912d6ca818608ac8074a9fa8b6c89e310f5a2a37\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 15:43:12.228427 containerd[1505]: time="2025-02-13T15:43:12.228402789Z" level=info msg="Container to stop \"681c2547599e96e6db2eb84b011619f813f4ac44a3865228df618457e23e53f9\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 15:43:12.228500 containerd[1505]: time="2025-02-13T15:43:12.228487311Z" level=info msg="Container to stop \"f57cb9d3a007d62ede304012058dd906c3c97c51d64e22fe7c26885075fca085\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 15:43:12.228573 containerd[1505]: time="2025-02-13T15:43:12.228545391Z" level=info msg="Container to stop \"d9b7a99406ebf725ec1f9812df4517da72beff6868a64054a6003798934d4be7\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 15:43:12.238898 systemd[1]: cri-containerd-bfeacc5a83d2bdf0af8af3e3699d3c969f8e0c05599c4eccd11d3eeeeb16807d.scope: Deactivated successfully. 
Feb 13 15:43:12.261187 containerd[1505]: time="2025-02-13T15:43:12.261085540Z" level=info msg="shim disconnected" id=20ffb6cc88fe2ec252d641032ce58f02fe673152d246de827c43c8d5197d9e9a namespace=k8s.io Feb 13 15:43:12.261187 containerd[1505]: time="2025-02-13T15:43:12.261157621Z" level=warning msg="cleaning up after shim disconnected" id=20ffb6cc88fe2ec252d641032ce58f02fe673152d246de827c43c8d5197d9e9a namespace=k8s.io Feb 13 15:43:12.261187 containerd[1505]: time="2025-02-13T15:43:12.261166621Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 15:43:12.267778 containerd[1505]: time="2025-02-13T15:43:12.267711947Z" level=info msg="shim disconnected" id=bfeacc5a83d2bdf0af8af3e3699d3c969f8e0c05599c4eccd11d3eeeeb16807d namespace=k8s.io Feb 13 15:43:12.267778 containerd[1505]: time="2025-02-13T15:43:12.267774508Z" level=warning msg="cleaning up after shim disconnected" id=bfeacc5a83d2bdf0af8af3e3699d3c969f8e0c05599c4eccd11d3eeeeb16807d namespace=k8s.io Feb 13 15:43:12.267778 containerd[1505]: time="2025-02-13T15:43:12.267784188Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 15:43:12.283125 containerd[1505]: time="2025-02-13T15:43:12.283076950Z" level=info msg="TearDown network for sandbox \"20ffb6cc88fe2ec252d641032ce58f02fe673152d246de827c43c8d5197d9e9a\" successfully" Feb 13 15:43:12.283585 containerd[1505]: time="2025-02-13T15:43:12.283314393Z" level=info msg="StopPodSandbox for \"20ffb6cc88fe2ec252d641032ce58f02fe673152d246de827c43c8d5197d9e9a\" returns successfully" Feb 13 15:43:12.289435 containerd[1505]: time="2025-02-13T15:43:12.289373673Z" level=warning msg="cleanup warnings time=\"2025-02-13T15:43:12Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Feb 13 15:43:12.291175 containerd[1505]: time="2025-02-13T15:43:12.290727651Z" level=info msg="TearDown network for sandbox 
\"bfeacc5a83d2bdf0af8af3e3699d3c969f8e0c05599c4eccd11d3eeeeb16807d\" successfully" Feb 13 15:43:12.291175 containerd[1505]: time="2025-02-13T15:43:12.290758651Z" level=info msg="StopPodSandbox for \"bfeacc5a83d2bdf0af8af3e3699d3c969f8e0c05599c4eccd11d3eeeeb16807d\" returns successfully" Feb 13 15:43:12.323348 kubelet[2841]: I0213 15:43:12.321089 2841 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/954e233d-23e5-4ebe-8a22-329fae13f492-etc-cni-netd\") pod \"954e233d-23e5-4ebe-8a22-329fae13f492\" (UID: \"954e233d-23e5-4ebe-8a22-329fae13f492\") " Feb 13 15:43:12.323348 kubelet[2841]: I0213 15:43:12.321162 2841 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wwb29\" (UniqueName: \"kubernetes.io/projected/63551fa6-a653-407e-b2ba-80bb5f391b3f-kube-api-access-wwb29\") pod \"63551fa6-a653-407e-b2ba-80bb5f391b3f\" (UID: \"63551fa6-a653-407e-b2ba-80bb5f391b3f\") " Feb 13 15:43:12.323348 kubelet[2841]: I0213 15:43:12.321207 2841 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/954e233d-23e5-4ebe-8a22-329fae13f492-host-proc-sys-net\") pod \"954e233d-23e5-4ebe-8a22-329fae13f492\" (UID: \"954e233d-23e5-4ebe-8a22-329fae13f492\") " Feb 13 15:43:12.323348 kubelet[2841]: I0213 15:43:12.321241 2841 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/954e233d-23e5-4ebe-8a22-329fae13f492-hubble-tls\") pod \"954e233d-23e5-4ebe-8a22-329fae13f492\" (UID: \"954e233d-23e5-4ebe-8a22-329fae13f492\") " Feb 13 15:43:12.323348 kubelet[2841]: I0213 15:43:12.321275 2841 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/954e233d-23e5-4ebe-8a22-329fae13f492-clustermesh-secrets\") pod 
\"954e233d-23e5-4ebe-8a22-329fae13f492\" (UID: \"954e233d-23e5-4ebe-8a22-329fae13f492\") " Feb 13 15:43:12.323348 kubelet[2841]: I0213 15:43:12.321304 2841 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/954e233d-23e5-4ebe-8a22-329fae13f492-lib-modules\") pod \"954e233d-23e5-4ebe-8a22-329fae13f492\" (UID: \"954e233d-23e5-4ebe-8a22-329fae13f492\") " Feb 13 15:43:12.326127 kubelet[2841]: I0213 15:43:12.321332 2841 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/954e233d-23e5-4ebe-8a22-329fae13f492-xtables-lock\") pod \"954e233d-23e5-4ebe-8a22-329fae13f492\" (UID: \"954e233d-23e5-4ebe-8a22-329fae13f492\") " Feb 13 15:43:12.326127 kubelet[2841]: I0213 15:43:12.321360 2841 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/954e233d-23e5-4ebe-8a22-329fae13f492-host-proc-sys-kernel\") pod \"954e233d-23e5-4ebe-8a22-329fae13f492\" (UID: \"954e233d-23e5-4ebe-8a22-329fae13f492\") " Feb 13 15:43:12.326127 kubelet[2841]: I0213 15:43:12.321477 2841 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/954e233d-23e5-4ebe-8a22-329fae13f492-cni-path\") pod \"954e233d-23e5-4ebe-8a22-329fae13f492\" (UID: \"954e233d-23e5-4ebe-8a22-329fae13f492\") " Feb 13 15:43:12.326127 kubelet[2841]: I0213 15:43:12.321547 2841 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/954e233d-23e5-4ebe-8a22-329fae13f492-cilium-cgroup\") pod \"954e233d-23e5-4ebe-8a22-329fae13f492\" (UID: \"954e233d-23e5-4ebe-8a22-329fae13f492\") " Feb 13 15:43:12.326127 kubelet[2841]: I0213 15:43:12.321629 2841 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" 
(UniqueName: \"kubernetes.io/configmap/63551fa6-a653-407e-b2ba-80bb5f391b3f-cilium-config-path\") pod \"63551fa6-a653-407e-b2ba-80bb5f391b3f\" (UID: \"63551fa6-a653-407e-b2ba-80bb5f391b3f\") " Feb 13 15:43:12.326127 kubelet[2841]: I0213 15:43:12.321661 2841 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/954e233d-23e5-4ebe-8a22-329fae13f492-hostproc\") pod \"954e233d-23e5-4ebe-8a22-329fae13f492\" (UID: \"954e233d-23e5-4ebe-8a22-329fae13f492\") " Feb 13 15:43:12.326356 kubelet[2841]: I0213 15:43:12.321690 2841 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/954e233d-23e5-4ebe-8a22-329fae13f492-cilium-run\") pod \"954e233d-23e5-4ebe-8a22-329fae13f492\" (UID: \"954e233d-23e5-4ebe-8a22-329fae13f492\") " Feb 13 15:43:12.326356 kubelet[2841]: I0213 15:43:12.321723 2841 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2r8pb\" (UniqueName: \"kubernetes.io/projected/954e233d-23e5-4ebe-8a22-329fae13f492-kube-api-access-2r8pb\") pod \"954e233d-23e5-4ebe-8a22-329fae13f492\" (UID: \"954e233d-23e5-4ebe-8a22-329fae13f492\") " Feb 13 15:43:12.326356 kubelet[2841]: I0213 15:43:12.321755 2841 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/954e233d-23e5-4ebe-8a22-329fae13f492-cilium-config-path\") pod \"954e233d-23e5-4ebe-8a22-329fae13f492\" (UID: \"954e233d-23e5-4ebe-8a22-329fae13f492\") " Feb 13 15:43:12.326356 kubelet[2841]: I0213 15:43:12.321786 2841 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/954e233d-23e5-4ebe-8a22-329fae13f492-bpf-maps\") pod \"954e233d-23e5-4ebe-8a22-329fae13f492\" (UID: \"954e233d-23e5-4ebe-8a22-329fae13f492\") " Feb 13 15:43:12.326356 kubelet[2841]: I0213 15:43:12.321910 
2841 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/954e233d-23e5-4ebe-8a22-329fae13f492-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "954e233d-23e5-4ebe-8a22-329fae13f492" (UID: "954e233d-23e5-4ebe-8a22-329fae13f492"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 15:43:12.326356 kubelet[2841]: I0213 15:43:12.322087 2841 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/954e233d-23e5-4ebe-8a22-329fae13f492-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "954e233d-23e5-4ebe-8a22-329fae13f492" (UID: "954e233d-23e5-4ebe-8a22-329fae13f492"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 15:43:12.326547 kubelet[2841]: I0213 15:43:12.322866 2841 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/954e233d-23e5-4ebe-8a22-329fae13f492-cni-path" (OuterVolumeSpecName: "cni-path") pod "954e233d-23e5-4ebe-8a22-329fae13f492" (UID: "954e233d-23e5-4ebe-8a22-329fae13f492"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 15:43:12.326547 kubelet[2841]: I0213 15:43:12.322920 2841 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/954e233d-23e5-4ebe-8a22-329fae13f492-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "954e233d-23e5-4ebe-8a22-329fae13f492" (UID: "954e233d-23e5-4ebe-8a22-329fae13f492"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 15:43:12.326547 kubelet[2841]: I0213 15:43:12.324213 2841 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/954e233d-23e5-4ebe-8a22-329fae13f492-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "954e233d-23e5-4ebe-8a22-329fae13f492" (UID: "954e233d-23e5-4ebe-8a22-329fae13f492"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 15:43:12.326547 kubelet[2841]: I0213 15:43:12.324438 2841 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/954e233d-23e5-4ebe-8a22-329fae13f492-hostproc" (OuterVolumeSpecName: "hostproc") pod "954e233d-23e5-4ebe-8a22-329fae13f492" (UID: "954e233d-23e5-4ebe-8a22-329fae13f492"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 15:43:12.326547 kubelet[2841]: I0213 15:43:12.324465 2841 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/954e233d-23e5-4ebe-8a22-329fae13f492-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "954e233d-23e5-4ebe-8a22-329fae13f492" (UID: "954e233d-23e5-4ebe-8a22-329fae13f492"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 15:43:12.333800 kubelet[2841]: I0213 15:43:12.333761 2841 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/954e233d-23e5-4ebe-8a22-329fae13f492-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "954e233d-23e5-4ebe-8a22-329fae13f492" (UID: "954e233d-23e5-4ebe-8a22-329fae13f492"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 15:43:12.333892 kubelet[2841]: I0213 15:43:12.333811 2841 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/954e233d-23e5-4ebe-8a22-329fae13f492-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "954e233d-23e5-4ebe-8a22-329fae13f492" (UID: "954e233d-23e5-4ebe-8a22-329fae13f492"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 15:43:12.333892 kubelet[2841]: I0213 15:43:12.333828 2841 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/954e233d-23e5-4ebe-8a22-329fae13f492-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "954e233d-23e5-4ebe-8a22-329fae13f492" (UID: "954e233d-23e5-4ebe-8a22-329fae13f492"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 15:43:12.334299 kubelet[2841]: I0213 15:43:12.334272 2841 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/954e233d-23e5-4ebe-8a22-329fae13f492-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "954e233d-23e5-4ebe-8a22-329fae13f492" (UID: "954e233d-23e5-4ebe-8a22-329fae13f492"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 13 15:43:12.335746 kubelet[2841]: I0213 15:43:12.335482 2841 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/63551fa6-a653-407e-b2ba-80bb5f391b3f-kube-api-access-wwb29" (OuterVolumeSpecName: "kube-api-access-wwb29") pod "63551fa6-a653-407e-b2ba-80bb5f391b3f" (UID: "63551fa6-a653-407e-b2ba-80bb5f391b3f"). InnerVolumeSpecName "kube-api-access-wwb29". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 13 15:43:12.335999 kubelet[2841]: I0213 15:43:12.335966 2841 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/954e233d-23e5-4ebe-8a22-329fae13f492-kube-api-access-2r8pb" (OuterVolumeSpecName: "kube-api-access-2r8pb") pod "954e233d-23e5-4ebe-8a22-329fae13f492" (UID: "954e233d-23e5-4ebe-8a22-329fae13f492"). InnerVolumeSpecName "kube-api-access-2r8pb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 13 15:43:12.336117 kubelet[2841]: I0213 15:43:12.336056 2841 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/954e233d-23e5-4ebe-8a22-329fae13f492-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "954e233d-23e5-4ebe-8a22-329fae13f492" (UID: "954e233d-23e5-4ebe-8a22-329fae13f492"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 13 15:43:12.337549 kubelet[2841]: I0213 15:43:12.337454 2841 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/63551fa6-a653-407e-b2ba-80bb5f391b3f-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "63551fa6-a653-407e-b2ba-80bb5f391b3f" (UID: "63551fa6-a653-407e-b2ba-80bb5f391b3f"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 13 15:43:12.338027 kubelet[2841]: I0213 15:43:12.337981 2841 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/954e233d-23e5-4ebe-8a22-329fae13f492-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "954e233d-23e5-4ebe-8a22-329fae13f492" (UID: "954e233d-23e5-4ebe-8a22-329fae13f492"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 13 15:43:12.423264 kubelet[2841]: I0213 15:43:12.422782 2841 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/63551fa6-a653-407e-b2ba-80bb5f391b3f-cilium-config-path\") on node \"ci-4186-1-1-3-ffab21d6e1\" DevicePath \"\"" Feb 13 15:43:12.423264 kubelet[2841]: I0213 15:43:12.422822 2841 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/954e233d-23e5-4ebe-8a22-329fae13f492-hostproc\") on node \"ci-4186-1-1-3-ffab21d6e1\" DevicePath \"\"" Feb 13 15:43:12.423264 kubelet[2841]: I0213 15:43:12.422834 2841 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/954e233d-23e5-4ebe-8a22-329fae13f492-cilium-run\") on node \"ci-4186-1-1-3-ffab21d6e1\" DevicePath \"\"" Feb 13 15:43:12.423264 kubelet[2841]: I0213 15:43:12.422847 2841 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-2r8pb\" (UniqueName: \"kubernetes.io/projected/954e233d-23e5-4ebe-8a22-329fae13f492-kube-api-access-2r8pb\") on node \"ci-4186-1-1-3-ffab21d6e1\" DevicePath \"\"" Feb 13 15:43:12.423264 kubelet[2841]: I0213 15:43:12.422857 2841 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/954e233d-23e5-4ebe-8a22-329fae13f492-cilium-config-path\") on node \"ci-4186-1-1-3-ffab21d6e1\" DevicePath \"\"" Feb 13 15:43:12.423264 kubelet[2841]: I0213 15:43:12.422868 2841 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/954e233d-23e5-4ebe-8a22-329fae13f492-bpf-maps\") on node \"ci-4186-1-1-3-ffab21d6e1\" DevicePath \"\"" Feb 13 15:43:12.423264 kubelet[2841]: I0213 15:43:12.422878 2841 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/954e233d-23e5-4ebe-8a22-329fae13f492-etc-cni-netd\") on 
node \"ci-4186-1-1-3-ffab21d6e1\" DevicePath \"\"" Feb 13 15:43:12.423264 kubelet[2841]: I0213 15:43:12.422887 2841 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-wwb29\" (UniqueName: \"kubernetes.io/projected/63551fa6-a653-407e-b2ba-80bb5f391b3f-kube-api-access-wwb29\") on node \"ci-4186-1-1-3-ffab21d6e1\" DevicePath \"\"" Feb 13 15:43:12.424375 kubelet[2841]: I0213 15:43:12.422895 2841 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/954e233d-23e5-4ebe-8a22-329fae13f492-host-proc-sys-net\") on node \"ci-4186-1-1-3-ffab21d6e1\" DevicePath \"\"" Feb 13 15:43:12.424375 kubelet[2841]: I0213 15:43:12.422906 2841 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/954e233d-23e5-4ebe-8a22-329fae13f492-hubble-tls\") on node \"ci-4186-1-1-3-ffab21d6e1\" DevicePath \"\"" Feb 13 15:43:12.424375 kubelet[2841]: I0213 15:43:12.422938 2841 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/954e233d-23e5-4ebe-8a22-329fae13f492-clustermesh-secrets\") on node \"ci-4186-1-1-3-ffab21d6e1\" DevicePath \"\"" Feb 13 15:43:12.424375 kubelet[2841]: I0213 15:43:12.422949 2841 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/954e233d-23e5-4ebe-8a22-329fae13f492-lib-modules\") on node \"ci-4186-1-1-3-ffab21d6e1\" DevicePath \"\"" Feb 13 15:43:12.424375 kubelet[2841]: I0213 15:43:12.422958 2841 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/954e233d-23e5-4ebe-8a22-329fae13f492-xtables-lock\") on node \"ci-4186-1-1-3-ffab21d6e1\" DevicePath \"\"" Feb 13 15:43:12.424375 kubelet[2841]: I0213 15:43:12.422967 2841 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: 
\"kubernetes.io/host-path/954e233d-23e5-4ebe-8a22-329fae13f492-host-proc-sys-kernel\") on node \"ci-4186-1-1-3-ffab21d6e1\" DevicePath \"\"" Feb 13 15:43:12.424375 kubelet[2841]: I0213 15:43:12.422975 2841 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/954e233d-23e5-4ebe-8a22-329fae13f492-cni-path\") on node \"ci-4186-1-1-3-ffab21d6e1\" DevicePath \"\"" Feb 13 15:43:12.424375 kubelet[2841]: I0213 15:43:12.422983 2841 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/954e233d-23e5-4ebe-8a22-329fae13f492-cilium-cgroup\") on node \"ci-4186-1-1-3-ffab21d6e1\" DevicePath \"\"" Feb 13 15:43:12.692367 kubelet[2841]: I0213 15:43:12.692192 2841 scope.go:117] "RemoveContainer" containerID="1d7f2d34f591bea62b57c3e331a6fb4bf9eaca23fad542c9a64f40d5fdc7d2f8" Feb 13 15:43:12.697512 containerd[1505]: time="2025-02-13T15:43:12.697415728Z" level=info msg="RemoveContainer for \"1d7f2d34f591bea62b57c3e331a6fb4bf9eaca23fad542c9a64f40d5fdc7d2f8\"" Feb 13 15:43:12.703851 systemd[1]: Removed slice kubepods-besteffort-pod63551fa6_a653_407e_b2ba_80bb5f391b3f.slice - libcontainer container kubepods-besteffort-pod63551fa6_a653_407e_b2ba_80bb5f391b3f.slice. 
Feb 13 15:43:12.706437 containerd[1505]: time="2025-02-13T15:43:12.706214484Z" level=info msg="RemoveContainer for \"1d7f2d34f591bea62b57c3e331a6fb4bf9eaca23fad542c9a64f40d5fdc7d2f8\" returns successfully" Feb 13 15:43:12.708853 kubelet[2841]: I0213 15:43:12.708819 2841 scope.go:117] "RemoveContainer" containerID="1d7f2d34f591bea62b57c3e331a6fb4bf9eaca23fad542c9a64f40d5fdc7d2f8" Feb 13 15:43:12.710581 containerd[1505]: time="2025-02-13T15:43:12.710479381Z" level=error msg="ContainerStatus for \"1d7f2d34f591bea62b57c3e331a6fb4bf9eaca23fad542c9a64f40d5fdc7d2f8\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"1d7f2d34f591bea62b57c3e331a6fb4bf9eaca23fad542c9a64f40d5fdc7d2f8\": not found" Feb 13 15:43:12.711007 kubelet[2841]: E0213 15:43:12.710779 2841 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"1d7f2d34f591bea62b57c3e331a6fb4bf9eaca23fad542c9a64f40d5fdc7d2f8\": not found" containerID="1d7f2d34f591bea62b57c3e331a6fb4bf9eaca23fad542c9a64f40d5fdc7d2f8" Feb 13 15:43:12.711007 kubelet[2841]: I0213 15:43:12.710813 2841 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"1d7f2d34f591bea62b57c3e331a6fb4bf9eaca23fad542c9a64f40d5fdc7d2f8"} err="failed to get container status \"1d7f2d34f591bea62b57c3e331a6fb4bf9eaca23fad542c9a64f40d5fdc7d2f8\": rpc error: code = NotFound desc = an error occurred when try to find container \"1d7f2d34f591bea62b57c3e331a6fb4bf9eaca23fad542c9a64f40d5fdc7d2f8\": not found" Feb 13 15:43:12.711007 kubelet[2841]: I0213 15:43:12.710889 2841 scope.go:117] "RemoveContainer" containerID="f57cb9d3a007d62ede304012058dd906c3c97c51d64e22fe7c26885075fca085" Feb 13 15:43:12.713796 systemd[1]: Removed slice kubepods-burstable-pod954e233d_23e5_4ebe_8a22_329fae13f492.slice - libcontainer container kubepods-burstable-pod954e233d_23e5_4ebe_8a22_329fae13f492.slice. 
Feb 13 15:43:12.713920 systemd[1]: kubepods-burstable-pod954e233d_23e5_4ebe_8a22_329fae13f492.slice: Consumed 8.403s CPU time. Feb 13 15:43:12.717065 containerd[1505]: time="2025-02-13T15:43:12.716543380Z" level=info msg="RemoveContainer for \"f57cb9d3a007d62ede304012058dd906c3c97c51d64e22fe7c26885075fca085\"" Feb 13 15:43:12.722180 containerd[1505]: time="2025-02-13T15:43:12.722129974Z" level=info msg="RemoveContainer for \"f57cb9d3a007d62ede304012058dd906c3c97c51d64e22fe7c26885075fca085\" returns successfully" Feb 13 15:43:12.722535 kubelet[2841]: I0213 15:43:12.722406 2841 scope.go:117] "RemoveContainer" containerID="f83780da1a30d42c50172cd1912d6ca818608ac8074a9fa8b6c89e310f5a2a37" Feb 13 15:43:12.725043 containerd[1505]: time="2025-02-13T15:43:12.724972211Z" level=info msg="RemoveContainer for \"f83780da1a30d42c50172cd1912d6ca818608ac8074a9fa8b6c89e310f5a2a37\"" Feb 13 15:43:12.730163 containerd[1505]: time="2025-02-13T15:43:12.730116919Z" level=info msg="RemoveContainer for \"f83780da1a30d42c50172cd1912d6ca818608ac8074a9fa8b6c89e310f5a2a37\" returns successfully" Feb 13 15:43:12.730658 kubelet[2841]: I0213 15:43:12.730504 2841 scope.go:117] "RemoveContainer" containerID="ca7e49730d9ab721f8d4a6e49743c029f6f5f8353235ed0794bb84e5e090f024" Feb 13 15:43:12.735307 containerd[1505]: time="2025-02-13T15:43:12.735271467Z" level=info msg="RemoveContainer for \"ca7e49730d9ab721f8d4a6e49743c029f6f5f8353235ed0794bb84e5e090f024\"" Feb 13 15:43:12.740990 containerd[1505]: time="2025-02-13T15:43:12.740924302Z" level=info msg="RemoveContainer for \"ca7e49730d9ab721f8d4a6e49743c029f6f5f8353235ed0794bb84e5e090f024\" returns successfully" Feb 13 15:43:12.741361 kubelet[2841]: I0213 15:43:12.741260 2841 scope.go:117] "RemoveContainer" containerID="681c2547599e96e6db2eb84b011619f813f4ac44a3865228df618457e23e53f9" Feb 13 15:43:12.745239 containerd[1505]: time="2025-02-13T15:43:12.744921354Z" level=info msg="RemoveContainer for 
\"681c2547599e96e6db2eb84b011619f813f4ac44a3865228df618457e23e53f9\"" Feb 13 15:43:12.749823 containerd[1505]: time="2025-02-13T15:43:12.749770538Z" level=info msg="RemoveContainer for \"681c2547599e96e6db2eb84b011619f813f4ac44a3865228df618457e23e53f9\" returns successfully" Feb 13 15:43:12.750390 kubelet[2841]: I0213 15:43:12.750100 2841 scope.go:117] "RemoveContainer" containerID="d9b7a99406ebf725ec1f9812df4517da72beff6868a64054a6003798934d4be7" Feb 13 15:43:12.751586 containerd[1505]: time="2025-02-13T15:43:12.751559362Z" level=info msg="RemoveContainer for \"d9b7a99406ebf725ec1f9812df4517da72beff6868a64054a6003798934d4be7\"" Feb 13 15:43:12.758917 containerd[1505]: time="2025-02-13T15:43:12.758794097Z" level=info msg="RemoveContainer for \"d9b7a99406ebf725ec1f9812df4517da72beff6868a64054a6003798934d4be7\" returns successfully" Feb 13 15:43:12.759641 kubelet[2841]: I0213 15:43:12.759496 2841 scope.go:117] "RemoveContainer" containerID="f57cb9d3a007d62ede304012058dd906c3c97c51d64e22fe7c26885075fca085" Feb 13 15:43:12.760372 containerd[1505]: time="2025-02-13T15:43:12.760057514Z" level=error msg="ContainerStatus for \"f57cb9d3a007d62ede304012058dd906c3c97c51d64e22fe7c26885075fca085\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f57cb9d3a007d62ede304012058dd906c3c97c51d64e22fe7c26885075fca085\": not found" Feb 13 15:43:12.760458 kubelet[2841]: E0213 15:43:12.760232 2841 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f57cb9d3a007d62ede304012058dd906c3c97c51d64e22fe7c26885075fca085\": not found" containerID="f57cb9d3a007d62ede304012058dd906c3c97c51d64e22fe7c26885075fca085" Feb 13 15:43:12.760458 kubelet[2841]: I0213 15:43:12.760261 2841 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f57cb9d3a007d62ede304012058dd906c3c97c51d64e22fe7c26885075fca085"} 
err="failed to get container status \"f57cb9d3a007d62ede304012058dd906c3c97c51d64e22fe7c26885075fca085\": rpc error: code = NotFound desc = an error occurred when try to find container \"f57cb9d3a007d62ede304012058dd906c3c97c51d64e22fe7c26885075fca085\": not found" Feb 13 15:43:12.760458 kubelet[2841]: I0213 15:43:12.760285 2841 scope.go:117] "RemoveContainer" containerID="f83780da1a30d42c50172cd1912d6ca818608ac8074a9fa8b6c89e310f5a2a37" Feb 13 15:43:12.761131 containerd[1505]: time="2025-02-13T15:43:12.761083007Z" level=error msg="ContainerStatus for \"f83780da1a30d42c50172cd1912d6ca818608ac8074a9fa8b6c89e310f5a2a37\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f83780da1a30d42c50172cd1912d6ca818608ac8074a9fa8b6c89e310f5a2a37\": not found" Feb 13 15:43:12.761554 kubelet[2841]: E0213 15:43:12.761362 2841 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f83780da1a30d42c50172cd1912d6ca818608ac8074a9fa8b6c89e310f5a2a37\": not found" containerID="f83780da1a30d42c50172cd1912d6ca818608ac8074a9fa8b6c89e310f5a2a37" Feb 13 15:43:12.761554 kubelet[2841]: I0213 15:43:12.761397 2841 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f83780da1a30d42c50172cd1912d6ca818608ac8074a9fa8b6c89e310f5a2a37"} err="failed to get container status \"f83780da1a30d42c50172cd1912d6ca818608ac8074a9fa8b6c89e310f5a2a37\": rpc error: code = NotFound desc = an error occurred when try to find container \"f83780da1a30d42c50172cd1912d6ca818608ac8074a9fa8b6c89e310f5a2a37\": not found" Feb 13 15:43:12.761554 kubelet[2841]: I0213 15:43:12.761417 2841 scope.go:117] "RemoveContainer" containerID="ca7e49730d9ab721f8d4a6e49743c029f6f5f8353235ed0794bb84e5e090f024" Feb 13 15:43:12.762245 containerd[1505]: time="2025-02-13T15:43:12.761851577Z" level=error msg="ContainerStatus for 
\"ca7e49730d9ab721f8d4a6e49743c029f6f5f8353235ed0794bb84e5e090f024\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ca7e49730d9ab721f8d4a6e49743c029f6f5f8353235ed0794bb84e5e090f024\": not found" Feb 13 15:43:12.762328 kubelet[2841]: E0213 15:43:12.762076 2841 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ca7e49730d9ab721f8d4a6e49743c029f6f5f8353235ed0794bb84e5e090f024\": not found" containerID="ca7e49730d9ab721f8d4a6e49743c029f6f5f8353235ed0794bb84e5e090f024" Feb 13 15:43:12.762328 kubelet[2841]: I0213 15:43:12.762111 2841 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ca7e49730d9ab721f8d4a6e49743c029f6f5f8353235ed0794bb84e5e090f024"} err="failed to get container status \"ca7e49730d9ab721f8d4a6e49743c029f6f5f8353235ed0794bb84e5e090f024\": rpc error: code = NotFound desc = an error occurred when try to find container \"ca7e49730d9ab721f8d4a6e49743c029f6f5f8353235ed0794bb84e5e090f024\": not found" Feb 13 15:43:12.762328 kubelet[2841]: I0213 15:43:12.762134 2841 scope.go:117] "RemoveContainer" containerID="681c2547599e96e6db2eb84b011619f813f4ac44a3865228df618457e23e53f9" Feb 13 15:43:12.762883 containerd[1505]: time="2025-02-13T15:43:12.762794030Z" level=error msg="ContainerStatus for \"681c2547599e96e6db2eb84b011619f813f4ac44a3865228df618457e23e53f9\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"681c2547599e96e6db2eb84b011619f813f4ac44a3865228df618457e23e53f9\": not found" Feb 13 15:43:12.763231 kubelet[2841]: E0213 15:43:12.763070 2841 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"681c2547599e96e6db2eb84b011619f813f4ac44a3865228df618457e23e53f9\": not found" 
containerID="681c2547599e96e6db2eb84b011619f813f4ac44a3865228df618457e23e53f9" Feb 13 15:43:12.763231 kubelet[2841]: I0213 15:43:12.763122 2841 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"681c2547599e96e6db2eb84b011619f813f4ac44a3865228df618457e23e53f9"} err="failed to get container status \"681c2547599e96e6db2eb84b011619f813f4ac44a3865228df618457e23e53f9\": rpc error: code = NotFound desc = an error occurred when try to find container \"681c2547599e96e6db2eb84b011619f813f4ac44a3865228df618457e23e53f9\": not found" Feb 13 15:43:12.763231 kubelet[2841]: I0213 15:43:12.763144 2841 scope.go:117] "RemoveContainer" containerID="d9b7a99406ebf725ec1f9812df4517da72beff6868a64054a6003798934d4be7" Feb 13 15:43:12.763940 containerd[1505]: time="2025-02-13T15:43:12.763878564Z" level=error msg="ContainerStatus for \"d9b7a99406ebf725ec1f9812df4517da72beff6868a64054a6003798934d4be7\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d9b7a99406ebf725ec1f9812df4517da72beff6868a64054a6003798934d4be7\": not found" Feb 13 15:43:12.764154 kubelet[2841]: E0213 15:43:12.764108 2841 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d9b7a99406ebf725ec1f9812df4517da72beff6868a64054a6003798934d4be7\": not found" containerID="d9b7a99406ebf725ec1f9812df4517da72beff6868a64054a6003798934d4be7" Feb 13 15:43:12.764154 kubelet[2841]: I0213 15:43:12.764134 2841 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d9b7a99406ebf725ec1f9812df4517da72beff6868a64054a6003798934d4be7"} err="failed to get container status \"d9b7a99406ebf725ec1f9812df4517da72beff6868a64054a6003798934d4be7\": rpc error: code = NotFound desc = an error occurred when try to find container \"d9b7a99406ebf725ec1f9812df4517da72beff6868a64054a6003798934d4be7\": not found" Feb 13 
15:43:13.072044 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-20ffb6cc88fe2ec252d641032ce58f02fe673152d246de827c43c8d5197d9e9a-rootfs.mount: Deactivated successfully. Feb 13 15:43:13.072157 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bfeacc5a83d2bdf0af8af3e3699d3c969f8e0c05599c4eccd11d3eeeeb16807d-rootfs.mount: Deactivated successfully. Feb 13 15:43:13.072225 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-bfeacc5a83d2bdf0af8af3e3699d3c969f8e0c05599c4eccd11d3eeeeb16807d-shm.mount: Deactivated successfully. Feb 13 15:43:13.072345 systemd[1]: var-lib-kubelet-pods-63551fa6\x2da653\x2d407e\x2db2ba\x2d80bb5f391b3f-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dwwb29.mount: Deactivated successfully. Feb 13 15:43:13.072414 systemd[1]: var-lib-kubelet-pods-954e233d\x2d23e5\x2d4ebe\x2d8a22\x2d329fae13f492-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d2r8pb.mount: Deactivated successfully. Feb 13 15:43:13.072468 systemd[1]: var-lib-kubelet-pods-954e233d\x2d23e5\x2d4ebe\x2d8a22\x2d329fae13f492-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Feb 13 15:43:13.072558 systemd[1]: var-lib-kubelet-pods-954e233d\x2d23e5\x2d4ebe\x2d8a22\x2d329fae13f492-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
Feb 13 15:43:13.692608 kubelet[2841]: I0213 15:43:13.692522 2841 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="63551fa6-a653-407e-b2ba-80bb5f391b3f" path="/var/lib/kubelet/pods/63551fa6-a653-407e-b2ba-80bb5f391b3f/volumes" Feb 13 15:43:13.693473 kubelet[2841]: I0213 15:43:13.693408 2841 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="954e233d-23e5-4ebe-8a22-329fae13f492" path="/var/lib/kubelet/pods/954e233d-23e5-4ebe-8a22-329fae13f492/volumes" Feb 13 15:43:14.154393 sshd[4430]: Connection closed by 139.178.89.65 port 42500 Feb 13 15:43:14.155575 sshd-session[4428]: pam_unix(sshd:session): session closed for user core Feb 13 15:43:14.161681 systemd[1]: sshd@19-49.13.212.147:22-139.178.89.65:42500.service: Deactivated successfully. Feb 13 15:43:14.164477 systemd[1]: session-20.scope: Deactivated successfully. Feb 13 15:43:14.164836 systemd[1]: session-20.scope: Consumed 1.361s CPU time. Feb 13 15:43:14.166185 systemd-logind[1487]: Session 20 logged out. Waiting for processes to exit. Feb 13 15:43:14.167260 systemd-logind[1487]: Removed session 20. Feb 13 15:43:14.336078 systemd[1]: Started sshd@20-49.13.212.147:22-139.178.89.65:42508.service - OpenSSH per-connection server daemon (139.178.89.65:42508). Feb 13 15:43:14.881143 kubelet[2841]: E0213 15:43:14.880989 2841 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 15:43:15.323175 sshd[4593]: Accepted publickey for core from 139.178.89.65 port 42508 ssh2: RSA SHA256:Uozn9z6525dahd1u4B5WCCi8tKj4bLjcDsCj6OgO54I Feb 13 15:43:15.325582 sshd-session[4593]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:43:15.332128 systemd-logind[1487]: New session 21 of user core. Feb 13 15:43:15.340902 systemd[1]: Started session-21.scope - Session 21 of User core. 
Feb 13 15:43:16.360327 kubelet[2841]: I0213 15:43:16.358554 2841 setters.go:580] "Node became not ready" node="ci-4186-1-1-3-ffab21d6e1" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-02-13T15:43:16Z","lastTransitionTime":"2025-02-13T15:43:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Feb 13 15:43:16.720787 kubelet[2841]: I0213 15:43:16.720587 2841 topology_manager.go:215] "Topology Admit Handler" podUID="83ee47b6-4600-4432-b3a2-f0a24e204495" podNamespace="kube-system" podName="cilium-mh9nr" Feb 13 15:43:16.721349 kubelet[2841]: E0213 15:43:16.721248 2841 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="954e233d-23e5-4ebe-8a22-329fae13f492" containerName="mount-bpf-fs" Feb 13 15:43:16.721349 kubelet[2841]: E0213 15:43:16.721283 2841 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="954e233d-23e5-4ebe-8a22-329fae13f492" containerName="clean-cilium-state" Feb 13 15:43:16.721349 kubelet[2841]: E0213 15:43:16.721290 2841 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="954e233d-23e5-4ebe-8a22-329fae13f492" containerName="apply-sysctl-overwrites" Feb 13 15:43:16.721349 kubelet[2841]: E0213 15:43:16.721296 2841 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="63551fa6-a653-407e-b2ba-80bb5f391b3f" containerName="cilium-operator" Feb 13 15:43:16.721349 kubelet[2841]: E0213 15:43:16.721302 2841 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="954e233d-23e5-4ebe-8a22-329fae13f492" containerName="cilium-agent" Feb 13 15:43:16.721349 kubelet[2841]: E0213 15:43:16.721310 2841 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="954e233d-23e5-4ebe-8a22-329fae13f492" containerName="mount-cgroup" Feb 13 15:43:16.721964 kubelet[2841]: I0213 15:43:16.721337 2841 memory_manager.go:354] "RemoveStaleState removing 
state" podUID="954e233d-23e5-4ebe-8a22-329fae13f492" containerName="cilium-agent" Feb 13 15:43:16.721964 kubelet[2841]: I0213 15:43:16.721635 2841 memory_manager.go:354] "RemoveStaleState removing state" podUID="63551fa6-a653-407e-b2ba-80bb5f391b3f" containerName="cilium-operator" Feb 13 15:43:16.732925 systemd[1]: Created slice kubepods-burstable-pod83ee47b6_4600_4432_b3a2_f0a24e204495.slice - libcontainer container kubepods-burstable-pod83ee47b6_4600_4432_b3a2_f0a24e204495.slice. Feb 13 15:43:16.737049 kubelet[2841]: W0213 15:43:16.736155 2841 reflector.go:547] object-"kube-system"/"cilium-clustermesh": failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:ci-4186-1-1-3-ffab21d6e1" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4186-1-1-3-ffab21d6e1' and this object Feb 13 15:43:16.737049 kubelet[2841]: E0213 15:43:16.736211 2841 reflector.go:150] object-"kube-system"/"cilium-clustermesh": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:ci-4186-1-1-3-ffab21d6e1" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4186-1-1-3-ffab21d6e1' and this object Feb 13 15:43:16.737049 kubelet[2841]: W0213 15:43:16.736830 2841 reflector.go:547] object-"kube-system"/"cilium-ipsec-keys": failed to list *v1.Secret: secrets "cilium-ipsec-keys" is forbidden: User "system:node:ci-4186-1-1-3-ffab21d6e1" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4186-1-1-3-ffab21d6e1' and this object Feb 13 15:43:16.737049 kubelet[2841]: E0213 15:43:16.736857 2841 reflector.go:150] object-"kube-system"/"cilium-ipsec-keys": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "cilium-ipsec-keys" is forbidden: User "system:node:ci-4186-1-1-3-ffab21d6e1" cannot list resource 
"secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4186-1-1-3-ffab21d6e1' and this object Feb 13 15:43:16.737049 kubelet[2841]: W0213 15:43:16.736901 2841 reflector.go:547] object-"kube-system"/"hubble-server-certs": failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:ci-4186-1-1-3-ffab21d6e1" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4186-1-1-3-ffab21d6e1' and this object Feb 13 15:43:16.737290 kubelet[2841]: E0213 15:43:16.736911 2841 reflector.go:150] object-"kube-system"/"hubble-server-certs": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:ci-4186-1-1-3-ffab21d6e1" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4186-1-1-3-ffab21d6e1' and this object Feb 13 15:43:16.737290 kubelet[2841]: W0213 15:43:16.737001 2841 reflector.go:547] object-"kube-system"/"cilium-config": failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:ci-4186-1-1-3-ffab21d6e1" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4186-1-1-3-ffab21d6e1' and this object Feb 13 15:43:16.737290 kubelet[2841]: E0213 15:43:16.737017 2841 reflector.go:150] object-"kube-system"/"cilium-config": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:ci-4186-1-1-3-ffab21d6e1" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4186-1-1-3-ffab21d6e1' and this object Feb 13 15:43:16.848428 kubelet[2841]: I0213 15:43:16.848233 2841 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: 
\"kubernetes.io/configmap/83ee47b6-4600-4432-b3a2-f0a24e204495-cilium-config-path\") pod \"cilium-mh9nr\" (UID: \"83ee47b6-4600-4432-b3a2-f0a24e204495\") " pod="kube-system/cilium-mh9nr" Feb 13 15:43:16.848428 kubelet[2841]: I0213 15:43:16.848310 2841 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/83ee47b6-4600-4432-b3a2-f0a24e204495-cilium-ipsec-secrets\") pod \"cilium-mh9nr\" (UID: \"83ee47b6-4600-4432-b3a2-f0a24e204495\") " pod="kube-system/cilium-mh9nr" Feb 13 15:43:16.848428 kubelet[2841]: I0213 15:43:16.848342 2841 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/83ee47b6-4600-4432-b3a2-f0a24e204495-host-proc-sys-kernel\") pod \"cilium-mh9nr\" (UID: \"83ee47b6-4600-4432-b3a2-f0a24e204495\") " pod="kube-system/cilium-mh9nr" Feb 13 15:43:16.848428 kubelet[2841]: I0213 15:43:16.848379 2841 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/83ee47b6-4600-4432-b3a2-f0a24e204495-hostproc\") pod \"cilium-mh9nr\" (UID: \"83ee47b6-4600-4432-b3a2-f0a24e204495\") " pod="kube-system/cilium-mh9nr" Feb 13 15:43:16.848428 kubelet[2841]: I0213 15:43:16.848426 2841 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/83ee47b6-4600-4432-b3a2-f0a24e204495-etc-cni-netd\") pod \"cilium-mh9nr\" (UID: \"83ee47b6-4600-4432-b3a2-f0a24e204495\") " pod="kube-system/cilium-mh9nr" Feb 13 15:43:16.848923 kubelet[2841]: I0213 15:43:16.848465 2841 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/83ee47b6-4600-4432-b3a2-f0a24e204495-host-proc-sys-net\") pod \"cilium-mh9nr\" (UID: 
\"83ee47b6-4600-4432-b3a2-f0a24e204495\") " pod="kube-system/cilium-mh9nr" Feb 13 15:43:16.848923 kubelet[2841]: I0213 15:43:16.848495 2841 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/83ee47b6-4600-4432-b3a2-f0a24e204495-hubble-tls\") pod \"cilium-mh9nr\" (UID: \"83ee47b6-4600-4432-b3a2-f0a24e204495\") " pod="kube-system/cilium-mh9nr" Feb 13 15:43:16.848923 kubelet[2841]: I0213 15:43:16.848528 2841 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/83ee47b6-4600-4432-b3a2-f0a24e204495-cilium-run\") pod \"cilium-mh9nr\" (UID: \"83ee47b6-4600-4432-b3a2-f0a24e204495\") " pod="kube-system/cilium-mh9nr" Feb 13 15:43:16.848923 kubelet[2841]: I0213 15:43:16.848571 2841 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/83ee47b6-4600-4432-b3a2-f0a24e204495-cilium-cgroup\") pod \"cilium-mh9nr\" (UID: \"83ee47b6-4600-4432-b3a2-f0a24e204495\") " pod="kube-system/cilium-mh9nr" Feb 13 15:43:16.848923 kubelet[2841]: I0213 15:43:16.848621 2841 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/83ee47b6-4600-4432-b3a2-f0a24e204495-cni-path\") pod \"cilium-mh9nr\" (UID: \"83ee47b6-4600-4432-b3a2-f0a24e204495\") " pod="kube-system/cilium-mh9nr" Feb 13 15:43:16.848923 kubelet[2841]: I0213 15:43:16.848657 2841 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/83ee47b6-4600-4432-b3a2-f0a24e204495-xtables-lock\") pod \"cilium-mh9nr\" (UID: \"83ee47b6-4600-4432-b3a2-f0a24e204495\") " pod="kube-system/cilium-mh9nr" Feb 13 15:43:16.849265 kubelet[2841]: I0213 15:43:16.848685 2841 
reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/83ee47b6-4600-4432-b3a2-f0a24e204495-lib-modules\") pod \"cilium-mh9nr\" (UID: \"83ee47b6-4600-4432-b3a2-f0a24e204495\") " pod="kube-system/cilium-mh9nr" Feb 13 15:43:16.849265 kubelet[2841]: I0213 15:43:16.848715 2841 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qw5jt\" (UniqueName: \"kubernetes.io/projected/83ee47b6-4600-4432-b3a2-f0a24e204495-kube-api-access-qw5jt\") pod \"cilium-mh9nr\" (UID: \"83ee47b6-4600-4432-b3a2-f0a24e204495\") " pod="kube-system/cilium-mh9nr" Feb 13 15:43:16.849265 kubelet[2841]: I0213 15:43:16.848743 2841 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/83ee47b6-4600-4432-b3a2-f0a24e204495-bpf-maps\") pod \"cilium-mh9nr\" (UID: \"83ee47b6-4600-4432-b3a2-f0a24e204495\") " pod="kube-system/cilium-mh9nr" Feb 13 15:43:16.849265 kubelet[2841]: I0213 15:43:16.848769 2841 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/83ee47b6-4600-4432-b3a2-f0a24e204495-clustermesh-secrets\") pod \"cilium-mh9nr\" (UID: \"83ee47b6-4600-4432-b3a2-f0a24e204495\") " pod="kube-system/cilium-mh9nr" Feb 13 15:43:16.890661 sshd[4595]: Connection closed by 139.178.89.65 port 42508 Feb 13 15:43:16.892147 sshd-session[4593]: pam_unix(sshd:session): session closed for user core Feb 13 15:43:16.899789 systemd[1]: sshd@20-49.13.212.147:22-139.178.89.65:42508.service: Deactivated successfully. Feb 13 15:43:16.904074 systemd[1]: session-21.scope: Deactivated successfully. Feb 13 15:43:16.905714 systemd-logind[1487]: Session 21 logged out. Waiting for processes to exit. Feb 13 15:43:16.907943 systemd-logind[1487]: Removed session 21. 
Feb 13 15:43:17.070017 systemd[1]: Started sshd@21-49.13.212.147:22-139.178.89.65:40714.service - OpenSSH per-connection server daemon (139.178.89.65:40714). Feb 13 15:43:17.951230 kubelet[2841]: E0213 15:43:17.951136 2841 projected.go:269] Couldn't get secret kube-system/hubble-server-certs: failed to sync secret cache: timed out waiting for the condition Feb 13 15:43:17.951230 kubelet[2841]: E0213 15:43:17.951193 2841 projected.go:200] Error preparing data for projected volume hubble-tls for pod kube-system/cilium-mh9nr: failed to sync secret cache: timed out waiting for the condition Feb 13 15:43:17.952001 kubelet[2841]: E0213 15:43:17.951290 2841 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/83ee47b6-4600-4432-b3a2-f0a24e204495-hubble-tls podName:83ee47b6-4600-4432-b3a2-f0a24e204495 nodeName:}" failed. No retries permitted until 2025-02-13 15:43:18.45125541 +0000 UTC m=+348.866001930 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "hubble-tls" (UniqueName: "kubernetes.io/projected/83ee47b6-4600-4432-b3a2-f0a24e204495-hubble-tls") pod "cilium-mh9nr" (UID: "83ee47b6-4600-4432-b3a2-f0a24e204495") : failed to sync secret cache: timed out waiting for the condition Feb 13 15:43:17.952400 kubelet[2841]: E0213 15:43:17.952233 2841 secret.go:194] Couldn't get secret kube-system/cilium-clustermesh: failed to sync secret cache: timed out waiting for the condition Feb 13 15:43:17.952400 kubelet[2841]: E0213 15:43:17.952364 2841 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/83ee47b6-4600-4432-b3a2-f0a24e204495-clustermesh-secrets podName:83ee47b6-4600-4432-b3a2-f0a24e204495 nodeName:}" failed. No retries permitted until 2025-02-13 15:43:18.452337264 +0000 UTC m=+348.867083824 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "clustermesh-secrets" (UniqueName: "kubernetes.io/secret/83ee47b6-4600-4432-b3a2-f0a24e204495-clustermesh-secrets") pod "cilium-mh9nr" (UID: "83ee47b6-4600-4432-b3a2-f0a24e204495") : failed to sync secret cache: timed out waiting for the condition Feb 13 15:43:18.066580 sshd[4609]: Accepted publickey for core from 139.178.89.65 port 40714 ssh2: RSA SHA256:Uozn9z6525dahd1u4B5WCCi8tKj4bLjcDsCj6OgO54I Feb 13 15:43:18.069506 sshd-session[4609]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:43:18.075710 systemd-logind[1487]: New session 22 of user core. Feb 13 15:43:18.091998 systemd[1]: Started session-22.scope - Session 22 of User core. Feb 13 15:43:18.540298 containerd[1505]: time="2025-02-13T15:43:18.540191055Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-mh9nr,Uid:83ee47b6-4600-4432-b3a2-f0a24e204495,Namespace:kube-system,Attempt:0,}" Feb 13 15:43:18.567080 containerd[1505]: time="2025-02-13T15:43:18.566926607Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:43:18.567080 containerd[1505]: time="2025-02-13T15:43:18.567030249Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:43:18.567080 containerd[1505]: time="2025-02-13T15:43:18.567042209Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:43:18.567520 containerd[1505]: time="2025-02-13T15:43:18.567458174Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:43:18.594883 systemd[1]: Started cri-containerd-8a5ef529cbb9102efd8dd7a148c29d093943bf7cff4eff938e22f82ca547f100.scope - libcontainer container 8a5ef529cbb9102efd8dd7a148c29d093943bf7cff4eff938e22f82ca547f100. Feb 13 15:43:18.623840 containerd[1505]: time="2025-02-13T15:43:18.623309311Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-mh9nr,Uid:83ee47b6-4600-4432-b3a2-f0a24e204495,Namespace:kube-system,Attempt:0,} returns sandbox id \"8a5ef529cbb9102efd8dd7a148c29d093943bf7cff4eff938e22f82ca547f100\"" Feb 13 15:43:18.628477 containerd[1505]: time="2025-02-13T15:43:18.628338697Z" level=info msg="CreateContainer within sandbox \"8a5ef529cbb9102efd8dd7a148c29d093943bf7cff4eff938e22f82ca547f100\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 13 15:43:18.647821 containerd[1505]: time="2025-02-13T15:43:18.647748273Z" level=info msg="CreateContainer within sandbox \"8a5ef529cbb9102efd8dd7a148c29d093943bf7cff4eff938e22f82ca547f100\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"8febf3b9581dfb73fd5603fb52d5774881244a4ac31fb956b43ca30f8f90f3fe\"" Feb 13 15:43:18.651783 containerd[1505]: time="2025-02-13T15:43:18.648858847Z" level=info msg="StartContainer for \"8febf3b9581dfb73fd5603fb52d5774881244a4ac31fb956b43ca30f8f90f3fe\"" Feb 13 15:43:18.677813 systemd[1]: Started cri-containerd-8febf3b9581dfb73fd5603fb52d5774881244a4ac31fb956b43ca30f8f90f3fe.scope - libcontainer container 8febf3b9581dfb73fd5603fb52d5774881244a4ac31fb956b43ca30f8f90f3fe. Feb 13 15:43:18.709184 containerd[1505]: time="2025-02-13T15:43:18.709139722Z" level=info msg="StartContainer for \"8febf3b9581dfb73fd5603fb52d5774881244a4ac31fb956b43ca30f8f90f3fe\" returns successfully" Feb 13 15:43:18.722294 systemd[1]: cri-containerd-8febf3b9581dfb73fd5603fb52d5774881244a4ac31fb956b43ca30f8f90f3fe.scope: Deactivated successfully. 
Feb 13 15:43:18.745940 sshd[4612]: Connection closed by 139.178.89.65 port 40714 Feb 13 15:43:18.747417 sshd-session[4609]: pam_unix(sshd:session): session closed for user core Feb 13 15:43:18.754211 systemd[1]: sshd@21-49.13.212.147:22-139.178.89.65:40714.service: Deactivated successfully. Feb 13 15:43:18.755918 systemd-logind[1487]: Session 22 logged out. Waiting for processes to exit. Feb 13 15:43:18.757699 systemd[1]: session-22.scope: Deactivated successfully. Feb 13 15:43:18.763139 systemd-logind[1487]: Removed session 22. Feb 13 15:43:18.780709 containerd[1505]: time="2025-02-13T15:43:18.780562424Z" level=info msg="shim disconnected" id=8febf3b9581dfb73fd5603fb52d5774881244a4ac31fb956b43ca30f8f90f3fe namespace=k8s.io Feb 13 15:43:18.781432 containerd[1505]: time="2025-02-13T15:43:18.781117471Z" level=warning msg="cleaning up after shim disconnected" id=8febf3b9581dfb73fd5603fb52d5774881244a4ac31fb956b43ca30f8f90f3fe namespace=k8s.io Feb 13 15:43:18.781432 containerd[1505]: time="2025-02-13T15:43:18.781154792Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 15:43:18.794958 containerd[1505]: time="2025-02-13T15:43:18.794717090Z" level=warning msg="cleanup warnings time=\"2025-02-13T15:43:18Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Feb 13 15:43:18.924220 systemd[1]: Started sshd@22-49.13.212.147:22-139.178.89.65:40720.service - OpenSSH per-connection server daemon (139.178.89.65:40720). Feb 13 15:43:19.742096 containerd[1505]: time="2025-02-13T15:43:19.742050182Z" level=info msg="CreateContainer within sandbox \"8a5ef529cbb9102efd8dd7a148c29d093943bf7cff4eff938e22f82ca547f100\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Feb 13 15:43:19.758576 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2556817520.mount: Deactivated successfully. 
Feb 13 15:43:19.759388 containerd[1505]: time="2025-02-13T15:43:19.759197288Z" level=info msg="CreateContainer within sandbox \"8a5ef529cbb9102efd8dd7a148c29d093943bf7cff4eff938e22f82ca547f100\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"db34079fa69b59f4ceb0d9b320451479bea7ef1b33566126d11897d2a38fb41a\"" Feb 13 15:43:19.762517 containerd[1505]: time="2025-02-13T15:43:19.762055246Z" level=info msg="StartContainer for \"db34079fa69b59f4ceb0d9b320451479bea7ef1b33566126d11897d2a38fb41a\"" Feb 13 15:43:19.795064 systemd[1]: Started cri-containerd-db34079fa69b59f4ceb0d9b320451479bea7ef1b33566126d11897d2a38fb41a.scope - libcontainer container db34079fa69b59f4ceb0d9b320451479bea7ef1b33566126d11897d2a38fb41a. Feb 13 15:43:19.830389 containerd[1505]: time="2025-02-13T15:43:19.830338706Z" level=info msg="StartContainer for \"db34079fa69b59f4ceb0d9b320451479bea7ef1b33566126d11897d2a38fb41a\" returns successfully" Feb 13 15:43:19.842146 systemd[1]: cri-containerd-db34079fa69b59f4ceb0d9b320451479bea7ef1b33566126d11897d2a38fb41a.scope: Deactivated successfully. 
Feb 13 15:43:19.868870 containerd[1505]: time="2025-02-13T15:43:19.868806933Z" level=info msg="shim disconnected" id=db34079fa69b59f4ceb0d9b320451479bea7ef1b33566126d11897d2a38fb41a namespace=k8s.io Feb 13 15:43:19.868870 containerd[1505]: time="2025-02-13T15:43:19.868865574Z" level=warning msg="cleaning up after shim disconnected" id=db34079fa69b59f4ceb0d9b320451479bea7ef1b33566126d11897d2a38fb41a namespace=k8s.io Feb 13 15:43:19.868870 containerd[1505]: time="2025-02-13T15:43:19.868875294Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 15:43:19.882136 kubelet[2841]: E0213 15:43:19.882058 2841 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 15:43:19.929855 sshd[4725]: Accepted publickey for core from 139.178.89.65 port 40720 ssh2: RSA SHA256:Uozn9z6525dahd1u4B5WCCi8tKj4bLjcDsCj6OgO54I Feb 13 15:43:19.933445 sshd-session[4725]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:43:19.939757 systemd-logind[1487]: New session 23 of user core. Feb 13 15:43:19.944882 systemd[1]: Started session-23.scope - Session 23 of User core. Feb 13 15:43:20.469064 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-db34079fa69b59f4ceb0d9b320451479bea7ef1b33566126d11897d2a38fb41a-rootfs.mount: Deactivated successfully. Feb 13 15:43:20.744188 containerd[1505]: time="2025-02-13T15:43:20.743954954Z" level=info msg="CreateContainer within sandbox \"8a5ef529cbb9102efd8dd7a148c29d093943bf7cff4eff938e22f82ca547f100\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Feb 13 15:43:20.767065 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4018625551.mount: Deactivated successfully. 
Feb 13 15:43:20.771200 containerd[1505]: time="2025-02-13T15:43:20.771141073Z" level=info msg="CreateContainer within sandbox \"8a5ef529cbb9102efd8dd7a148c29d093943bf7cff4eff938e22f82ca547f100\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"df61b93491b960d6dca874715e4309fb263c71688eb5fd9eb7cd91b6d428c92f\"" Feb 13 15:43:20.771774 containerd[1505]: time="2025-02-13T15:43:20.771746561Z" level=info msg="StartContainer for \"df61b93491b960d6dca874715e4309fb263c71688eb5fd9eb7cd91b6d428c92f\"" Feb 13 15:43:20.808792 systemd[1]: Started cri-containerd-df61b93491b960d6dca874715e4309fb263c71688eb5fd9eb7cd91b6d428c92f.scope - libcontainer container df61b93491b960d6dca874715e4309fb263c71688eb5fd9eb7cd91b6d428c92f. Feb 13 15:43:20.862173 containerd[1505]: time="2025-02-13T15:43:20.862129753Z" level=info msg="StartContainer for \"df61b93491b960d6dca874715e4309fb263c71688eb5fd9eb7cd91b6d428c92f\" returns successfully" Feb 13 15:43:20.864698 systemd[1]: cri-containerd-df61b93491b960d6dca874715e4309fb263c71688eb5fd9eb7cd91b6d428c92f.scope: Deactivated successfully. Feb 13 15:43:20.913038 containerd[1505]: time="2025-02-13T15:43:20.912796101Z" level=info msg="shim disconnected" id=df61b93491b960d6dca874715e4309fb263c71688eb5fd9eb7cd91b6d428c92f namespace=k8s.io Feb 13 15:43:20.913038 containerd[1505]: time="2025-02-13T15:43:20.912852261Z" level=warning msg="cleaning up after shim disconnected" id=df61b93491b960d6dca874715e4309fb263c71688eb5fd9eb7cd91b6d428c92f namespace=k8s.io Feb 13 15:43:20.913038 containerd[1505]: time="2025-02-13T15:43:20.912861302Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 15:43:21.467282 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-df61b93491b960d6dca874715e4309fb263c71688eb5fd9eb7cd91b6d428c92f-rootfs.mount: Deactivated successfully. 
Feb 13 15:43:21.752742 containerd[1505]: time="2025-02-13T15:43:21.752605897Z" level=info msg="CreateContainer within sandbox \"8a5ef529cbb9102efd8dd7a148c29d093943bf7cff4eff938e22f82ca547f100\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Feb 13 15:43:21.772094 containerd[1505]: time="2025-02-13T15:43:21.771940712Z" level=info msg="CreateContainer within sandbox \"8a5ef529cbb9102efd8dd7a148c29d093943bf7cff4eff938e22f82ca547f100\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"26036a206f700c8e758599f9f6271fdc0dfe21582c541bd43b0b4b104ee32743\"" Feb 13 15:43:21.773983 containerd[1505]: time="2025-02-13T15:43:21.773919618Z" level=info msg="StartContainer for \"26036a206f700c8e758599f9f6271fdc0dfe21582c541bd43b0b4b104ee32743\"" Feb 13 15:43:21.804202 systemd[1]: Started cri-containerd-26036a206f700c8e758599f9f6271fdc0dfe21582c541bd43b0b4b104ee32743.scope - libcontainer container 26036a206f700c8e758599f9f6271fdc0dfe21582c541bd43b0b4b104ee32743. Feb 13 15:43:21.829951 systemd[1]: cri-containerd-26036a206f700c8e758599f9f6271fdc0dfe21582c541bd43b0b4b104ee32743.scope: Deactivated successfully. Feb 13 15:43:21.833627 containerd[1505]: time="2025-02-13T15:43:21.833125839Z" level=info msg="StartContainer for \"26036a206f700c8e758599f9f6271fdc0dfe21582c541bd43b0b4b104ee32743\" returns successfully" Feb 13 15:43:21.850063 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-26036a206f700c8e758599f9f6271fdc0dfe21582c541bd43b0b4b104ee32743-rootfs.mount: Deactivated successfully. 
Feb 13 15:43:21.855307 containerd[1505]: time="2025-02-13T15:43:21.855100249Z" level=info msg="shim disconnected" id=26036a206f700c8e758599f9f6271fdc0dfe21582c541bd43b0b4b104ee32743 namespace=k8s.io
Feb 13 15:43:21.855307 containerd[1505]: time="2025-02-13T15:43:21.855156570Z" level=warning msg="cleaning up after shim disconnected" id=26036a206f700c8e758599f9f6271fdc0dfe21582c541bd43b0b4b104ee32743 namespace=k8s.io
Feb 13 15:43:21.855307 containerd[1505]: time="2025-02-13T15:43:21.855164570Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 15:43:22.758240 containerd[1505]: time="2025-02-13T15:43:22.758188241Z" level=info msg="CreateContainer within sandbox \"8a5ef529cbb9102efd8dd7a148c29d093943bf7cff4eff938e22f82ca547f100\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Feb 13 15:43:22.790394 containerd[1505]: time="2025-02-13T15:43:22.790191823Z" level=info msg="CreateContainer within sandbox \"8a5ef529cbb9102efd8dd7a148c29d093943bf7cff4eff938e22f82ca547f100\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"e7648707f8a911cbac95215071d4374c4f80fb18690ddeda47e82ff759afe2f8\""
Feb 13 15:43:22.793759 containerd[1505]: time="2025-02-13T15:43:22.791800684Z" level=info msg="StartContainer for \"e7648707f8a911cbac95215071d4374c4f80fb18690ddeda47e82ff759afe2f8\""
Feb 13 15:43:22.827825 systemd[1]: Started cri-containerd-e7648707f8a911cbac95215071d4374c4f80fb18690ddeda47e82ff759afe2f8.scope - libcontainer container e7648707f8a911cbac95215071d4374c4f80fb18690ddeda47e82ff759afe2f8.
Feb 13 15:43:22.866756 containerd[1505]: time="2025-02-13T15:43:22.866527750Z" level=info msg="StartContainer for \"e7648707f8a911cbac95215071d4374c4f80fb18690ddeda47e82ff759afe2f8\" returns successfully"
Feb 13 15:43:23.214153 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce))
Feb 13 15:43:23.780701 kubelet[2841]: I0213 15:43:23.780401 2841 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-mh9nr" podStartSLOduration=7.780383206 podStartE2EDuration="7.780383206s" podCreationTimestamp="2025-02-13 15:43:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:43:23.780283124 +0000 UTC m=+354.195029644" watchObservedRunningTime="2025-02-13 15:43:23.780383206 +0000 UTC m=+354.195129726"
Feb 13 15:43:26.261129 systemd-networkd[1387]: lxc_health: Link UP
Feb 13 15:43:26.268231 systemd-networkd[1387]: lxc_health: Gained carrier
Feb 13 15:43:27.849916 systemd-networkd[1387]: lxc_health: Gained IPv6LL
Feb 13 15:43:29.725323 containerd[1505]: time="2025-02-13T15:43:29.725087217Z" level=info msg="StopPodSandbox for \"20ffb6cc88fe2ec252d641032ce58f02fe673152d246de827c43c8d5197d9e9a\""
Feb 13 15:43:29.725323 containerd[1505]: time="2025-02-13T15:43:29.725207258Z" level=info msg="TearDown network for sandbox \"20ffb6cc88fe2ec252d641032ce58f02fe673152d246de827c43c8d5197d9e9a\" successfully"
Feb 13 15:43:29.725323 containerd[1505]: time="2025-02-13T15:43:29.725220139Z" level=info msg="StopPodSandbox for \"20ffb6cc88fe2ec252d641032ce58f02fe673152d246de827c43c8d5197d9e9a\" returns successfully"
Feb 13 15:43:29.726169 containerd[1505]: time="2025-02-13T15:43:29.725809026Z" level=info msg="RemovePodSandbox for \"20ffb6cc88fe2ec252d641032ce58f02fe673152d246de827c43c8d5197d9e9a\""
Feb 13 15:43:29.726169 containerd[1505]: time="2025-02-13T15:43:29.725844787Z" level=info msg="Forcibly stopping sandbox \"20ffb6cc88fe2ec252d641032ce58f02fe673152d246de827c43c8d5197d9e9a\""
Feb 13 15:43:29.726169 containerd[1505]: time="2025-02-13T15:43:29.725899508Z" level=info msg="TearDown network for sandbox \"20ffb6cc88fe2ec252d641032ce58f02fe673152d246de827c43c8d5197d9e9a\" successfully"
Feb 13 15:43:29.731853 containerd[1505]: time="2025-02-13T15:43:29.731421580Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"20ffb6cc88fe2ec252d641032ce58f02fe673152d246de827c43c8d5197d9e9a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 15:43:29.731853 containerd[1505]: time="2025-02-13T15:43:29.731491221Z" level=info msg="RemovePodSandbox \"20ffb6cc88fe2ec252d641032ce58f02fe673152d246de827c43c8d5197d9e9a\" returns successfully"
Feb 13 15:43:29.732418 containerd[1505]: time="2025-02-13T15:43:29.732244151Z" level=info msg="StopPodSandbox for \"bfeacc5a83d2bdf0af8af3e3699d3c969f8e0c05599c4eccd11d3eeeeb16807d\""
Feb 13 15:43:29.732418 containerd[1505]: time="2025-02-13T15:43:29.732362753Z" level=info msg="TearDown network for sandbox \"bfeacc5a83d2bdf0af8af3e3699d3c969f8e0c05599c4eccd11d3eeeeb16807d\" successfully"
Feb 13 15:43:29.732418 containerd[1505]: time="2025-02-13T15:43:29.732373033Z" level=info msg="StopPodSandbox for \"bfeacc5a83d2bdf0af8af3e3699d3c969f8e0c05599c4eccd11d3eeeeb16807d\" returns successfully"
Feb 13 15:43:29.734280 containerd[1505]: time="2025-02-13T15:43:29.733767531Z" level=info msg="RemovePodSandbox for \"bfeacc5a83d2bdf0af8af3e3699d3c969f8e0c05599c4eccd11d3eeeeb16807d\""
Feb 13 15:43:29.734280 containerd[1505]: time="2025-02-13T15:43:29.733823572Z" level=info msg="Forcibly stopping sandbox \"bfeacc5a83d2bdf0af8af3e3699d3c969f8e0c05599c4eccd11d3eeeeb16807d\""
Feb 13 15:43:29.734280 containerd[1505]: time="2025-02-13T15:43:29.733952734Z" level=info msg="TearDown network for sandbox \"bfeacc5a83d2bdf0af8af3e3699d3c969f8e0c05599c4eccd11d3eeeeb16807d\" successfully"
Feb 13 15:43:29.740400 containerd[1505]: time="2025-02-13T15:43:29.740314618Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"bfeacc5a83d2bdf0af8af3e3699d3c969f8e0c05599c4eccd11d3eeeeb16807d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 15:43:29.740707 containerd[1505]: time="2025-02-13T15:43:29.740569861Z" level=info msg="RemovePodSandbox \"bfeacc5a83d2bdf0af8af3e3699d3c969f8e0c05599c4eccd11d3eeeeb16807d\" returns successfully"
Feb 13 15:43:31.145317 systemd[1]: run-containerd-runc-k8s.io-e7648707f8a911cbac95215071d4374c4f80fb18690ddeda47e82ff759afe2f8-runc.j9eWxb.mount: Deactivated successfully.
Feb 13 15:43:31.206354 kubelet[2841]: E0213 15:43:31.206254 2841 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:55878->127.0.0.1:44327: write tcp 127.0.0.1:55878->127.0.0.1:44327: write: broken pipe
Feb 13 15:43:33.293540 systemd[1]: run-containerd-runc-k8s.io-e7648707f8a911cbac95215071d4374c4f80fb18690ddeda47e82ff759afe2f8-runc.IzV5zu.mount: Deactivated successfully.
Feb 13 15:43:33.510493 sshd[4788]: Connection closed by 139.178.89.65 port 40720
Feb 13 15:43:33.509785 sshd-session[4725]: pam_unix(sshd:session): session closed for user core
Feb 13 15:43:33.514981 systemd[1]: sshd@22-49.13.212.147:22-139.178.89.65:40720.service: Deactivated successfully.
Feb 13 15:43:33.518151 systemd[1]: session-23.scope: Deactivated successfully.
Feb 13 15:43:33.521501 systemd-logind[1487]: Session 23 logged out. Waiting for processes to exit.
Feb 13 15:43:33.522922 systemd-logind[1487]: Removed session 23.