Jan 13 20:16:49.891268 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Jan 13 20:16:49.891307 kernel: Linux version 6.6.71-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241116 p3) 14.2.1 20241116, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT Mon Jan 13 18:56:28 -00 2025
Jan 13 20:16:49.891318 kernel: KASLR enabled
Jan 13 20:16:49.891324 kernel: efi: EFI v2.7 by EDK II
Jan 13 20:16:49.891330 kernel: efi: SMBIOS 3.0=0x135ed0000 MEMATTR=0x133c6b018 ACPI 2.0=0x132430018 RNG=0x13243e918 MEMRESERVE=0x132357218
Jan 13 20:16:49.891335 kernel: random: crng init done
Jan 13 20:16:49.891342 kernel: secureboot: Secure boot disabled
Jan 13 20:16:49.891348 kernel: ACPI: Early table checksum verification disabled
Jan 13 20:16:49.891354 kernel: ACPI: RSDP 0x0000000132430018 000024 (v02 BOCHS )
Jan 13 20:16:49.891360 kernel: ACPI: XSDT 0x000000013243FE98 00006C (v01 BOCHS BXPC 00000001 01000013)
Jan 13 20:16:49.891368 kernel: ACPI: FACP 0x000000013243FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 20:16:49.891373 kernel: ACPI: DSDT 0x0000000132437518 001468 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 20:16:49.891379 kernel: ACPI: APIC 0x000000013243FC18 000108 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 20:16:49.891385 kernel: ACPI: PPTT 0x000000013243FD98 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 20:16:49.891392 kernel: ACPI: GTDT 0x000000013243D898 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 20:16:49.891399 kernel: ACPI: MCFG 0x000000013243FF98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 20:16:49.891406 kernel: ACPI: SPCR 0x000000013243E818 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 20:16:49.891412 kernel: ACPI: DBG2 0x000000013243E898 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 20:16:49.891418 kernel: ACPI: IORT 0x000000013243E418 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 20:16:49.891424 kernel: ACPI: BGRT 0x000000013243E798 000038 (v01 INTEL EDK2 00000002 01000013)
Jan 13 20:16:49.891430 kernel: ACPI: SPCR: console: pl011,mmio32,0x9000000,9600
Jan 13 20:16:49.891436 kernel: NUMA: Failed to initialise from firmware
Jan 13 20:16:49.891442 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x0000000139ffffff]
Jan 13 20:16:49.891449 kernel: NUMA: NODE_DATA [mem 0x13981f800-0x139824fff]
Jan 13 20:16:49.891455 kernel: Zone ranges:
Jan 13 20:16:49.891461 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff]
Jan 13 20:16:49.891469 kernel: DMA32 empty
Jan 13 20:16:49.891475 kernel: Normal [mem 0x0000000100000000-0x0000000139ffffff]
Jan 13 20:16:49.891481 kernel: Movable zone start for each node
Jan 13 20:16:49.891487 kernel: Early memory node ranges
Jan 13 20:16:49.891493 kernel: node 0: [mem 0x0000000040000000-0x000000013243ffff]
Jan 13 20:16:49.891499 kernel: node 0: [mem 0x0000000132440000-0x000000013272ffff]
Jan 13 20:16:49.891505 kernel: node 0: [mem 0x0000000132730000-0x0000000135bfffff]
Jan 13 20:16:49.891511 kernel: node 0: [mem 0x0000000135c00000-0x0000000135fdffff]
Jan 13 20:16:49.891517 kernel: node 0: [mem 0x0000000135fe0000-0x0000000139ffffff]
Jan 13 20:16:49.891524 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x0000000139ffffff]
Jan 13 20:16:49.891530 kernel: On node 0, zone Normal: 24576 pages in unavailable ranges
Jan 13 20:16:49.891538 kernel: psci: probing for conduit method from ACPI.
Jan 13 20:16:49.891544 kernel: psci: PSCIv1.1 detected in firmware.
Jan 13 20:16:49.891550 kernel: psci: Using standard PSCI v0.2 function IDs
Jan 13 20:16:49.891559 kernel: psci: Trusted OS migration not required
Jan 13 20:16:49.891566 kernel: psci: SMC Calling Convention v1.1
Jan 13 20:16:49.891572 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Jan 13 20:16:49.891581 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Jan 13 20:16:49.891587 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Jan 13 20:16:49.891594 kernel: pcpu-alloc: [0] 0 [0] 1
Jan 13 20:16:49.891600 kernel: Detected PIPT I-cache on CPU0
Jan 13 20:16:49.891607 kernel: CPU features: detected: GIC system register CPU interface
Jan 13 20:16:49.891613 kernel: CPU features: detected: Hardware dirty bit management
Jan 13 20:16:49.891620 kernel: CPU features: detected: Spectre-v4
Jan 13 20:16:49.891626 kernel: CPU features: detected: Spectre-BHB
Jan 13 20:16:49.891633 kernel: CPU features: kernel page table isolation forced ON by KASLR
Jan 13 20:16:49.891639 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Jan 13 20:16:49.891646 kernel: CPU features: detected: ARM erratum 1418040
Jan 13 20:16:49.891653 kernel: CPU features: detected: SSBS not fully self-synchronizing
Jan 13 20:16:49.891660 kernel: alternatives: applying boot alternatives
Jan 13 20:16:49.891668 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=9798117b3b15ef802e3d618077f87253cc08e0d5280b8fe28b307e7558b7ebcc
Jan 13 20:16:49.891675 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jan 13 20:16:49.891681 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 13 20:16:49.891688 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 13 20:16:49.891694 kernel: Fallback order for Node 0: 0
Jan 13 20:16:49.891701 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1008000
Jan 13 20:16:49.891707 kernel: Policy zone: Normal
Jan 13 20:16:49.891714 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 13 20:16:49.891720 kernel: software IO TLB: area num 2.
Jan 13 20:16:49.891728 kernel: software IO TLB: mapped [mem 0x00000000fbfff000-0x00000000fffff000] (64MB)
Jan 13 20:16:49.891735 kernel: Memory: 3881016K/4096000K available (10304K kernel code, 2184K rwdata, 8092K rodata, 39936K init, 897K bss, 214984K reserved, 0K cma-reserved)
Jan 13 20:16:49.891741 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jan 13 20:16:49.891748 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 13 20:16:49.891755 kernel: rcu: RCU event tracing is enabled.
Jan 13 20:16:49.891762 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jan 13 20:16:49.891768 kernel: Trampoline variant of Tasks RCU enabled.
Jan 13 20:16:49.891775 kernel: Tracing variant of Tasks RCU enabled.
Jan 13 20:16:49.891781 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 13 20:16:49.891788 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jan 13 20:16:49.891794 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Jan 13 20:16:49.891802 kernel: GICv3: 256 SPIs implemented
Jan 13 20:16:49.891809 kernel: GICv3: 0 Extended SPIs implemented
Jan 13 20:16:49.891815 kernel: Root IRQ handler: gic_handle_irq
Jan 13 20:16:49.891822 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Jan 13 20:16:49.891828 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Jan 13 20:16:49.891835 kernel: ITS [mem 0x08080000-0x0809ffff]
Jan 13 20:16:49.891841 kernel: ITS@0x0000000008080000: allocated 8192 Devices @1000c0000 (indirect, esz 8, psz 64K, shr 1)
Jan 13 20:16:49.891848 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @1000d0000 (flat, esz 8, psz 64K, shr 1)
Jan 13 20:16:49.891855 kernel: GICv3: using LPI property table @0x00000001000e0000
Jan 13 20:16:49.891861 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000001000f0000
Jan 13 20:16:49.891868 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 13 20:16:49.891876 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jan 13 20:16:49.891882 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Jan 13 20:16:49.891889 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Jan 13 20:16:49.891896 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Jan 13 20:16:49.891902 kernel: Console: colour dummy device 80x25
Jan 13 20:16:49.891909 kernel: ACPI: Core revision 20230628
Jan 13 20:16:49.891916 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Jan 13 20:16:49.891923 kernel: pid_max: default: 32768 minimum: 301
Jan 13 20:16:49.891930 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 13 20:16:49.891936 kernel: landlock: Up and running.
Jan 13 20:16:49.891945 kernel: SELinux: Initializing.
Jan 13 20:16:49.891952 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 13 20:16:49.891959 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 13 20:16:49.891966 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 13 20:16:49.891973 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 13 20:16:49.891980 kernel: rcu: Hierarchical SRCU implementation.
Jan 13 20:16:49.891987 kernel: rcu: Max phase no-delay instances is 400.
Jan 13 20:16:49.891994 kernel: Platform MSI: ITS@0x8080000 domain created
Jan 13 20:16:49.892001 kernel: PCI/MSI: ITS@0x8080000 domain created
Jan 13 20:16:49.892009 kernel: Remapping and enabling EFI services.
Jan 13 20:16:49.892016 kernel: smp: Bringing up secondary CPUs ...
Jan 13 20:16:49.892023 kernel: Detected PIPT I-cache on CPU1
Jan 13 20:16:49.892030 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Jan 13 20:16:49.892036 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000100100000
Jan 13 20:16:49.892081 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jan 13 20:16:49.892089 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Jan 13 20:16:49.892096 kernel: smp: Brought up 1 node, 2 CPUs
Jan 13 20:16:49.892102 kernel: SMP: Total of 2 processors activated.
Jan 13 20:16:49.892109 kernel: CPU features: detected: 32-bit EL0 Support
Jan 13 20:16:49.892120 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Jan 13 20:16:49.892128 kernel: CPU features: detected: Common not Private translations
Jan 13 20:16:49.892140 kernel: CPU features: detected: CRC32 instructions
Jan 13 20:16:49.892148 kernel: CPU features: detected: Enhanced Virtualization Traps
Jan 13 20:16:49.892155 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Jan 13 20:16:49.892163 kernel: CPU features: detected: LSE atomic instructions
Jan 13 20:16:49.892170 kernel: CPU features: detected: Privileged Access Never
Jan 13 20:16:49.892177 kernel: CPU features: detected: RAS Extension Support
Jan 13 20:16:49.892185 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Jan 13 20:16:49.892193 kernel: CPU: All CPU(s) started at EL1
Jan 13 20:16:49.892200 kernel: alternatives: applying system-wide alternatives
Jan 13 20:16:49.892207 kernel: devtmpfs: initialized
Jan 13 20:16:49.892215 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 13 20:16:49.892222 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jan 13 20:16:49.892230 kernel: pinctrl core: initialized pinctrl subsystem
Jan 13 20:16:49.892237 kernel: SMBIOS 3.0.0 present.
Jan 13 20:16:49.892246 kernel: DMI: Hetzner vServer/KVM Virtual Machine, BIOS 20171111 11/11/2017
Jan 13 20:16:49.892635 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 13 20:16:49.892650 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Jan 13 20:16:49.892659 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jan 13 20:16:49.892667 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jan 13 20:16:49.892677 kernel: audit: initializing netlink subsys (disabled)
Jan 13 20:16:49.892685 kernel: audit: type=2000 audit(0.018:1): state=initialized audit_enabled=0 res=1
Jan 13 20:16:49.892694 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 13 20:16:49.892703 kernel: cpuidle: using governor menu
Jan 13 20:16:49.892715 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Jan 13 20:16:49.892722 kernel: ASID allocator initialised with 32768 entries
Jan 13 20:16:49.892730 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 13 20:16:49.892737 kernel: Serial: AMBA PL011 UART driver
Jan 13 20:16:49.892744 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Jan 13 20:16:49.892752 kernel: Modules: 0 pages in range for non-PLT usage
Jan 13 20:16:49.892759 kernel: Modules: 508880 pages in range for PLT usage
Jan 13 20:16:49.892766 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 13 20:16:49.892773 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Jan 13 20:16:49.892782 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Jan 13 20:16:49.892790 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Jan 13 20:16:49.892797 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 13 20:16:49.892804 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Jan 13 20:16:49.892811 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Jan 13 20:16:49.892818 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Jan 13 20:16:49.892825 kernel: ACPI: Added _OSI(Module Device)
Jan 13 20:16:49.892833 kernel: ACPI: Added _OSI(Processor Device)
Jan 13 20:16:49.892840 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jan 13 20:16:49.892849 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 13 20:16:49.892856 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 13 20:16:49.892863 kernel: ACPI: Interpreter enabled
Jan 13 20:16:49.892870 kernel: ACPI: Using GIC for interrupt routing
Jan 13 20:16:49.892877 kernel: ACPI: MCFG table detected, 1 entries
Jan 13 20:16:49.892885 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Jan 13 20:16:49.892892 kernel: printk: console [ttyAMA0] enabled
Jan 13 20:16:49.892899 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 13 20:16:49.893119 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jan 13 20:16:49.893390 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Jan 13 20:16:49.893483 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Jan 13 20:16:49.893622 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Jan 13 20:16:49.893696 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Jan 13 20:16:49.893706 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Jan 13 20:16:49.893713 kernel: PCI host bridge to bus 0000:00
Jan 13 20:16:49.893787 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Jan 13 20:16:49.893853 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Jan 13 20:16:49.893949 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Jan 13 20:16:49.894024 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 13 20:16:49.894159 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Jan 13 20:16:49.894240 kernel: pci 0000:00:01.0: [1af4:1050] type 00 class 0x038000
Jan 13 20:16:49.894389 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x11289000-0x11289fff]
Jan 13 20:16:49.894474 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000600000-0x8000603fff 64bit pref]
Jan 13 20:16:49.894551 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400
Jan 13 20:16:49.894616 kernel: pci 0000:00:02.0: reg 0x10: [mem 0x11288000-0x11288fff]
Jan 13 20:16:49.894688 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400
Jan 13 20:16:49.894751 kernel: pci 0000:00:02.1: reg 0x10: [mem 0x11287000-0x11287fff]
Jan 13 20:16:49.894824 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400
Jan 13 20:16:49.894894 kernel: pci 0000:00:02.2: reg 0x10: [mem 0x11286000-0x11286fff]
Jan 13 20:16:49.894970 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400
Jan 13 20:16:49.895050 kernel: pci 0000:00:02.3: reg 0x10: [mem 0x11285000-0x11285fff]
Jan 13 20:16:49.895153 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400
Jan 13 20:16:49.895222 kernel: pci 0000:00:02.4: reg 0x10: [mem 0x11284000-0x11284fff]
Jan 13 20:16:49.895323 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400
Jan 13 20:16:49.895439 kernel: pci 0000:00:02.5: reg 0x10: [mem 0x11283000-0x11283fff]
Jan 13 20:16:49.895585 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400
Jan 13 20:16:49.895723 kernel: pci 0000:00:02.6: reg 0x10: [mem 0x11282000-0x11282fff]
Jan 13 20:16:49.895819 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400
Jan 13 20:16:49.895886 kernel: pci 0000:00:02.7: reg 0x10: [mem 0x11281000-0x11281fff]
Jan 13 20:16:49.896013 kernel: pci 0000:00:03.0: [1b36:000c] type 01 class 0x060400
Jan 13 20:16:49.896133 kernel: pci 0000:00:03.0: reg 0x10: [mem 0x11280000-0x11280fff]
Jan 13 20:16:49.896234 kernel: pci 0000:00:04.0: [1b36:0002] type 00 class 0x070002
Jan 13 20:16:49.896385 kernel: pci 0000:00:04.0: reg 0x10: [io 0x8200-0x8207]
Jan 13 20:16:49.896508 kernel: pci 0000:01:00.0: [1af4:1041] type 00 class 0x020000
Jan 13 20:16:49.896644 kernel: pci 0000:01:00.0: reg 0x14: [mem 0x11000000-0x11000fff]
Jan 13 20:16:49.896718 kernel: pci 0000:01:00.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Jan 13 20:16:49.896860 kernel: pci 0000:01:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref]
Jan 13 20:16:49.896960 kernel: pci 0000:02:00.0: [1b36:000d] type 00 class 0x0c0330
Jan 13 20:16:49.897028 kernel: pci 0000:02:00.0: reg 0x10: [mem 0x10e00000-0x10e03fff 64bit]
Jan 13 20:16:49.897123 kernel: pci 0000:03:00.0: [1af4:1043] type 00 class 0x078000
Jan 13 20:16:49.897202 kernel: pci 0000:03:00.0: reg 0x14: [mem 0x10c00000-0x10c00fff]
Jan 13 20:16:49.898662 kernel: pci 0000:03:00.0: reg 0x20: [mem 0x8000100000-0x8000103fff 64bit pref]
Jan 13 20:16:49.899439 kernel: pci 0000:04:00.0: [1af4:1045] type 00 class 0x00ff00
Jan 13 20:16:49.899523 kernel: pci 0000:04:00.0: reg 0x20: [mem 0x8000200000-0x8000203fff 64bit pref]
Jan 13 20:16:49.899611 kernel: pci 0000:05:00.0: [1af4:1044] type 00 class 0x00ff00
Jan 13 20:16:49.899680 kernel: pci 0000:05:00.0: reg 0x20: [mem 0x8000300000-0x8000303fff 64bit pref]
Jan 13 20:16:49.899766 kernel: pci 0000:06:00.0: [1af4:1048] type 00 class 0x010000
Jan 13 20:16:49.899835 kernel: pci 0000:06:00.0: reg 0x14: [mem 0x10600000-0x10600fff]
Jan 13 20:16:49.899904 kernel: pci 0000:06:00.0: reg 0x20: [mem 0x8000400000-0x8000403fff 64bit pref]
Jan 13 20:16:49.899999 kernel: pci 0000:07:00.0: [1af4:1041] type 00 class 0x020000
Jan 13 20:16:49.900160 kernel: pci 0000:07:00.0: reg 0x14: [mem 0x10400000-0x10400fff]
Jan 13 20:16:49.900243 kernel: pci 0000:07:00.0: reg 0x20: [mem 0x8000500000-0x8000503fff 64bit pref]
Jan 13 20:16:49.900339 kernel: pci 0000:07:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref]
Jan 13 20:16:49.900410 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000
Jan 13 20:16:49.900474 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 01] add_size 100000 add_align 100000
Jan 13 20:16:49.900536 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff] to [bus 01] add_size 100000 add_align 100000
Jan 13 20:16:49.900611 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000
Jan 13 20:16:49.900674 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000
Jan 13 20:16:49.900737 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x001fffff] to [bus 02] add_size 100000 add_align 100000
Jan 13 20:16:49.900806 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000
Jan 13 20:16:49.900869 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 03] add_size 100000 add_align 100000
Jan 13 20:16:49.900931 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff] to [bus 03] add_size 100000 add_align 100000
Jan 13 20:16:49.900998 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000
Jan 13 20:16:49.901111 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 04] add_size 100000 add_align 100000
Jan 13 20:16:49.901189 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000
Jan 13 20:16:49.902572 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000
Jan 13 20:16:49.902664 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 05] add_size 100000 add_align 100000
Jan 13 20:16:49.902728 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x000fffff] to [bus 05] add_size 200000 add_align 100000
Jan 13 20:16:49.902798 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000
Jan 13 20:16:49.902862 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 06] add_size 100000 add_align 100000
Jan 13 20:16:49.902924 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff] to [bus 06] add_size 100000 add_align 100000
Jan 13 20:16:49.903001 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
Jan 13 20:16:49.903125 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 07] add_size 100000 add_align 100000
Jan 13 20:16:49.903195 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff] to [bus 07] add_size 100000 add_align 100000
Jan 13 20:16:49.904393 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
Jan 13 20:16:49.904504 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 08] add_size 200000 add_align 100000
Jan 13 20:16:49.904567 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff] to [bus 08] add_size 200000 add_align 100000
Jan 13 20:16:49.904636 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
Jan 13 20:16:49.904752 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 09] add_size 200000 add_align 100000
Jan 13 20:16:49.904829 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 09] add_size 200000 add_align 100000
Jan 13 20:16:49.904897 kernel: pci 0000:00:02.0: BAR 14: assigned [mem 0x10000000-0x101fffff]
Jan 13 20:16:49.904987 kernel: pci 0000:00:02.0: BAR 15: assigned [mem 0x8000000000-0x80001fffff 64bit pref]
Jan 13 20:16:49.905149 kernel: pci 0000:00:02.1: BAR 14: assigned [mem 0x10200000-0x103fffff]
Jan 13 20:16:49.905223 kernel: pci 0000:00:02.1: BAR 15: assigned [mem 0x8000200000-0x80003fffff 64bit pref]
Jan 13 20:16:49.905380 kernel: pci 0000:00:02.2: BAR 14: assigned [mem 0x10400000-0x105fffff]
Jan 13 20:16:49.905448 kernel: pci 0000:00:02.2: BAR 15: assigned [mem 0x8000400000-0x80005fffff 64bit pref]
Jan 13 20:16:49.905523 kernel: pci 0000:00:02.3: BAR 14: assigned [mem 0x10600000-0x107fffff]
Jan 13 20:16:49.905586 kernel: pci 0000:00:02.3: BAR 15: assigned [mem 0x8000600000-0x80007fffff 64bit pref]
Jan 13 20:16:49.905651 kernel: pci 0000:00:02.4: BAR 14: assigned [mem 0x10800000-0x109fffff]
Jan 13 20:16:49.905714 kernel: pci 0000:00:02.4: BAR 15: assigned [mem 0x8000800000-0x80009fffff 64bit pref]
Jan 13 20:16:49.905779 kernel: pci 0000:00:02.5: BAR 14: assigned [mem 0x10a00000-0x10bfffff]
Jan 13 20:16:49.905842 kernel: pci 0000:00:02.5: BAR 15: assigned [mem 0x8000a00000-0x8000bfffff 64bit pref]
Jan 13 20:16:49.905959 kernel: pci 0000:00:02.6: BAR 14: assigned [mem 0x10c00000-0x10dfffff]
Jan 13 20:16:49.906028 kernel: pci 0000:00:02.6: BAR 15: assigned [mem 0x8000c00000-0x8000dfffff 64bit pref]
Jan 13 20:16:49.906138 kernel: pci 0000:00:02.7: BAR 14: assigned [mem 0x10e00000-0x10ffffff]
Jan 13 20:16:49.906205 kernel: pci 0000:00:02.7: BAR 15: assigned [mem 0x8000e00000-0x8000ffffff 64bit pref]
Jan 13 20:16:49.907389 kernel: pci 0000:00:03.0: BAR 14: assigned [mem 0x11000000-0x111fffff]
Jan 13 20:16:49.907484 kernel: pci 0000:00:03.0: BAR 15: assigned [mem 0x8001000000-0x80011fffff 64bit pref]
Jan 13 20:16:49.907552 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8001200000-0x8001203fff 64bit pref]
Jan 13 20:16:49.907623 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x11200000-0x11200fff]
Jan 13 20:16:49.907690 kernel: pci 0000:00:02.0: BAR 0: assigned [mem 0x11201000-0x11201fff]
Jan 13 20:16:49.907752 kernel: pci 0000:00:02.0: BAR 13: assigned [io 0x1000-0x1fff]
Jan 13 20:16:49.907817 kernel: pci 0000:00:02.1: BAR 0: assigned [mem 0x11202000-0x11202fff]
Jan 13 20:16:49.907880 kernel: pci 0000:00:02.1: BAR 13: assigned [io 0x2000-0x2fff]
Jan 13 20:16:49.907947 kernel: pci 0000:00:02.2: BAR 0: assigned [mem 0x11203000-0x11203fff]
Jan 13 20:16:49.908009 kernel: pci 0000:00:02.2: BAR 13: assigned [io 0x3000-0x3fff]
Jan 13 20:16:49.908095 kernel: pci 0000:00:02.3: BAR 0: assigned [mem 0x11204000-0x11204fff]
Jan 13 20:16:49.908166 kernel: pci 0000:00:02.3: BAR 13: assigned [io 0x4000-0x4fff]
Jan 13 20:16:49.908231 kernel: pci 0000:00:02.4: BAR 0: assigned [mem 0x11205000-0x11205fff]
Jan 13 20:16:49.908308 kernel: pci 0000:00:02.4: BAR 13: assigned [io 0x5000-0x5fff]
Jan 13 20:16:49.908374 kernel: pci 0000:00:02.5: BAR 0: assigned [mem 0x11206000-0x11206fff]
Jan 13 20:16:49.908436 kernel: pci 0000:00:02.5: BAR 13: assigned [io 0x6000-0x6fff]
Jan 13 20:16:49.908502 kernel: pci 0000:00:02.6: BAR 0: assigned [mem 0x11207000-0x11207fff]
Jan 13 20:16:49.908565 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x7000-0x7fff]
Jan 13 20:16:49.908628 kernel: pci 0000:00:02.7: BAR 0: assigned [mem 0x11208000-0x11208fff]
Jan 13 20:16:49.908693 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x8000-0x8fff]
Jan 13 20:16:49.908755 kernel: pci 0000:00:03.0: BAR 0: assigned [mem 0x11209000-0x11209fff]
Jan 13 20:16:49.908816 kernel: pci 0000:00:03.0: BAR 13: assigned [io 0x9000-0x9fff]
Jan 13 20:16:49.908881 kernel: pci 0000:00:04.0: BAR 0: assigned [io 0xa000-0xa007]
Jan 13 20:16:49.908953 kernel: pci 0000:01:00.0: BAR 6: assigned [mem 0x10000000-0x1007ffff pref]
Jan 13 20:16:49.909019 kernel: pci 0000:01:00.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Jan 13 20:16:49.909095 kernel: pci 0000:01:00.0: BAR 1: assigned [mem 0x10080000-0x10080fff]
Jan 13 20:16:49.909159 kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
Jan 13 20:16:49.909225 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff]
Jan 13 20:16:49.910742 kernel: pci 0000:00:02.0: bridge window [mem 0x10000000-0x101fffff]
Jan 13 20:16:49.910861 kernel: pci 0000:00:02.0: bridge window [mem 0x8000000000-0x80001fffff 64bit pref]
Jan 13 20:16:49.910938 kernel: pci 0000:02:00.0: BAR 0: assigned [mem 0x10200000-0x10203fff 64bit]
Jan 13 20:16:49.911005 kernel: pci 0000:00:02.1: PCI bridge to [bus 02]
Jan 13 20:16:49.911099 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff]
Jan 13 20:16:49.911166 kernel: pci 0000:00:02.1: bridge window [mem 0x10200000-0x103fffff]
Jan 13 20:16:49.911229 kernel: pci 0000:00:02.1: bridge window [mem 0x8000200000-0x80003fffff 64bit pref]
Jan 13 20:16:49.911337 kernel: pci 0000:03:00.0: BAR 4: assigned [mem 0x8000400000-0x8000403fff 64bit pref]
Jan 13 20:16:49.911408 kernel: pci 0000:03:00.0: BAR 1: assigned [mem 0x10400000-0x10400fff]
Jan 13 20:16:49.911475 kernel: pci 0000:00:02.2: PCI bridge to [bus 03]
Jan 13 20:16:49.911539 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff]
Jan 13 20:16:49.911607 kernel: pci 0000:00:02.2: bridge window [mem 0x10400000-0x105fffff]
Jan 13 20:16:49.911671 kernel: pci 0000:00:02.2: bridge window [mem 0x8000400000-0x80005fffff 64bit pref]
Jan 13 20:16:49.911745 kernel: pci 0000:04:00.0: BAR 4: assigned [mem 0x8000600000-0x8000603fff 64bit pref]
Jan 13 20:16:49.911810 kernel: pci 0000:00:02.3: PCI bridge to [bus 04]
Jan 13 20:16:49.911875 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff]
Jan 13 20:16:49.911938 kernel: pci 0000:00:02.3: bridge window [mem 0x10600000-0x107fffff]
Jan 13 20:16:49.912000 kernel: pci 0000:00:02.3: bridge window [mem 0x8000600000-0x80007fffff 64bit pref]
Jan 13 20:16:49.912087 kernel: pci 0000:05:00.0: BAR 4: assigned [mem 0x8000800000-0x8000803fff 64bit pref]
Jan 13 20:16:49.912159 kernel: pci 0000:00:02.4: PCI bridge to [bus 05]
Jan 13 20:16:49.912222 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff]
Jan 13 20:16:49.912377 kernel: pci 0000:00:02.4: bridge window [mem 0x10800000-0x109fffff]
Jan 13 20:16:49.912444 kernel: pci 0000:00:02.4: bridge window [mem 0x8000800000-0x80009fffff 64bit pref]
Jan 13 20:16:49.912515 kernel: pci 0000:06:00.0: BAR 4: assigned [mem 0x8000a00000-0x8000a03fff 64bit pref]
Jan 13 20:16:49.912579 kernel: pci 0000:06:00.0: BAR 1: assigned [mem 0x10a00000-0x10a00fff]
Jan 13 20:16:49.912643 kernel: pci 0000:00:02.5: PCI bridge to [bus 06]
Jan 13 20:16:49.912704 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff]
Jan 13 20:16:49.912770 kernel: pci 0000:00:02.5: bridge window [mem 0x10a00000-0x10bfffff]
Jan 13 20:16:49.912838 kernel: pci 0000:00:02.5: bridge window [mem 0x8000a00000-0x8000bfffff 64bit pref]
Jan 13 20:16:49.912908 kernel: pci 0000:07:00.0: BAR 6: assigned [mem 0x10c00000-0x10c7ffff pref]
Jan 13 20:16:49.912972 kernel: pci 0000:07:00.0: BAR 4: assigned [mem 0x8000c00000-0x8000c03fff 64bit pref]
Jan 13 20:16:49.913035 kernel: pci 0000:07:00.0: BAR 1: assigned [mem 0x10c80000-0x10c80fff]
Jan 13 20:16:49.913145 kernel: pci 0000:00:02.6: PCI bridge to [bus 07]
Jan 13 20:16:49.913210 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff]
Jan 13 20:16:49.913290 kernel: pci 0000:00:02.6: bridge window [mem 0x10c00000-0x10dfffff]
Jan 13 20:16:49.913361 kernel: pci 0000:00:02.6: bridge window [mem 0x8000c00000-0x8000dfffff 64bit pref]
Jan 13 20:16:49.913427 kernel: pci 0000:00:02.7: PCI bridge to [bus 08]
Jan 13 20:16:49.913489 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff]
Jan 13 20:16:49.913549 kernel: pci 0000:00:02.7: bridge window [mem 0x10e00000-0x10ffffff]
Jan 13 20:16:49.913612 kernel: pci 0000:00:02.7: bridge window [mem 0x8000e00000-0x8000ffffff 64bit pref]
Jan 13 20:16:49.913677 kernel: pci 0000:00:03.0: PCI bridge to [bus 09]
Jan 13 20:16:49.913740 kernel: pci 0000:00:03.0: bridge window [io 0x9000-0x9fff]
Jan 13 20:16:49.913801 kernel: pci 0000:00:03.0: bridge window [mem 0x11000000-0x111fffff]
Jan 13 20:16:49.913866 kernel: pci 0000:00:03.0: bridge window [mem 0x8001000000-0x80011fffff 64bit pref]
Jan 13 20:16:49.913931 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Jan 13 20:16:49.913986 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Jan 13 20:16:49.914052 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Jan 13 20:16:49.914132 kernel: pci_bus 0000:01: resource 0 [io 0x1000-0x1fff]
Jan 13 20:16:49.914192 kernel: pci_bus 0000:01: resource 1 [mem 0x10000000-0x101fffff]
Jan 13 20:16:49.914290 kernel: pci_bus 0000:01: resource 2 [mem 0x8000000000-0x80001fffff 64bit pref]
Jan 13 20:16:49.914373 kernel: pci_bus 0000:02: resource 0 [io 0x2000-0x2fff]
Jan 13 20:16:49.914452 kernel: pci_bus 0000:02: resource 1 [mem 0x10200000-0x103fffff]
Jan 13 20:16:49.914515 kernel: pci_bus 0000:02: resource 2 [mem 0x8000200000-0x80003fffff 64bit pref]
Jan 13 20:16:49.914585 kernel: pci_bus 0000:03: resource 0 [io 0x3000-0x3fff]
Jan 13 20:16:49.914644 kernel: pci_bus 0000:03: resource 1 [mem 0x10400000-0x105fffff]
Jan 13 20:16:49.914703 kernel: pci_bus 0000:03: resource 2 [mem 0x8000400000-0x80005fffff 64bit pref]
Jan 13 20:16:49.914773 kernel: pci_bus 0000:04: resource 0 [io 0x4000-0x4fff]
Jan 13 20:16:49.914831 kernel: pci_bus 0000:04: resource 1 [mem 0x10600000-0x107fffff]
Jan 13 20:16:49.914891 kernel: pci_bus 0000:04: resource 2 [mem 0x8000600000-0x80007fffff 64bit pref]
Jan 13 20:16:49.914966 kernel: pci_bus 0000:05: resource 0 [io 0x5000-0x5fff]
Jan 13 20:16:49.915028 kernel: pci_bus 0000:05: resource 1 [mem 0x10800000-0x109fffff]
Jan 13 20:16:49.915128 kernel: pci_bus 0000:05: resource 2 [mem 0x8000800000-0x80009fffff 64bit pref]
Jan 13 20:16:49.915201 kernel: pci_bus 0000:06: resource 0 [io 0x6000-0x6fff]
Jan 13 20:16:49.916344 kernel: pci_bus 0000:06: resource 1 [mem 0x10a00000-0x10bfffff]
Jan 13 20:16:49.916436 kernel: pci_bus 0000:06: resource 2 [mem 0x8000a00000-0x8000bfffff 64bit pref]
Jan 13 20:16:49.916508 kernel: pci_bus 0000:07: resource 0 [io 0x7000-0x7fff]
Jan 13 20:16:49.916567 kernel: pci_bus 0000:07: resource 1 [mem 0x10c00000-0x10dfffff]
Jan 13 20:16:49.916633 kernel: pci_bus 0000:07: resource 2 [mem 0x8000c00000-0x8000dfffff 64bit pref]
Jan 13 20:16:49.916702 kernel: pci_bus 0000:08: resource 0 [io 0x8000-0x8fff]
Jan 13 20:16:49.916830 kernel: pci_bus 0000:08: resource 1 [mem 0x10e00000-0x10ffffff]
Jan 13 20:16:49.916894 kernel: pci_bus 0000:08: resource 2 [mem 0x8000e00000-0x8000ffffff 64bit pref]
Jan 13 20:16:49.916972 kernel: pci_bus 0000:09: resource 0 [io 0x9000-0x9fff]
Jan 13 20:16:49.917031 kernel: pci_bus 0000:09: resource 1 [mem 0x11000000-0x111fffff]
Jan 13 20:16:49.917144 kernel: pci_bus 0000:09: resource 2 [mem 0x8001000000-0x80011fffff 64bit pref]
Jan 13 20:16:49.917161 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Jan 13 20:16:49.917169 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Jan 13 20:16:49.917176 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Jan 13 20:16:49.917184 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Jan 13 20:16:49.917191 kernel: iommu: Default domain type: Translated
Jan 13 20:16:49.917199 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Jan 13 20:16:49.917207 kernel: efivars: Registered efivars operations
Jan 13 20:16:49.917214 kernel: vgaarb: loaded
Jan 13 20:16:49.917222 kernel: clocksource: Switched to clocksource arch_sys_counter
Jan 13 20:16:49.917231 kernel: VFS: Disk quotas dquot_6.6.0
Jan 13 20:16:49.917239 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 13 20:16:49.917247 kernel: pnp: PnP ACPI init
Jan 13 20:16:49.918413 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Jan 13 20:16:49.918430 kernel: pnp: PnP ACPI: found 1 devices
Jan 13 20:16:49.918438 kernel: NET: Registered PF_INET protocol family
Jan 13 20:16:49.918446 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 13 20:16:49.918454 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jan 13 20:16:49.918467 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 13 20:16:49.918475 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 13 20:16:49.918483 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jan 13 20:16:49.918490 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jan 13 20:16:49.918498 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 13 20:16:49.918505 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 13 20:16:49.918513 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 13 20:16:49.918590 kernel: pci 0000:02:00.0: enabling device (0000 -> 0002)
Jan 13 20:16:49.918601 kernel: PCI: CLS 0 bytes, default 64
Jan 13 20:16:49.918611 kernel: kvm [1]: HYP mode not available
Jan 13 20:16:49.918618 kernel: Initialise system trusted keyrings
Jan 13 20:16:49.918626 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jan 13 20:16:49.918634 kernel: Key type asymmetric registered
Jan 13 20:16:49.918641 kernel: Asymmetric key parser 'x509' registered
Jan 13 20:16:49.918648 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jan 13 20:16:49.918656 kernel: io scheduler mq-deadline registered
Jan 13 20:16:49.918663 kernel: io scheduler kyber registered
Jan 13 20:16:49.918670 kernel: io scheduler bfq registered
Jan 13 20:16:49.918681 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37
Jan 13 20:16:49.918748 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 50
Jan 13 20:16:49.918812 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 50
Jan 13 20:16:49.918876 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Jan 13 20:16:49.918942 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 51
Jan 13 20:16:49.919004 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 51
Jan 13 20:16:49.919185 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Jan 13 20:16:49.921317 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 52
Jan 13 20:16:49.921419 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 52
Jan 13 20:16:49.921486 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Jan 13 20:16:49.921555 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 53
Jan 13 20:16:49.921619 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 53
Jan 13 20:16:49.921690 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Jan 13 20:16:49.921759 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 54
Jan 13 20:16:49.921823 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 54
Jan 13 20:16:49.921886 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Jan 13 20:16:49.921955 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 55
Jan 13 20:16:49.922021 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 55
Jan 13 20:16:49.922106 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Jan 13 20:16:49.922177 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 56
Jan 13 20:16:49.922242 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 56
Jan 13 20:16:49.922320 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Jan 13 20:16:49.922388 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 57
Jan 13 20:16:49.922453 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 57
Jan 13 20:16:49.922518 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Jan 13 20:16:49.922529 kernel: ACPI: \_SB_.PCI0.GSI3: Enabled at IRQ 38
Jan 13 20:16:49.922593 kernel: pcieport 0000:00:03.0: PME: Signaling with IRQ 58
Jan 13 20:16:49.922660 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 58
Jan 13 20:16:49.922723 kernel: pcieport 0000:00:03.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Jan 13 20:16:49.922733 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Jan 13 20:16:49.922741 kernel: ACPI: button: Power Button [PWRB]
Jan 13 20:16:49.922751 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Jan 13 20:16:49.922822 kernel: virtio-pci 0000:03:00.0: enabling device (0000 -> 0002)
Jan 13 20:16:49.922902 kernel: virtio-pci 0000:04:00.0: enabling device (0000 -> 0002)
Jan 13 20:16:49.922996 kernel: virtio-pci 0000:07:00.0: enabling device (0000 -> 0002)
Jan 13 20:16:49.923011 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 13 20:16:49.923020 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35
Jan 13 20:16:49.923114 kernel: serial 0000:00:04.0: enabling device (0000 -> 0001)
Jan 13 20:16:49.923128 kernel: 0000:00:04.0: ttyS0 at I/O 0xa000 (irq = 45, base_baud = 115200) is a 16550A
Jan 13 20:16:49.923135 kernel: thunder_xcv, ver 1.0
Jan 13 20:16:49.923146 kernel: thunder_bgx, ver 1.0
Jan 13 20:16:49.923154 kernel: nicpf, ver 1.0
Jan 13 20:16:49.923161 kernel: nicvf, ver 1.0
Jan 13 20:16:49.923240 kernel: rtc-efi rtc-efi.0: registered as rtc0
Jan 13 20:16:49.924388 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-01-13T20:16:49 UTC (1736799409)
Jan 13 20:16:49.924408 kernel: hid: raw HID events driver (C) Jiri Kosina
Jan 13 20:16:49.924417 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
Jan 13 20:16:49.924425 kernel: watchdog: Delayed init of the lockup detector failed: -19
Jan 13 20:16:49.924440 kernel: watchdog: Hard watchdog permanently disabled
Jan 13 20:16:49.924448 kernel: NET: Registered PF_INET6 protocol family
Jan 13 20:16:49.924456 kernel: Segment Routing with IPv6
Jan 13 20:16:49.924463 kernel: In-situ OAM (IOAM) with IPv6
Jan 13 20:16:49.924470 kernel: NET: Registered PF_PACKET protocol family
Jan 13 20:16:49.924478 kernel: Key type dns_resolver registered
Jan 13 20:16:49.924485 kernel: registered taskstats version 1
Jan 13 20:16:49.924493 kernel: Loading compiled-in X.509 certificates
Jan 13 20:16:49.924500 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.71-flatcar: 46cb4d1b22f3a5974766fe7d7b651e2f296d4fe0'
Jan 13 20:16:49.924509 kernel: Key type .fscrypt registered
Jan 13 20:16:49.924517 kernel: Key type fscrypt-provisioning registered
Jan 13 20:16:49.924524 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 13 20:16:49.924532 kernel: ima: Allocated hash algorithm: sha1
Jan 13 20:16:49.924539 kernel: ima: No architecture policies found
Jan 13 20:16:49.924546 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Jan 13 20:16:49.924554 kernel: clk: Disabling unused clocks
Jan 13 20:16:49.924562 kernel: Freeing unused kernel memory: 39936K
Jan 13 20:16:49.924569 kernel: Run /init as init process
Jan 13 20:16:49.924578 kernel: with arguments:
Jan 13 20:16:49.924586 kernel: /init
Jan 13 20:16:49.924593 kernel: with environment:
Jan 13 20:16:49.924600 kernel: HOME=/
Jan 13 20:16:49.924607 kernel: TERM=linux
Jan 13 20:16:49.924615 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jan 13 20:16:49.924625 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 13 20:16:49.924634 systemd[1]: Detected virtualization kvm.
Jan 13 20:16:49.924644 systemd[1]: Detected architecture arm64.
Jan 13 20:16:49.924652 systemd[1]: Running in initrd.
Jan 13 20:16:49.924660 systemd[1]: No hostname configured, using default hostname.
Jan 13 20:16:49.924667 systemd[1]: Hostname set to .
Jan 13 20:16:49.924675 systemd[1]: Initializing machine ID from VM UUID.
Jan 13 20:16:49.924683 systemd[1]: Queued start job for default target initrd.target.
Jan 13 20:16:49.924691 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 13 20:16:49.924699 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 13 20:16:49.924710 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 13 20:16:49.924719 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 13 20:16:49.924741 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 13 20:16:49.924749 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 13 20:16:49.924761 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 13 20:16:49.924769 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 13 20:16:49.924779 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 13 20:16:49.924787 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 13 20:16:49.924794 systemd[1]: Reached target paths.target - Path Units.
Jan 13 20:16:49.924802 systemd[1]: Reached target slices.target - Slice Units.
Jan 13 20:16:49.924810 systemd[1]: Reached target swap.target - Swaps.
Jan 13 20:16:49.924818 systemd[1]: Reached target timers.target - Timer Units.
Jan 13 20:16:49.924826 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 13 20:16:49.924834 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 13 20:16:49.924842 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 13 20:16:49.924851 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 13 20:16:49.924859 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 13 20:16:49.924867 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 13 20:16:49.924875 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 13 20:16:49.924882 systemd[1]: Reached target sockets.target - Socket Units.
Jan 13 20:16:49.924890 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 13 20:16:49.924898 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 13 20:16:49.924906 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 13 20:16:49.924916 systemd[1]: Starting systemd-fsck-usr.service...
Jan 13 20:16:49.924924 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 13 20:16:49.924932 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 13 20:16:49.924939 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 13 20:16:49.924947 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 13 20:16:49.924955 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 13 20:16:49.924963 systemd[1]: Finished systemd-fsck-usr.service.
Jan 13 20:16:49.924996 systemd-journald[237]: Collecting audit messages is disabled.
Jan 13 20:16:49.925017 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 13 20:16:49.925027 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 13 20:16:49.925035 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 13 20:16:49.925055 systemd-journald[237]: Journal started
Jan 13 20:16:49.925080 systemd-journald[237]: Runtime Journal (/run/log/journal/74668b682ba44ad487d1d0aebfad4d79) is 8.0M, max 76.5M, 68.5M free.
Jan 13 20:16:49.900118 systemd-modules-load[238]: Inserted module 'overlay'
Jan 13 20:16:49.931459 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 13 20:16:49.932482 kernel: Bridge firewalling registered
Jan 13 20:16:49.932204 systemd-modules-load[238]: Inserted module 'br_netfilter'
Jan 13 20:16:49.934031 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 13 20:16:49.936102 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 13 20:16:49.943581 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 13 20:16:49.948485 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 13 20:16:49.950822 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 13 20:16:49.955978 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 13 20:16:49.975583 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 13 20:16:49.977096 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 13 20:16:49.978803 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 13 20:16:49.979817 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 13 20:16:49.987505 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 13 20:16:49.991515 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 13 20:16:50.005785 dracut-cmdline[274]: dracut-dracut-053
Jan 13 20:16:50.010216 dracut-cmdline[274]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=9798117b3b15ef802e3d618077f87253cc08e0d5280b8fe28b307e7558b7ebcc
Jan 13 20:16:50.027381 systemd-resolved[275]: Positive Trust Anchors:
Jan 13 20:16:50.027397 systemd-resolved[275]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 13 20:16:50.027427 systemd-resolved[275]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 13 20:16:50.033094 systemd-resolved[275]: Defaulting to hostname 'linux'.
Jan 13 20:16:50.034944 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 13 20:16:50.036150 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 13 20:16:50.109317 kernel: SCSI subsystem initialized
Jan 13 20:16:50.114282 kernel: Loading iSCSI transport class v2.0-870.
Jan 13 20:16:50.122367 kernel: iscsi: registered transport (tcp)
Jan 13 20:16:50.138405 kernel: iscsi: registered transport (qla4xxx)
Jan 13 20:16:50.138472 kernel: QLogic iSCSI HBA Driver
Jan 13 20:16:50.190436 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 13 20:16:50.195540 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 13 20:16:50.216378 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 13 20:16:50.216473 kernel: device-mapper: uevent: version 1.0.3
Jan 13 20:16:50.216499 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jan 13 20:16:50.269368 kernel: raid6: neonx8 gen() 15675 MB/s
Jan 13 20:16:50.286377 kernel: raid6: neonx4 gen() 15750 MB/s
Jan 13 20:16:50.303299 kernel: raid6: neonx2 gen() 13176 MB/s
Jan 13 20:16:50.320332 kernel: raid6: neonx1 gen() 10434 MB/s
Jan 13 20:16:50.337307 kernel: raid6: int64x8 gen() 6755 MB/s
Jan 13 20:16:50.354368 kernel: raid6: int64x4 gen() 7318 MB/s
Jan 13 20:16:50.371310 kernel: raid6: int64x2 gen() 6074 MB/s
Jan 13 20:16:50.388329 kernel: raid6: int64x1 gen() 5034 MB/s
Jan 13 20:16:50.388477 kernel: raid6: using algorithm neonx4 gen() 15750 MB/s
Jan 13 20:16:50.405330 kernel: raid6: .... xor() 12359 MB/s, rmw enabled
Jan 13 20:16:50.405407 kernel: raid6: using neon recovery algorithm
Jan 13 20:16:50.410310 kernel: xor: measuring software checksum speed
Jan 13 20:16:50.410396 kernel: 8regs : 21664 MB/sec
Jan 13 20:16:50.410410 kernel: 32regs : 21699 MB/sec
Jan 13 20:16:50.410422 kernel: arm64_neon : 27841 MB/sec
Jan 13 20:16:50.411311 kernel: xor: using function: arm64_neon (27841 MB/sec)
Jan 13 20:16:50.460329 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 13 20:16:50.477405 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 13 20:16:50.485720 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 13 20:16:50.497695 systemd-udevd[457]: Using default interface naming scheme 'v255'.
Jan 13 20:16:50.501109 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 13 20:16:50.510562 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 13 20:16:50.523924 dracut-pre-trigger[467]: rd.md=0: removing MD RAID activation
Jan 13 20:16:50.558574 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 13 20:16:50.565503 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 13 20:16:50.617337 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 13 20:16:50.623470 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 13 20:16:50.652719 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 13 20:16:50.655770 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 13 20:16:50.657397 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 13 20:16:50.658629 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 13 20:16:50.667475 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 13 20:16:50.686214 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 13 20:16:50.714314 kernel: ACPI: bus type USB registered
Jan 13 20:16:50.715421 kernel: usbcore: registered new interface driver usbfs
Jan 13 20:16:50.715501 kernel: usbcore: registered new interface driver hub
Jan 13 20:16:50.715515 kernel: usbcore: registered new device driver usb
Jan 13 20:16:50.719601 kernel: scsi host0: Virtio SCSI HBA
Jan 13 20:16:50.733749 kernel: scsi 0:0:0:0: CD-ROM QEMU QEMU CD-ROM 2.5+ PQ: 0 ANSI: 5
Jan 13 20:16:50.733842 kernel: scsi 0:0:0:1: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5
Jan 13 20:16:50.739805 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 13 20:16:50.741140 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 13 20:16:50.763655 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 13 20:16:50.766811 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 13 20:16:50.767218 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 13 20:16:50.768131 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 13 20:16:50.778825 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller
Jan 13 20:16:50.806353 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 1
Jan 13 20:16:50.806866 kernel: xhci_hcd 0000:02:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010
Jan 13 20:16:50.807061 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller
Jan 13 20:16:50.807192 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 2
Jan 13 20:16:50.807404 kernel: xhci_hcd 0000:02:00.0: Host supports USB 3.0 SuperSpeed
Jan 13 20:16:50.807528 kernel: hub 1-0:1.0: USB hub found
Jan 13 20:16:50.807702 kernel: hub 1-0:1.0: 4 ports detected
Jan 13 20:16:50.807840 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM.
Jan 13 20:16:50.808014 kernel: sr 0:0:0:0: Power-on or device reset occurred
Jan 13 20:16:50.811116 kernel: hub 2-0:1.0: USB hub found
Jan 13 20:16:50.811247 kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 16x/50x cd/rw xa/form2 cdda tray
Jan 13 20:16:50.812335 kernel: hub 2-0:1.0: 4 ports detected
Jan 13 20:16:50.812513 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Jan 13 20:16:50.812525 kernel: sr 0:0:0:0: Attached scsi CD-ROM sr0
Jan 13 20:16:50.781280 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 13 20:16:50.806682 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 13 20:16:50.816479 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 13 20:16:50.827705 kernel: sd 0:0:0:1: Power-on or device reset occurred
Jan 13 20:16:50.843639 kernel: sd 0:0:0:1: [sda] 80003072 512-byte logical blocks: (41.0 GB/38.1 GiB)
Jan 13 20:16:50.843767 kernel: sd 0:0:0:1: [sda] Write Protect is off
Jan 13 20:16:50.843854 kernel: sd 0:0:0:1: [sda] Mode Sense: 63 00 00 08
Jan 13 20:16:50.843946 kernel: sd 0:0:0:1: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Jan 13 20:16:50.844049 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jan 13 20:16:50.844061 kernel: GPT:17805311 != 80003071
Jan 13 20:16:50.844070 kernel: GPT:Alternate GPT header not at the end of the disk.
Jan 13 20:16:50.844079 kernel: GPT:17805311 != 80003071
Jan 13 20:16:50.844087 kernel: GPT: Use GNU Parted to correct GPT errors.
Jan 13 20:16:50.844096 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 13 20:16:50.844106 kernel: sd 0:0:0:1: [sda] Attached SCSI disk
Jan 13 20:16:50.855691 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 13 20:16:50.902300 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by (udev-worker) (515)
Jan 13 20:16:50.905327 kernel: BTRFS: device fsid 2be7cc1c-29d4-4496-b29b-8561323213d2 devid 1 transid 38 /dev/sda3 scanned by (udev-worker) (514)
Jan 13 20:16:50.910731 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM.
Jan 13 20:16:50.918635 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT.
Jan 13 20:16:50.925203 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
Jan 13 20:16:50.930609 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A.
Jan 13 20:16:50.932497 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A.
Jan 13 20:16:50.940570 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 13 20:16:50.950821 disk-uuid[577]: Primary Header is updated.
Jan 13 20:16:50.950821 disk-uuid[577]: Secondary Entries is updated.
Jan 13 20:16:50.950821 disk-uuid[577]: Secondary Header is updated.
Jan 13 20:16:50.957280 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 13 20:16:51.035295 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd
Jan 13 20:16:51.275304 kernel: usb 1-2: new high-speed USB device number 3 using xhci_hcd
Jan 13 20:16:51.411000 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input1
Jan 13 20:16:51.411089 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:02:00.0-1/input0
Jan 13 20:16:51.413281 kernel: input: QEMU QEMU USB Keyboard as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-2/1-2:1.0/0003:0627:0001.0002/input/input2
Jan 13 20:16:51.466515 kernel: hid-generic 0003:0627:0001.0002: input,hidraw1: USB HID v1.11 Keyboard [QEMU QEMU USB Keyboard] on usb-0000:02:00.0-2/input0
Jan 13 20:16:51.466791 kernel: usbcore: registered new interface driver usbhid
Jan 13 20:16:51.466806 kernel: usbhid: USB HID core driver
Jan 13 20:16:51.975274 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 13 20:16:51.977330 disk-uuid[578]: The operation has completed successfully.
Jan 13 20:16:52.030476 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 13 20:16:52.031288 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 13 20:16:52.052578 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 13 20:16:52.058431 sh[592]: Success
Jan 13 20:16:52.073334 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Jan 13 20:16:52.126293 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 13 20:16:52.135883 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 13 20:16:52.138289 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 13 20:16:52.160249 kernel: BTRFS info (device dm-0): first mount of filesystem 2be7cc1c-29d4-4496-b29b-8561323213d2
Jan 13 20:16:52.160330 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Jan 13 20:16:52.160341 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jan 13 20:16:52.160351 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 13 20:16:52.160370 kernel: BTRFS info (device dm-0): using free space tree
Jan 13 20:16:52.167350 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Jan 13 20:16:52.169163 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jan 13 20:16:52.171109 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 13 20:16:52.177523 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 13 20:16:52.182444 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 13 20:16:52.196297 kernel: BTRFS info (device sda6): first mount of filesystem 9f8ecb6c-ace6-4d16-8781-f4e964dc0779
Jan 13 20:16:52.196371 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Jan 13 20:16:52.196388 kernel: BTRFS info (device sda6): using free space tree
Jan 13 20:16:52.202186 kernel: BTRFS info (device sda6): enabling ssd optimizations
Jan 13 20:16:52.202300 kernel: BTRFS info (device sda6): auto enabling async discard
Jan 13 20:16:52.214706 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jan 13 20:16:52.216319 kernel: BTRFS info (device sda6): last unmount of filesystem 9f8ecb6c-ace6-4d16-8781-f4e964dc0779
Jan 13 20:16:52.223102 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 13 20:16:52.233496 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 13 20:16:52.322102 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 13 20:16:52.330503 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 13 20:16:52.332460 ignition[679]: Ignition 2.20.0
Jan 13 20:16:52.332470 ignition[679]: Stage: fetch-offline
Jan 13 20:16:52.332507 ignition[679]: no configs at "/usr/lib/ignition/base.d"
Jan 13 20:16:52.332516 ignition[679]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Jan 13 20:16:52.332662 ignition[679]: parsed url from cmdline: ""
Jan 13 20:16:52.332665 ignition[679]: no config URL provided
Jan 13 20:16:52.332669 ignition[679]: reading system config file "/usr/lib/ignition/user.ign"
Jan 13 20:16:52.335457 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 13 20:16:52.332676 ignition[679]: no config at "/usr/lib/ignition/user.ign"
Jan 13 20:16:52.332681 ignition[679]: failed to fetch config: resource requires networking
Jan 13 20:16:52.332980 ignition[679]: Ignition finished successfully
Jan 13 20:16:52.364506 systemd-networkd[779]: lo: Link UP
Jan 13 20:16:52.364517 systemd-networkd[779]: lo: Gained carrier
Jan 13 20:16:52.366382 systemd-networkd[779]: Enumeration completed
Jan 13 20:16:52.366622 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 13 20:16:52.367341 systemd[1]: Reached target network.target - Network.
Jan 13 20:16:52.368999 systemd-networkd[779]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 13 20:16:52.369003 systemd-networkd[779]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 13 20:16:52.370007 systemd-networkd[779]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 13 20:16:52.370010 systemd-networkd[779]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 13 20:16:52.370612 systemd-networkd[779]: eth0: Link UP
Jan 13 20:16:52.370616 systemd-networkd[779]: eth0: Gained carrier
Jan 13 20:16:52.370623 systemd-networkd[779]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 13 20:16:52.376551 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jan 13 20:16:52.376736 systemd-networkd[779]: eth1: Link UP
Jan 13 20:16:52.376739 systemd-networkd[779]: eth1: Gained carrier
Jan 13 20:16:52.376750 systemd-networkd[779]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 13 20:16:52.389898 ignition[782]: Ignition 2.20.0
Jan 13 20:16:52.389908 ignition[782]: Stage: fetch
Jan 13 20:16:52.390129 ignition[782]: no configs at "/usr/lib/ignition/base.d"
Jan 13 20:16:52.390139 ignition[782]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Jan 13 20:16:52.390228 ignition[782]: parsed url from cmdline: ""
Jan 13 20:16:52.390231 ignition[782]: no config URL provided
Jan 13 20:16:52.390235 ignition[782]: reading system config file "/usr/lib/ignition/user.ign"
Jan 13 20:16:52.390242 ignition[782]: no config at "/usr/lib/ignition/user.ign"
Jan 13 20:16:52.390343 ignition[782]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #1
Jan 13 20:16:52.391077 ignition[782]: GET error: Get "http://169.254.169.254/hetzner/v1/userdata": dial tcp 169.254.169.254:80: connect: network is unreachable
Jan 13 20:16:52.402383 systemd-networkd[779]: eth1: DHCPv4 address 10.0.0.3/32, gateway 10.0.0.1 acquired from 10.0.0.1
Jan 13 20:16:52.435391 systemd-networkd[779]: eth0: DHCPv4 address 138.199.153.210/32, gateway 172.31.1.1 acquired from 172.31.1.1
Jan 13 20:16:52.591299 ignition[782]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #2
Jan 13 20:16:52.596602 ignition[782]: GET result: OK
Jan 13 20:16:52.596693 ignition[782]: parsing config with SHA512: 4d0a62ce4ecd9e6a2f2cbf94705979d8cc1d66c624476a3ac45a9858a2b28a23fdce0150013e248ddd624f8d4d553e2bc6eced2a28cbf1bdd8e7066356898a41
Jan 13 20:16:52.601945 unknown[782]: fetched base config from "system"
Jan 13 20:16:52.601955 unknown[782]: fetched base config from "system"
Jan 13 20:16:52.602373 ignition[782]: fetch: fetch complete
Jan 13 20:16:52.601961 unknown[782]: fetched user config from "hetzner"
Jan 13 20:16:52.602377 ignition[782]: fetch: fetch passed
Jan 13 20:16:52.605678 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jan 13 20:16:52.602426 ignition[782]: Ignition finished successfully
Jan 13 20:16:52.611605 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 13 20:16:52.627229 ignition[790]: Ignition 2.20.0
Jan 13 20:16:52.627241 ignition[790]: Stage: kargs
Jan 13 20:16:52.627442 ignition[790]: no configs at "/usr/lib/ignition/base.d"
Jan 13 20:16:52.627453 ignition[790]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Jan 13 20:16:52.628468 ignition[790]: kargs: kargs passed
Jan 13 20:16:52.628523 ignition[790]: Ignition finished successfully
Jan 13 20:16:52.631154 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 13 20:16:52.635531 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 13 20:16:52.651582 ignition[796]: Ignition 2.20.0
Jan 13 20:16:52.651599 ignition[796]: Stage: disks
Jan 13 20:16:52.651859 ignition[796]: no configs at "/usr/lib/ignition/base.d"
Jan 13 20:16:52.651874 ignition[796]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Jan 13 20:16:52.653450 ignition[796]: disks: disks passed
Jan 13 20:16:52.655330 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 13 20:16:52.653531 ignition[796]: Ignition finished successfully
Jan 13 20:16:52.656576 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 13 20:16:52.657438 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 13 20:16:52.658347 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 13 20:16:52.659451 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 13 20:16:52.660577 systemd[1]: Reached target basic.target - Basic System.
Jan 13 20:16:52.666515 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 13 20:16:52.689844 systemd-fsck[804]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks
Jan 13 20:16:52.693611 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 13 20:16:52.700407 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 13 20:16:52.743365 kernel: EXT4-fs (sda9): mounted filesystem f9a95e53-2d63-4443-b523-cb2108fb48f6 r/w with ordered data mode. Quota mode: none.
Jan 13 20:16:52.744984 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 13 20:16:52.746954 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 13 20:16:52.762571 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 13 20:16:52.766536 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 13 20:16:52.770516 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Jan 13 20:16:52.771167 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 13 20:16:52.771199 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 13 20:16:52.780284 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (812)
Jan 13 20:16:52.782596 kernel: BTRFS info (device sda6): first mount of filesystem 9f8ecb6c-ace6-4d16-8781-f4e964dc0779
Jan 13 20:16:52.782641 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Jan 13 20:16:52.783830 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 13 20:16:52.788686 kernel: BTRFS info (device sda6): using free space tree
Jan 13 20:16:52.798538 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 13 20:16:52.801354 kernel: BTRFS info (device sda6): enabling ssd optimizations
Jan 13 20:16:52.801398 kernel: BTRFS info (device sda6): auto enabling async discard
Jan 13 20:16:52.813060 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 13 20:16:52.847212 coreos-metadata[814]: Jan 13 20:16:52.846 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/hostname: Attempt #1
Jan 13 20:16:52.849086 coreos-metadata[814]: Jan 13 20:16:52.848 INFO Fetch successful
Jan 13 20:16:52.851644 coreos-metadata[814]: Jan 13 20:16:52.851 INFO wrote hostname ci-4186-1-0-7-7ab547e2a5 to /sysroot/etc/hostname
Jan 13 20:16:52.854784 initrd-setup-root[839]: cut: /sysroot/etc/passwd: No such file or directory
Jan 13 20:16:52.855910 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Jan 13 20:16:52.861395 initrd-setup-root[847]: cut: /sysroot/etc/group: No such file or directory
Jan 13 20:16:52.867076 initrd-setup-root[854]: cut: /sysroot/etc/shadow: No such file or directory
Jan 13 20:16:52.872401 initrd-setup-root[861]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 13 20:16:52.981894 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 13 20:16:52.988455 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 13 20:16:52.991510 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 13 20:16:53.001329 kernel: BTRFS info (device sda6): last unmount of filesystem 9f8ecb6c-ace6-4d16-8781-f4e964dc0779
Jan 13 20:16:53.020304 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 13 20:16:53.025338 ignition[929]: INFO : Ignition 2.20.0
Jan 13 20:16:53.025338 ignition[929]: INFO : Stage: mount
Jan 13 20:16:53.026368 ignition[929]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 13 20:16:53.026368 ignition[929]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Jan 13 20:16:53.029092 ignition[929]: INFO : mount: mount passed
Jan 13 20:16:53.029092 ignition[929]: INFO : Ignition finished successfully
Jan 13 20:16:53.028557 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 13 20:16:53.038460 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 13 20:16:53.158624 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 13 20:16:53.166928 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 13 20:16:53.178336 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sda6 scanned by mount (941)
Jan 13 20:16:53.180450 kernel: BTRFS info (device sda6): first mount of filesystem 9f8ecb6c-ace6-4d16-8781-f4e964dc0779
Jan 13 20:16:53.180593 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Jan 13 20:16:53.180620 kernel: BTRFS info (device sda6): using free space tree
Jan 13 20:16:53.183669 kernel: BTRFS info (device sda6): enabling ssd optimizations
Jan 13 20:16:53.183738 kernel: BTRFS info (device sda6): auto enabling async discard
Jan 13 20:16:53.186840 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 13 20:16:53.212749 ignition[958]: INFO : Ignition 2.20.0
Jan 13 20:16:53.212749 ignition[958]: INFO : Stage: files
Jan 13 20:16:53.215440 ignition[958]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 13 20:16:53.215440 ignition[958]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Jan 13 20:16:53.215440 ignition[958]: DEBUG : files: compiled without relabeling support, skipping
Jan 13 20:16:53.218882 ignition[958]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jan 13 20:16:53.218882 ignition[958]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jan 13 20:16:53.223228 ignition[958]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 13 20:16:53.224232 ignition[958]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jan 13 20:16:53.224232 ignition[958]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 13 20:16:53.223799 unknown[958]: wrote ssh authorized keys file for user: core
Jan 13 20:16:53.227493 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Jan 13 20:16:53.227493 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Jan 13 20:16:53.317471 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jan 13 20:16:53.421380 systemd-networkd[779]: eth1: Gained IPv6LL
Jan 13 20:16:53.578875 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Jan 13 20:16:53.580824 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jan 13 20:16:53.580824 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1
Jan 13 20:16:53.741480 systemd-networkd[779]: eth0: Gained IPv6LL
Jan 13 20:16:54.170941 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Jan 13 20:16:54.298726 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jan 13 20:16:54.298726 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Jan 13 20:16:54.298726 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Jan 13 20:16:54.298726 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Jan 13 20:16:54.298726 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jan 13 20:16:54.298726 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 13 20:16:54.298726 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 13 20:16:54.298726 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 13 20:16:54.298726 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 13 20:16:54.309632 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jan 13 20:16:54.309632 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jan 13 20:16:54.309632 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
Jan 13 20:16:54.309632 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
Jan 13 20:16:54.309632 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
Jan 13 20:16:54.309632 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-arm64.raw: attempt #1
Jan 13 20:16:54.847847 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Jan 13 20:16:55.315386 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
Jan 13 20:16:55.315386 ignition[958]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Jan 13 20:16:55.317975 ignition[958]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 13 20:16:55.317975 ignition[958]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 13 20:16:55.317975 ignition[958]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Jan 13 20:16:55.317975 ignition[958]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
Jan 13 20:16:55.317975 ignition[958]: INFO : files: op(e): op(f): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Jan 13 20:16:55.317975 ignition[958]: INFO : files: op(e): op(f): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Jan 13 20:16:55.317975 ignition[958]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
Jan 13 20:16:55.317975 ignition[958]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service"
Jan 13 20:16:55.317975 ignition[958]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service"
Jan 13 20:16:55.317975 ignition[958]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json"
Jan 13 20:16:55.331768 ignition[958]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jan 13 20:16:55.331768 ignition[958]: INFO : files: files passed
Jan 13 20:16:55.331768 ignition[958]: INFO : Ignition finished successfully
Jan 13 20:16:55.321459 systemd[1]: Finished ignition-files.service - Ignition (files).
Jan 13 20:16:55.328794 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jan 13 20:16:55.332925 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jan 13 20:16:55.337830 systemd[1]: ignition-quench.service: Deactivated successfully.
Jan 13 20:16:55.337938 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jan 13 20:16:55.345772 initrd-setup-root-after-ignition[986]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 13 20:16:55.345772 initrd-setup-root-after-ignition[986]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jan 13 20:16:55.348322 initrd-setup-root-after-ignition[990]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 13 20:16:55.351351 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 13 20:16:55.353179 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jan 13 20:16:55.365547 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jan 13 20:16:55.395287 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 13 20:16:55.395415 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jan 13 20:16:55.397214 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jan 13 20:16:55.397872 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jan 13 20:16:55.399232 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jan 13 20:16:55.400449 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jan 13 20:16:55.419355 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 13 20:16:55.425469 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jan 13 20:16:55.437677 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jan 13 20:16:55.439072 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 13 20:16:55.440450 systemd[1]: Stopped target timers.target - Timer Units.
Jan 13 20:16:55.441460 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 13 20:16:55.441590 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 13 20:16:55.443540 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jan 13 20:16:55.444739 systemd[1]: Stopped target basic.target - Basic System.
Jan 13 20:16:55.445314 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jan 13 20:16:55.446600 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 13 20:16:55.447981 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jan 13 20:16:55.449538 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jan 13 20:16:55.450591 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 13 20:16:55.451666 systemd[1]: Stopped target sysinit.target - System Initialization.
Jan 13 20:16:55.452774 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jan 13 20:16:55.453767 systemd[1]: Stopped target swap.target - Swaps.
Jan 13 20:16:55.454670 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jan 13 20:16:55.454798 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jan 13 20:16:55.456139 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jan 13 20:16:55.456844 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 13 20:16:55.457915 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jan 13 20:16:55.461377 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 13 20:16:55.463305 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jan 13 20:16:55.463467 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jan 13 20:16:55.465773 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jan 13 20:16:55.465924 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 13 20:16:55.467671 systemd[1]: ignition-files.service: Deactivated successfully.
Jan 13 20:16:55.467767 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jan 13 20:16:55.468868 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Jan 13 20:16:55.468961 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Jan 13 20:16:55.477912 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jan 13 20:16:55.480560 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jan 13 20:16:55.482411 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 13 20:16:55.487387 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jan 13 20:16:55.489407 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jan 13 20:16:55.489578 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 13 20:16:55.495515 ignition[1010]: INFO : Ignition 2.20.0
Jan 13 20:16:55.495515 ignition[1010]: INFO : Stage: umount
Jan 13 20:16:55.495515 ignition[1010]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 13 20:16:55.495515 ignition[1010]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Jan 13 20:16:55.493441 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jan 13 20:16:55.503045 ignition[1010]: INFO : umount: umount passed
Jan 13 20:16:55.503045 ignition[1010]: INFO : Ignition finished successfully
Jan 13 20:16:55.493553 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 13 20:16:55.499749 systemd[1]: ignition-mount.service: Deactivated successfully.
Jan 13 20:16:55.501285 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jan 13 20:16:55.503978 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jan 13 20:16:55.504110 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jan 13 20:16:55.508870 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jan 13 20:16:55.509340 systemd[1]: ignition-disks.service: Deactivated successfully.
Jan 13 20:16:55.509380 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jan 13 20:16:55.513404 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jan 13 20:16:55.513491 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jan 13 20:16:55.517134 systemd[1]: ignition-fetch.service: Deactivated successfully.
Jan 13 20:16:55.517179 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Jan 13 20:16:55.518210 systemd[1]: Stopped target network.target - Network.
Jan 13 20:16:55.519282 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jan 13 20:16:55.519336 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 13 20:16:55.521545 systemd[1]: Stopped target paths.target - Path Units.
Jan 13 20:16:55.522289 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jan 13 20:16:55.522343 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 13 20:16:55.523906 systemd[1]: Stopped target slices.target - Slice Units.
Jan 13 20:16:55.524882 systemd[1]: Stopped target sockets.target - Socket Units.
Jan 13 20:16:55.525377 systemd[1]: iscsid.socket: Deactivated successfully.
Jan 13 20:16:55.525419 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jan 13 20:16:55.526228 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jan 13 20:16:55.526271 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 13 20:16:55.527160 systemd[1]: ignition-setup.service: Deactivated successfully.
Jan 13 20:16:55.527209 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jan 13 20:16:55.530361 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jan 13 20:16:55.530421 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jan 13 20:16:55.534304 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jan 13 20:16:55.535772 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jan 13 20:16:55.541320 systemd-networkd[779]: eth1: DHCPv6 lease lost
Jan 13 20:16:55.545323 systemd-networkd[779]: eth0: DHCPv6 lease lost
Jan 13 20:16:55.549779 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jan 13 20:16:55.549944 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jan 13 20:16:55.554249 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jan 13 20:16:55.554360 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jan 13 20:16:55.557920 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jan 13 20:16:55.557979 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jan 13 20:16:55.564603 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jan 13 20:16:55.567957 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jan 13 20:16:55.568067 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 13 20:16:55.569411 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 13 20:16:55.569462 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jan 13 20:16:55.570692 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 13 20:16:55.570737 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jan 13 20:16:55.572627 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jan 13 20:16:55.572672 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 13 20:16:55.574599 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 13 20:16:55.585954 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jan 13 20:16:55.586888 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jan 13 20:16:55.587947 systemd[1]: network-cleanup.service: Deactivated successfully.
Jan 13 20:16:55.588072 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jan 13 20:16:55.591693 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jan 13 20:16:55.591756 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jan 13 20:16:55.595149 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jan 13 20:16:55.595342 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 13 20:16:55.596892 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jan 13 20:16:55.596942 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jan 13 20:16:55.597797 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jan 13 20:16:55.597834 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 13 20:16:55.598910 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jan 13 20:16:55.598964 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jan 13 20:16:55.600564 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jan 13 20:16:55.600615 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jan 13 20:16:55.602137 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 13 20:16:55.602189 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 13 20:16:55.610677 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jan 13 20:16:55.611279 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jan 13 20:16:55.611347 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 13 20:16:55.612048 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 13 20:16:55.612099 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 13 20:16:55.618310 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jan 13 20:16:55.618430 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jan 13 20:16:55.619892 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jan 13 20:16:55.631518 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jan 13 20:16:55.646876 systemd[1]: Switching root.
Jan 13 20:16:55.678426 systemd-journald[237]: Journal stopped
Jan 13 20:16:56.610203 systemd-journald[237]: Received SIGTERM from PID 1 (systemd).
Jan 13 20:16:56.612332 kernel: SELinux: policy capability network_peer_controls=1
Jan 13 20:16:56.612359 kernel: SELinux: policy capability open_perms=1
Jan 13 20:16:56.612369 kernel: SELinux: policy capability extended_socket_class=1
Jan 13 20:16:56.612378 kernel: SELinux: policy capability always_check_network=0
Jan 13 20:16:56.612389 kernel: SELinux: policy capability cgroup_seclabel=1
Jan 13 20:16:56.612399 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jan 13 20:16:56.612409 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jan 13 20:16:56.612421 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jan 13 20:16:56.612431 kernel: audit: type=1403 audit(1736799415.812:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jan 13 20:16:56.612447 systemd[1]: Successfully loaded SELinux policy in 35.219ms.
Jan 13 20:16:56.612460 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 10.908ms.
Jan 13 20:16:56.612471 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 13 20:16:56.612482 systemd[1]: Detected virtualization kvm.
Jan 13 20:16:56.612493 systemd[1]: Detected architecture arm64.
Jan 13 20:16:56.612503 systemd[1]: Detected first boot.
Jan 13 20:16:56.612515 systemd[1]: Hostname set to <ci-4186-1-0-7-7ab547e2a5>.
Jan 13 20:16:56.612525 systemd[1]: Initializing machine ID from VM UUID.
Jan 13 20:16:56.612536 zram_generator::config[1052]: No configuration found.
Jan 13 20:16:56.612547 systemd[1]: Populated /etc with preset unit settings.
Jan 13 20:16:56.612558 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jan 13 20:16:56.612568 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jan 13 20:16:56.612578 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jan 13 20:16:56.612592 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jan 13 20:16:56.612605 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jan 13 20:16:56.612615 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jan 13 20:16:56.612625 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jan 13 20:16:56.612635 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jan 13 20:16:56.612645 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jan 13 20:16:56.612655 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jan 13 20:16:56.612665 systemd[1]: Created slice user.slice - User and Session Slice.
Jan 13 20:16:56.612675 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 13 20:16:56.612686 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 13 20:16:56.612697 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jan 13 20:16:56.612707 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jan 13 20:16:56.612717 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jan 13 20:16:56.612727 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 13 20:16:56.612737 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Jan 13 20:16:56.612747 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 13 20:16:56.612757 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jan 13 20:16:56.612768 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jan 13 20:16:56.612779 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jan 13 20:16:56.612794 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jan 13 20:16:56.612804 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 13 20:16:56.612818 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 13 20:16:56.612828 systemd[1]: Reached target slices.target - Slice Units.
Jan 13 20:16:56.612839 systemd[1]: Reached target swap.target - Swaps.
Jan 13 20:16:56.612849 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jan 13 20:16:56.612862 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jan 13 20:16:56.612872 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 13 20:16:56.612882 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 13 20:16:56.612892 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 13 20:16:56.612903 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jan 13 20:16:56.612913 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jan 13 20:16:56.612924 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jan 13 20:16:56.612934 systemd[1]: Mounting media.mount - External Media Directory...
Jan 13 20:16:56.612945 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jan 13 20:16:56.612956 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jan 13 20:16:56.612966 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jan 13 20:16:56.612977 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jan 13 20:16:56.613028 systemd[1]: Reached target machines.target - Containers.
Jan 13 20:16:56.613044 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jan 13 20:16:56.613060 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 13 20:16:56.613073 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 13 20:16:56.613083 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jan 13 20:16:56.613093 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 13 20:16:56.613103 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 13 20:16:56.613114 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 13 20:16:56.613124 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jan 13 20:16:56.613135 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 13 20:16:56.613145 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jan 13 20:16:56.613157 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jan 13 20:16:56.613167 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jan 13 20:16:56.613178 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jan 13 20:16:56.613188 systemd[1]: Stopped systemd-fsck-usr.service.
Jan 13 20:16:56.613198 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 13 20:16:56.613208 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 13 20:16:56.613219 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jan 13 20:16:56.613229 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jan 13 20:16:56.613238 kernel: loop: module loaded
Jan 13 20:16:56.613250 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 13 20:16:56.613412 systemd[1]: verity-setup.service: Deactivated successfully.
Jan 13 20:16:56.613425 systemd[1]: Stopped verity-setup.service.
Jan 13 20:16:56.613435 kernel: fuse: init (API version 7.39)
Jan 13 20:16:56.613445 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jan 13 20:16:56.613455 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jan 13 20:16:56.613465 systemd[1]: Mounted media.mount - External Media Directory.
Jan 13 20:16:56.613476 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jan 13 20:16:56.613486 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jan 13 20:16:56.613500 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jan 13 20:16:56.613511 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 13 20:16:56.613521 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 13 20:16:56.613531 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jan 13 20:16:56.613542 kernel: ACPI: bus type drm_connector registered
Jan 13 20:16:56.613553 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 13 20:16:56.613563 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 13 20:16:56.613573 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jan 13 20:16:56.613583 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 13 20:16:56.613594 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 13 20:16:56.613604 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 13 20:16:56.613616 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 13 20:16:56.613626 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jan 13 20:16:56.613637 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jan 13 20:16:56.613647 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 13 20:16:56.613657 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 13 20:16:56.613668 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 13 20:16:56.613678 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jan 13 20:16:56.613688 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jan 13 20:16:56.613700 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 13 20:16:56.613711 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 13 20:16:56.613721 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jan 13 20:16:56.613732 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jan 13 20:16:56.613774 systemd-journald[1126]: Collecting audit messages is disabled.
Jan 13 20:16:56.613803 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jan 13 20:16:56.613814 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jan 13 20:16:56.613826 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jan 13 20:16:56.613838 systemd-journald[1126]: Journal started
Jan 13 20:16:56.613864 systemd-journald[1126]: Runtime Journal (/run/log/journal/74668b682ba44ad487d1d0aebfad4d79) is 8.0M, max 76.5M, 68.5M free.
Jan 13 20:16:56.318408 systemd[1]: Queued start job for default target multi-user.target.
Jan 13 20:16:56.343127 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Jan 13 20:16:56.616387 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jan 13 20:16:56.616413 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 13 20:16:56.343784 systemd[1]: systemd-journald.service: Deactivated successfully.
Jan 13 20:16:56.619542 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Jan 13 20:16:56.627604 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jan 13 20:16:56.636270 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jan 13 20:16:56.636340 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 13 20:16:56.646273 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jan 13 20:16:56.646342 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 13 20:16:56.654545 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jan 13 20:16:56.657313 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jan 13 20:16:56.663573 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jan 13 20:16:56.667363 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 13 20:16:56.677430 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 13 20:16:56.678975 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jan 13 20:16:56.699452 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 13 20:16:56.708523 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jan 13 20:16:56.719289 kernel: loop0: detected capacity change from 0 to 189592
Jan 13 20:16:56.732757 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jan 13 20:16:56.741460 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jan 13 20:16:56.751456 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Jan 13 20:16:56.756457 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jan 13 20:16:56.757507 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Jan 13 20:16:56.766394 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jan 13 20:16:56.780782 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 13 20:16:56.783698 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jan 13 20:16:56.786226 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Jan 13 20:16:56.797077 systemd-journald[1126]: Time spent on flushing to /var/log/journal/74668b682ba44ad487d1d0aebfad4d79 is 38.709ms for 1139 entries.
Jan 13 20:16:56.797077 systemd-journald[1126]: System Journal (/var/log/journal/74668b682ba44ad487d1d0aebfad4d79) is 8.0M, max 584.8M, 576.8M free.
Jan 13 20:16:56.857343 systemd-journald[1126]: Received client request to flush runtime journal.
Jan 13 20:16:56.857415 kernel: loop1: detected capacity change from 0 to 8
Jan 13 20:16:56.858772 kernel: loop2: detected capacity change from 0 to 113552
Jan 13 20:16:56.810701 udevadm[1180]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Jan 13 20:16:56.844853 systemd-tmpfiles[1182]: ACLs are not supported, ignoring.
Jan 13 20:16:56.844868 systemd-tmpfiles[1182]: ACLs are not supported, ignoring.
Jan 13 20:16:56.852823 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 13 20:16:56.861717 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jan 13 20:16:56.874297 kernel: loop3: detected capacity change from 0 to 116784
Jan 13 20:16:56.938284 kernel: loop4: detected capacity change from 0 to 189592
Jan 13 20:16:56.970349 kernel: loop5: detected capacity change from 0 to 8
Jan 13 20:16:56.971404 kernel: loop6: detected capacity change from 0 to 113552
Jan 13 20:16:56.987317 kernel: loop7: detected capacity change from 0 to 116784
Jan 13 20:16:57.005668 (sd-merge)[1193]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-hetzner'.
Jan 13 20:16:57.006654 (sd-merge)[1193]: Merged extensions into '/usr'.
Jan 13 20:16:57.017072 systemd[1]: Reloading requested from client PID 1149 ('systemd-sysext') (unit systemd-sysext.service)...
Jan 13 20:16:57.017365 systemd[1]: Reloading...
Jan 13 20:16:57.133356 zram_generator::config[1220]: No configuration found.
Jan 13 20:16:57.250129 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 13 20:16:57.252331 ldconfig[1146]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jan 13 20:16:57.296890 systemd[1]: Reloading finished in 279 ms.
Jan 13 20:16:57.320396 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jan 13 20:16:57.321694 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jan 13 20:16:57.334537 systemd[1]: Starting ensure-sysext.service...
Jan 13 20:16:57.337413 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 13 20:16:57.353449 systemd[1]: Reloading requested from client PID 1256 ('systemctl') (unit ensure-sysext.service)...
Jan 13 20:16:57.353474 systemd[1]: Reloading...
Jan 13 20:16:57.381734 systemd-tmpfiles[1257]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jan 13 20:16:57.381955 systemd-tmpfiles[1257]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jan 13 20:16:57.382655 systemd-tmpfiles[1257]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jan 13 20:16:57.382848 systemd-tmpfiles[1257]: ACLs are not supported, ignoring.
Jan 13 20:16:57.382895 systemd-tmpfiles[1257]: ACLs are not supported, ignoring.
Jan 13 20:16:57.385862 systemd-tmpfiles[1257]: Detected autofs mount point /boot during canonicalization of boot.
Jan 13 20:16:57.385878 systemd-tmpfiles[1257]: Skipping /boot
Jan 13 20:16:57.403014 systemd-tmpfiles[1257]: Detected autofs mount point /boot during canonicalization of boot.
Jan 13 20:16:57.403030 systemd-tmpfiles[1257]: Skipping /boot
Jan 13 20:16:57.424284 zram_generator::config[1284]: No configuration found.
Jan 13 20:16:57.548092 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 13 20:16:57.594079 systemd[1]: Reloading finished in 240 ms.
Jan 13 20:16:57.612629 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jan 13 20:16:57.618862 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 13 20:16:57.633559 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jan 13 20:16:57.639608 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jan 13 20:16:57.645560 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jan 13 20:16:57.650600 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 13 20:16:57.662718 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 13 20:16:57.675592 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jan 13 20:16:57.695421 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jan 13 20:16:57.699239 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 13 20:16:57.703315 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 13 20:16:57.713877 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 13 20:16:57.720627 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 13 20:16:57.721688 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 13 20:16:57.723438 systemd-udevd[1333]: Using default interface naming scheme 'v255'.
Jan 13 20:16:57.725188 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 13 20:16:57.727079 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 13 20:16:57.731453 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jan 13 20:16:57.744217 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jan 13 20:16:57.754929 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 13 20:16:57.757355 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 13 20:16:57.759227 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 13 20:16:57.760455 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 13 20:16:57.763126 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 13 20:16:57.773712 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 13 20:16:57.774530 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 13 20:16:57.774657 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 13 20:16:57.781613 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 13 20:16:57.782923 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 13 20:16:57.789566 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 13 20:16:57.796584 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 13 20:16:57.804473 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 13 20:16:57.811591 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 13 20:16:57.812621 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 13 20:16:57.817740 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 13 20:16:57.829336 systemd[1]: Finished ensure-sysext.service. Jan 13 20:16:57.836923 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 13 20:16:57.837133 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 13 20:16:57.845185 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jan 13 20:16:57.847341 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 13 20:16:57.863867 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 13 20:16:57.866318 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 13 20:16:57.868588 augenrules[1381]: No rules Jan 13 20:16:57.870638 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 13 20:16:57.870869 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 13 20:16:57.881570 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 13 20:16:57.883555 systemd[1]: audit-rules.service: Deactivated successfully. Jan 13 20:16:57.883793 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 13 20:16:57.889719 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 13 20:16:57.902418 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 13 20:16:57.902633 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 13 20:16:57.903710 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 13 20:16:57.904355 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 13 20:16:57.908378 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 13 20:16:57.974589 systemd-networkd[1374]: lo: Link UP Jan 13 20:16:57.974916 systemd-networkd[1374]: lo: Gained carrier Jan 13 20:16:57.978866 systemd-networkd[1374]: Enumeration completed Jan 13 20:16:57.979787 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 13 20:16:57.990535 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... 
Jan 13 20:16:58.014617 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jan 13 20:16:58.015391 systemd[1]: Reached target time-set.target - System Time Set. Jan 13 20:16:58.041460 systemd-resolved[1330]: Positive Trust Anchors: Jan 13 20:16:58.041481 systemd-resolved[1330]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 13 20:16:58.041513 systemd-resolved[1330]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 13 20:16:58.046934 systemd-resolved[1330]: Using system hostname 'ci-4186-1-0-7-7ab547e2a5'. Jan 13 20:16:58.050147 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 13 20:16:58.051557 systemd[1]: Reached target network.target - Network. Jan 13 20:16:58.052732 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 13 20:16:58.054878 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Jan 13 20:16:58.091698 systemd-networkd[1374]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 13 20:16:58.092072 systemd-networkd[1374]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 13 20:16:58.094324 systemd-networkd[1374]: eth0: Link UP Jan 13 20:16:58.094334 systemd-networkd[1374]: eth0: Gained carrier Jan 13 20:16:58.094359 systemd-networkd[1374]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 13 20:16:58.116338 kernel: mousedev: PS/2 mouse device common for all mice Jan 13 20:16:58.120880 systemd-networkd[1374]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 13 20:16:58.120901 systemd-networkd[1374]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 13 20:16:58.123617 systemd-networkd[1374]: eth1: Link UP Jan 13 20:16:58.123630 systemd-networkd[1374]: eth1: Gained carrier Jan 13 20:16:58.123652 systemd-networkd[1374]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 13 20:16:58.145785 systemd-networkd[1374]: eth0: DHCPv4 address 138.199.153.210/32, gateway 172.31.1.1 acquired from 172.31.1.1 Jan 13 20:16:58.150523 systemd-networkd[1374]: eth1: DHCPv4 address 10.0.0.3/32, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 13 20:16:58.151482 systemd-timesyncd[1378]: Network configuration changed, trying to establish connection. Jan 13 20:16:58.177843 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (1359) Jan 13 20:16:58.195477 systemd[1]: Condition check resulted in dev-virtio\x2dports-org.qemu.guest_agent.0.device - /dev/virtio-ports/org.qemu.guest_agent.0 being skipped. Jan 13 20:16:58.195602 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. 
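The "Positive Trust Anchors" entry above is the DNSSEC root DS record (RFC 4034) that systemd-resolved validates against. Its fields are the key tag, the signing algorithm (8 = RSA/SHA-256), the digest type (2 = SHA-256) and the digest itself; a small sketch pulling them apart:

```python
# The root trust anchor from the log, split into its DS-record fields
# (RFC 4034): owner, class, type, key tag, algorithm, digest type, digest.
ds = ". IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d"
owner, _cls, _rtype, key_tag, algorithm, digest_type, digest = ds.split()
assert digest_type == "2" and len(bytes.fromhex(digest)) == 32  # SHA-256 digest: 32 bytes
print(f"zone={owner!r} key_tag={key_tag} algorithm={algorithm} (RSA/SHA-256)")
```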
Jan 13 20:16:58.203553 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 13 20:16:58.222581 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 13 20:16:58.227513 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 13 20:16:58.228196 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 13 20:16:58.228242 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 13 20:16:58.229673 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 13 20:16:58.229841 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 13 20:16:58.230849 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 13 20:16:58.231028 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 13 20:16:58.236614 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 13 20:16:58.271668 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 13 20:16:58.271842 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 13 20:16:58.273587 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 13 20:16:58.286955 kernel: [drm] pci: virtio-gpu-pci detected at 0000:00:01.0 Jan 13 20:16:58.287091 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Jan 13 20:16:58.287109 kernel: [drm] features: -context_init Jan 13 20:16:58.288290 kernel: [drm] number of scanouts: 1 Jan 13 20:16:58.288401 kernel: [drm] number of cap sets: 0 Jan 13 20:16:58.289470 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 13 20:16:58.291630 systemd-timesyncd[1378]: Contacted time server 78.47.56.71:123 (0.flatcar.pool.ntp.org). Jan 13 20:16:58.291790 systemd-timesyncd[1378]: Initial clock synchronization to Mon 2025-01-13 20:16:58.568327 UTC. Jan 13 20:16:58.294622 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Jan 13 20:16:58.298306 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:01.0 on minor 0 Jan 13 20:16:58.301452 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 13 20:16:58.309220 kernel: Console: switching to colour frame buffer device 160x50 Jan 13 20:16:58.314297 kernel: virtio-pci 0000:00:01.0: [drm] fb0: virtio_gpudrmfb frame buffer device Jan 13 20:16:58.330323 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 13 20:16:58.330624 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 13 20:16:58.339345 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 13 20:16:58.342299 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 13 20:16:58.404307 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 13 20:16:58.422066 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jan 13 20:16:58.428673 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... 
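systemd-timesyncd logged the NTP exchange at journal time 20:16:58.291790 and then stepped the clock to 20:16:58.568327 UTC, i.e. forward by roughly a quarter second. The arithmetic, as a sketch (approximate, since the journal timestamp marks when the message was written, not when the step landed):

```python
from datetime import datetime

logged = datetime.fromisoformat("2025-01-13 20:16:58.291790")   # journal time of the sync message
synced = datetime.fromisoformat("2025-01-13 20:16:58.568327")   # time the clock was set to
print(f"clock stepped forward ~{(synced - logged).total_seconds() * 1000:.0f} ms")  # ~277 ms
```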
Jan 13 20:16:58.446678 lvm[1449]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 13 20:16:58.473448 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 13 20:16:58.475389 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 13 20:16:58.476332 systemd[1]: Reached target sysinit.target - System Initialization. Jan 13 20:16:58.477302 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 13 20:16:58.478244 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 13 20:16:58.479708 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 13 20:16:58.480758 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 13 20:16:58.481744 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 13 20:16:58.482877 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 13 20:16:58.482923 systemd[1]: Reached target paths.target - Path Units. Jan 13 20:16:58.483772 systemd[1]: Reached target timers.target - Timer Units. Jan 13 20:16:58.486693 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 13 20:16:58.488809 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 13 20:16:58.495046 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 13 20:16:58.497718 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 13 20:16:58.499114 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 13 20:16:58.499924 systemd[1]: Reached target sockets.target - Socket Units. Jan 13 20:16:58.500507 systemd[1]: Reached target basic.target - Basic System. Jan 13 20:16:58.501113 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 13 20:16:58.501151 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 13 20:16:58.504407 systemd[1]: Starting containerd.service - containerd container runtime... Jan 13 20:16:58.508583 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jan 13 20:16:58.512283 lvm[1453]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 13 20:16:58.514333 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 13 20:16:58.523500 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 13 20:16:58.529864 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 13 20:16:58.530635 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 13 20:16:58.537338 jq[1457]: false Jan 13 20:16:58.540589 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 13 20:16:58.542374 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 13 20:16:58.545560 systemd[1]: Started qemu-guest-agent.service - QEMU Guest Agent. Jan 13 20:16:58.549459 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 13 20:16:58.552198 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... 
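dbus.socket, sshd.socket and docker.socket all reach the "Listening" state before their daemons are running: systemd binds the sockets itself and hands them to the service on first use (socket activation). A minimal sketch of the receiving side, assuming the conventional fd-3/LISTEN_FDS protocol and omitting the LISTEN_PID check that production code should do:

```python
# Minimal sketch of a socket-activated daemon: systemd passes the bound socket
# as fd 3 and sets LISTEN_FDS. Real code should also verify LISTEN_PID.
import os
import socket

SD_LISTEN_FDS_START = 3
if int(os.environ.get("LISTEN_FDS", "0")) >= 1:
    srv = socket.socket(fileno=SD_LISTEN_FDS_START)    # adopt systemd's socket
else:
    srv = socket.create_server(("127.0.0.1", 8080))    # fallback when run by hand
conn, _peer = srv.accept()
conn.sendall(b"hello\n")
conn.close()
```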
Jan 13 20:16:58.558481 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 13 20:16:58.559774 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 13 20:16:58.561324 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 13 20:16:58.564493 systemd[1]: Starting update-engine.service - Update Engine... Jan 13 20:16:58.567288 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 13 20:16:58.568867 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 13 20:16:58.572645 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 13 20:16:58.572834 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 13 20:16:58.587581 coreos-metadata[1455]: Jan 13 20:16:58.587 INFO Fetching http://169.254.169.254/hetzner/v1/metadata: Attempt #1 Jan 13 20:16:58.592275 coreos-metadata[1455]: Jan 13 20:16:58.591 INFO Fetch successful Jan 13 20:16:58.595483 coreos-metadata[1455]: Jan 13 20:16:58.594 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/private-networks: Attempt #1 Jan 13 20:16:58.603506 coreos-metadata[1455]: Jan 13 20:16:58.600 INFO Fetch successful Jan 13 20:16:58.619102 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 13 20:16:58.619324 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 13 20:16:58.623812 jq[1468]: true Jan 13 20:16:58.628836 extend-filesystems[1458]: Found loop4 Jan 13 20:16:58.630812 extend-filesystems[1458]: Found loop5 Jan 13 20:16:58.630812 extend-filesystems[1458]: Found loop6 Jan 13 20:16:58.630812 extend-filesystems[1458]: Found loop7 Jan 13 20:16:58.630812 extend-filesystems[1458]: Found sda Jan 13 20:16:58.630812 extend-filesystems[1458]: Found sda1 Jan 13 20:16:58.630812 extend-filesystems[1458]: Found sda2 Jan 13 20:16:58.630812 extend-filesystems[1458]: Found sda3 Jan 13 20:16:58.630812 extend-filesystems[1458]: Found usr Jan 13 20:16:58.630812 extend-filesystems[1458]: Found sda4 Jan 13 20:16:58.630812 extend-filesystems[1458]: Found sda6 Jan 13 20:16:58.630812 extend-filesystems[1458]: Found sda7 Jan 13 20:16:58.630812 extend-filesystems[1458]: Found sda9 Jan 13 20:16:58.630812 extend-filesystems[1458]: Checking size of /dev/sda9 Jan 13 20:16:58.650916 systemd[1]: motdgen.service: Deactivated successfully. Jan 13 20:16:58.656583 tar[1475]: linux-arm64/helm Jan 13 20:16:58.651166 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 13 20:16:58.656754 dbus-daemon[1456]: [system] SELinux support is enabled Jan 13 20:16:58.658375 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 13 20:16:58.661200 (ntainerd)[1486]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 13 20:16:58.662291 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 13 20:16:58.662318 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. 
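coreos-metadata pulls the instance metadata from Hetzner's link-local endpoint, which only answers from inside the instance. The same two fetches it logs above, done by hand as a sketch:

```python
# Repeat the two fetches coreos-metadata logs above. URLs are verbatim from
# the log; this works only when run on the instance itself.
import urllib.request

base = "http://169.254.169.254/hetzner/v1/metadata"
for suffix in ("", "/private-networks"):
    with urllib.request.urlopen(base + suffix, timeout=2) as resp:
        body = resp.read().decode()
        print(f"{base + suffix}: {len(body)} bytes")
```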
Jan 13 20:16:58.664377 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 13 20:16:58.664403 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 13 20:16:58.666311 update_engine[1467]: I20250113 20:16:58.665184 1467 main.cc:92] Flatcar Update Engine starting Jan 13 20:16:58.670463 jq[1492]: true Jan 13 20:16:58.691269 update_engine[1467]: I20250113 20:16:58.689209 1467 update_check_scheduler.cc:74] Next update check in 4m41s Jan 13 20:16:58.692107 extend-filesystems[1458]: Resized partition /dev/sda9 Jan 13 20:16:58.698437 systemd[1]: Started update-engine.service - Update Engine. Jan 13 20:16:58.705504 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 13 20:16:58.711314 extend-filesystems[1501]: resize2fs 1.47.1 (20-May-2024) Jan 13 20:16:58.722928 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 9393147 blocks Jan 13 20:16:58.790358 systemd-logind[1466]: New seat seat0. Jan 13 20:16:58.797852 systemd-logind[1466]: Watching system buttons on /dev/input/event0 (Power Button) Jan 13 20:16:58.797885 systemd-logind[1466]: Watching system buttons on /dev/input/event2 (QEMU QEMU USB Keyboard) Jan 13 20:16:58.798163 systemd[1]: Started systemd-logind.service - User Login Management. Jan 13 20:16:58.823593 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 13 20:16:58.825101 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 13 20:16:58.838423 bash[1523]: Updated "/home/core/.ssh/authorized_keys" Jan 13 20:16:58.839528 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 13 20:16:58.858686 systemd[1]: Starting sshkeys.service... Jan 13 20:16:58.907457 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (1401) Jan 13 20:16:58.907533 kernel: EXT4-fs (sda9): resized filesystem to 9393147 Jan 13 20:16:58.934291 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jan 13 20:16:58.938823 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jan 13 20:16:58.940619 extend-filesystems[1501]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Jan 13 20:16:58.940619 extend-filesystems[1501]: old_desc_blocks = 1, new_desc_blocks = 5 Jan 13 20:16:58.940619 extend-filesystems[1501]: The filesystem on /dev/sda9 is now 9393147 (4k) blocks long. Jan 13 20:16:58.945238 extend-filesystems[1458]: Resized filesystem in /dev/sda9 Jan 13 20:16:58.945238 extend-filesystems[1458]: Found sr0 Jan 13 20:16:58.945809 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 13 20:16:58.946104 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. 
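The resize messages describe growing the root filesystem from 1617920 to 9393147 blocks of 4 KiB each, i.e. from roughly 6.2 GiB to roughly 35.8 GiB:

```python
# Size change behind the EXT4 resize messages: block size is 4 KiB here.
GiB = 1024 ** 3
block = 4096
print(f"before: {1617920 * block / GiB:.2f} GiB")   # ~6.17 GiB
print(f"after:  {9393147 * block / GiB:.2f} GiB")   # ~35.83 GiB
```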
Jan 13 20:16:58.986177 containerd[1486]: time="2025-01-13T20:16:58.986056320Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Jan 13 20:16:59.022257 coreos-metadata[1536]: Jan 13 20:16:59.021 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/public-keys: Attempt #1 Jan 13 20:16:59.025589 coreos-metadata[1536]: Jan 13 20:16:59.025 INFO Fetch successful Jan 13 20:16:59.028089 unknown[1536]: wrote ssh authorized keys file for user: core Jan 13 20:16:59.034594 containerd[1486]: time="2025-01-13T20:16:59.034535064Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 13 20:16:59.036468 containerd[1486]: time="2025-01-13T20:16:59.036419205Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.71-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 13 20:16:59.036572 containerd[1486]: time="2025-01-13T20:16:59.036557759Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 13 20:16:59.037777 containerd[1486]: time="2025-01-13T20:16:59.036613711Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 13 20:16:59.037777 containerd[1486]: time="2025-01-13T20:16:59.036781648Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 13 20:16:59.037777 containerd[1486]: time="2025-01-13T20:16:59.036799691Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 13 20:16:59.037777 containerd[1486]: time="2025-01-13T20:16:59.036871741Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 13 20:16:59.037777 containerd[1486]: time="2025-01-13T20:16:59.036887674Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 13 20:16:59.037777 containerd[1486]: time="2025-01-13T20:16:59.037050645Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 13 20:16:59.037777 containerd[1486]: time="2025-01-13T20:16:59.037064675Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 13 20:16:59.037777 containerd[1486]: time="2025-01-13T20:16:59.037077090Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 13 20:16:59.037777 containerd[1486]: time="2025-01-13T20:16:59.037087850Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 13 20:16:59.037777 containerd[1486]: time="2025-01-13T20:16:59.037156506Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 13 20:16:59.037777 containerd[1486]: time="2025-01-13T20:16:59.037392934Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
type=io.containerd.snapshotter.v1 Jan 13 20:16:59.038031 containerd[1486]: time="2025-01-13T20:16:59.037497222Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 13 20:16:59.038031 containerd[1486]: time="2025-01-13T20:16:59.037511086Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 13 20:16:59.038031 containerd[1486]: time="2025-01-13T20:16:59.037590254Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 13 20:16:59.038031 containerd[1486]: time="2025-01-13T20:16:59.037628783Z" level=info msg="metadata content store policy set" policy=shared Jan 13 20:16:59.042547 locksmithd[1505]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 13 20:16:59.047322 containerd[1486]: time="2025-01-13T20:16:59.045856669Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 13 20:16:59.047322 containerd[1486]: time="2025-01-13T20:16:59.045935175Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 13 20:16:59.047322 containerd[1486]: time="2025-01-13T20:16:59.045950901Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 13 20:16:59.047322 containerd[1486]: time="2025-01-13T20:16:59.045968406Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 13 20:16:59.047322 containerd[1486]: time="2025-01-13T20:16:59.045983925Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 13 20:16:59.047322 containerd[1486]: time="2025-01-13T20:16:59.046153394Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 13 20:16:59.047322 containerd[1486]: time="2025-01-13T20:16:59.046399382Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 13 20:16:59.047322 containerd[1486]: time="2025-01-13T20:16:59.046501228Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 13 20:16:59.047322 containerd[1486]: time="2025-01-13T20:16:59.046530445Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 13 20:16:59.047322 containerd[1486]: time="2025-01-13T20:16:59.046556931Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 13 20:16:59.047322 containerd[1486]: time="2025-01-13T20:16:59.046572119Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 13 20:16:59.047322 containerd[1486]: time="2025-01-13T20:16:59.046585693Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 13 20:16:59.047322 containerd[1486]: time="2025-01-13T20:16:59.046598357Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 13 20:16:59.047322 containerd[1486]: time="2025-01-13T20:16:59.046611558Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." 
type=io.containerd.service.v1 Jan 13 20:16:59.047658 containerd[1486]: time="2025-01-13T20:16:59.046627326Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 13 20:16:59.047658 containerd[1486]: time="2025-01-13T20:16:59.046643176Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 13 20:16:59.047658 containerd[1486]: time="2025-01-13T20:16:59.046656336Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 13 20:16:59.047658 containerd[1486]: time="2025-01-13T20:16:59.046668172Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 13 20:16:59.047658 containerd[1486]: time="2025-01-13T20:16:59.046691016Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 13 20:16:59.047658 containerd[1486]: time="2025-01-13T20:16:59.046708191Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 13 20:16:59.047658 containerd[1486]: time="2025-01-13T20:16:59.046720564Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 13 20:16:59.047658 containerd[1486]: time="2025-01-13T20:16:59.046734594Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 13 20:16:59.047658 containerd[1486]: time="2025-01-13T20:16:59.046746636Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 13 20:16:59.047658 containerd[1486]: time="2025-01-13T20:16:59.046763563Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 13 20:16:59.047658 containerd[1486]: time="2025-01-13T20:16:59.046775730Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 13 20:16:59.047658 containerd[1486]: time="2025-01-13T20:16:59.046788766Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 13 20:16:59.047658 containerd[1486]: time="2025-01-13T20:16:59.046802091Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 13 20:16:59.047658 containerd[1486]: time="2025-01-13T20:16:59.046816121Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 13 20:16:59.047920 containerd[1486]: time="2025-01-13T20:16:59.046829653Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 13 20:16:59.047920 containerd[1486]: time="2025-01-13T20:16:59.046842400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 13 20:16:59.047920 containerd[1486]: time="2025-01-13T20:16:59.046855353Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 13 20:16:59.047920 containerd[1486]: time="2025-01-13T20:16:59.046870251Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 13 20:16:59.047920 containerd[1486]: time="2025-01-13T20:16:59.046890778Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." 
type=io.containerd.grpc.v1 Jan 13 20:16:59.047920 containerd[1486]: time="2025-01-13T20:16:59.046903152Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 13 20:16:59.047920 containerd[1486]: time="2025-01-13T20:16:59.046914036Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 13 20:16:59.047920 containerd[1486]: time="2025-01-13T20:16:59.047085656Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 13 20:16:59.047920 containerd[1486]: time="2025-01-13T20:16:59.047105769Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 13 20:16:59.047920 containerd[1486]: time="2025-01-13T20:16:59.047117315Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 13 20:16:59.047920 containerd[1486]: time="2025-01-13T20:16:59.047128240Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 13 20:16:59.047920 containerd[1486]: time="2025-01-13T20:16:59.047137303Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 13 20:16:59.047920 containerd[1486]: time="2025-01-13T20:16:59.047149429Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 13 20:16:59.047920 containerd[1486]: time="2025-01-13T20:16:59.047158782Z" level=info msg="NRI interface is disabled by configuration." Jan 13 20:16:59.048153 containerd[1486]: time="2025-01-13T20:16:59.047170038Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jan 13 20:16:59.050956 containerd[1486]: time="2025-01-13T20:16:59.050206026Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 13 20:16:59.050956 containerd[1486]: time="2025-01-13T20:16:59.050297527Z" level=info msg="Connect containerd service" Jan 13 20:16:59.050956 containerd[1486]: time="2025-01-13T20:16:59.050683683Z" level=info msg="using legacy CRI server" Jan 13 20:16:59.050956 containerd[1486]: time="2025-01-13T20:16:59.050703340Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 13 20:16:59.051676 containerd[1486]: time="2025-01-13T20:16:59.051497255Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 13 20:16:59.053812 containerd[1486]: time="2025-01-13T20:16:59.052878081Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 13 20:16:59.053812 
containerd[1486]: time="2025-01-13T20:16:59.053076891Z" level=info msg="Start subscribing containerd event" Jan 13 20:16:59.053812 containerd[1486]: time="2025-01-13T20:16:59.053121420Z" level=info msg="Start recovering state" Jan 13 20:16:59.053812 containerd[1486]: time="2025-01-13T20:16:59.053191815Z" level=info msg="Start event monitor" Jan 13 20:16:59.053812 containerd[1486]: time="2025-01-13T20:16:59.053203485Z" level=info msg="Start snapshots syncer" Jan 13 20:16:59.053812 containerd[1486]: time="2025-01-13T20:16:59.053213417Z" level=info msg="Start cni network conf syncer for default" Jan 13 20:16:59.053812 containerd[1486]: time="2025-01-13T20:16:59.053221984Z" level=info msg="Start streaming server" Jan 13 20:16:59.054318 containerd[1486]: time="2025-01-13T20:16:59.054243636Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 13 20:16:59.054444 containerd[1486]: time="2025-01-13T20:16:59.054421588Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 13 20:16:59.055307 containerd[1486]: time="2025-01-13T20:16:59.055258749Z" level=info msg="containerd successfully booted in 0.074963s" Jan 13 20:16:59.055450 systemd[1]: Started containerd.service - containerd container runtime. Jan 13 20:16:59.067743 update-ssh-keys[1547]: Updated "/home/core/.ssh/authorized_keys" Jan 13 20:16:59.070800 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jan 13 20:16:59.078346 systemd[1]: Finished sshkeys.service. Jan 13 20:16:59.319820 tar[1475]: linux-arm64/LICENSE Jan 13 20:16:59.319820 tar[1475]: linux-arm64/README.md Jan 13 20:16:59.330642 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 13 20:16:59.437483 systemd-networkd[1374]: eth1: Gained IPv6LL Jan 13 20:16:59.441016 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 13 20:16:59.443713 systemd[1]: Reached target network-online.target - Network is Online. Jan 13 20:16:59.454557 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 20:16:59.456586 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 13 20:16:59.505776 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 13 20:16:59.629708 systemd-networkd[1374]: eth0: Gained IPv6LL Jan 13 20:16:59.911422 sshd_keygen[1498]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 13 20:16:59.932978 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 13 20:16:59.941110 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 13 20:16:59.949095 systemd[1]: issuegen.service: Deactivated successfully. Jan 13 20:16:59.950352 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 13 20:16:59.957405 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 13 20:16:59.968227 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 13 20:16:59.976331 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 13 20:16:59.985865 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Jan 13 20:16:59.988278 systemd[1]: Reached target getty.target - Login Prompts. Jan 13 20:17:00.242040 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 20:17:00.243475 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 13 20:17:00.248607 systemd[1]: Startup finished in 811ms (kernel) + 6.119s (initrd) + 4.471s (userspace) = 11.402s. 
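The three components of the "Startup finished" line sum to 11.401 s against the printed 11.402 s total; the off-by-one in the last digit is rounding, since systemd derives the total from microsecond-precision timestamps rather than from the rounded parts:

```python
# Rounded components vs. the printed total from the "Startup finished" line.
kernel, initrd, userspace = 0.811, 6.119, 4.471
print(f"{kernel + initrd + userspace:.3f} s")   # 11.401 s, log prints 11.402 s
```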
Jan 13 20:17:00.253238 (kubelet)[1586]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 13 20:17:00.259579 agetty[1580]: failed to open credentials directory Jan 13 20:17:00.261419 agetty[1579]: failed to open credentials directory Jan 13 20:17:00.824046 kubelet[1586]: E0113 20:17:00.823952 1586 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 13 20:17:00.826445 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 13 20:17:00.826594 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 13 20:17:11.077492 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 13 20:17:11.091768 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 20:17:11.211349 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 20:17:11.226841 (kubelet)[1605]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 13 20:17:11.283785 kubelet[1605]: E0113 20:17:11.283589 1605 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 13 20:17:11.286575 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 13 20:17:11.286710 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 13 20:17:21.336164 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 13 20:17:21.341586 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 20:17:21.454219 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 20:17:21.459408 (kubelet)[1619]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 13 20:17:21.513287 kubelet[1619]: E0113 20:17:21.513156 1619 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 13 20:17:21.515240 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 13 20:17:21.515392 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 13 20:17:31.586658 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jan 13 20:17:31.597634 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 20:17:31.707642 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
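kubelet exits immediately because /var/lib/kubelet/config.yaml does not exist yet; that file is typically written later in provisioning (for example by kubeadm when the node joins a cluster), so until then systemd simply restarts the unit on its configured delay. Reading the "Started kubelet.service" timestamps back out of the log shows the cadence, roughly ten seconds apart:

```python
from datetime import datetime

# "Started kubelet.service" timestamps copied from the log above.
starts = ["20:17:00.242040", "20:17:11.211349", "20:17:21.454219", "20:17:31.707642"]
times = [datetime.strptime(s, "%H:%M:%S.%f") for s in starts]
for a, b in zip(times, times[1:]):
    print(f"{(b - a).total_seconds():.1f} s between starts")   # ~10-11 s each
```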
Jan 13 20:17:31.712622 (kubelet)[1635]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 13 20:17:31.760784 kubelet[1635]: E0113 20:17:31.760681 1635 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 13 20:17:31.763767 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 13 20:17:31.764072 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 13 20:17:41.836108 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Jan 13 20:17:41.843584 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 20:17:41.992559 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 20:17:41.993007 (kubelet)[1649]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 13 20:17:42.037557 kubelet[1649]: E0113 20:17:42.037482 1649 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 13 20:17:42.039994 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 13 20:17:42.040274 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 13 20:17:43.888027 update_engine[1467]: I20250113 20:17:43.887840 1467 update_attempter.cc:509] Updating boot flags... Jan 13 20:17:43.945292 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (1665) Jan 13 20:17:44.020421 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (1664) Jan 13 20:17:44.073428 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (1664) Jan 13 20:17:52.085928 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Jan 13 20:17:52.096617 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 20:17:52.218365 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 20:17:52.223497 (kubelet)[1685]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 13 20:17:52.268906 kubelet[1685]: E0113 20:17:52.268809 1685 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 13 20:17:52.271476 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 13 20:17:52.271614 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 13 20:18:02.335996 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6. Jan 13 20:18:02.342918 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Jan 13 20:18:02.480981 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 20:18:02.486890 (kubelet)[1699]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 13 20:18:02.530567 kubelet[1699]: E0113 20:18:02.530483 1699 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 13 20:18:02.536006 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 13 20:18:02.536328 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 13 20:18:12.586043 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7. Jan 13 20:18:12.593625 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 20:18:12.703101 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 20:18:12.709212 (kubelet)[1715]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 13 20:18:12.755397 kubelet[1715]: E0113 20:18:12.755327 1715 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 13 20:18:12.758508 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 13 20:18:12.758866 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 13 20:18:22.836151 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 8. Jan 13 20:18:22.842699 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 20:18:22.967230 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 20:18:22.972681 (kubelet)[1730]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 13 20:18:23.020863 kubelet[1730]: E0113 20:18:23.020763 1730 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 13 20:18:23.023890 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 13 20:18:23.024035 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 13 20:18:33.085928 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 9. Jan 13 20:18:33.098357 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 20:18:33.233636 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 13 20:18:33.247899 (kubelet)[1744]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 13 20:18:33.290802 kubelet[1744]: E0113 20:18:33.290734 1744 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 13 20:18:33.293963 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 13 20:18:33.294165 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 13 20:18:43.336320 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 10. Jan 13 20:18:43.343619 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 20:18:43.459383 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 20:18:43.473852 (kubelet)[1759]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 13 20:18:43.523568 kubelet[1759]: E0113 20:18:43.523439 1759 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 13 20:18:43.527350 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 13 20:18:43.527831 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 13 20:18:49.938616 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 13 20:18:49.950669 systemd[1]: Started sshd@0-138.199.153.210:22-139.178.89.65:57320.service - OpenSSH per-connection server daemon (139.178.89.65:57320). Jan 13 20:18:50.953346 sshd[1768]: Accepted publickey for core from 139.178.89.65 port 57320 ssh2: RSA SHA256:mP9Np05W8ayjbouGSSYjPkGP0Fk3DK/yb5iC6Sb3lHc Jan 13 20:18:50.956014 sshd-session[1768]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:18:50.965485 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 13 20:18:50.975760 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 13 20:18:50.982352 systemd-logind[1466]: New session 1 of user core. Jan 13 20:18:50.994414 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 13 20:18:51.005198 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 13 20:18:51.009990 (systemd)[1772]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 13 20:18:51.117023 systemd[1772]: Queued start job for default target default.target. Jan 13 20:18:51.126983 systemd[1772]: Created slice app.slice - User Application Slice. Jan 13 20:18:51.127216 systemd[1772]: Reached target paths.target - Paths. Jan 13 20:18:51.127410 systemd[1772]: Reached target timers.target - Timers. Jan 13 20:18:51.130121 systemd[1772]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 13 20:18:51.143612 systemd[1772]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 13 20:18:51.143778 systemd[1772]: Reached target sockets.target - Sockets. 
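The sshd lines record a publickey login as user core from 139.178.89.65. A client-side sketch of such a connection using paramiko; the key path is an assumption, while the host and user come from the log:

```python
# Client-side sketch of the login recorded above (publickey auth as "core").
import os
import paramiko

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())  # tolerable for a throwaway CI node
client.connect("138.199.153.210", username="core",
               key_filename=os.path.expanduser("~/.ssh/id_rsa"))  # key path illustrative
_, stdout, _ = client.exec_command("hostname")
print(stdout.read().decode().strip())   # ci-4186-1-0-7-7ab547e2a5, per the log
client.close()
```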
Jan 13 20:18:51.143800 systemd[1772]: Reached target basic.target - Basic System. Jan 13 20:18:51.143884 systemd[1772]: Reached target default.target - Main User Target. Jan 13 20:18:51.143943 systemd[1772]: Startup finished in 126ms. Jan 13 20:18:51.144151 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 13 20:18:51.145692 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 13 20:18:51.844641 systemd[1]: Started sshd@1-138.199.153.210:22-139.178.89.65:60956.service - OpenSSH per-connection server daemon (139.178.89.65:60956). Jan 13 20:18:52.828247 sshd[1783]: Accepted publickey for core from 139.178.89.65 port 60956 ssh2: RSA SHA256:mP9Np05W8ayjbouGSSYjPkGP0Fk3DK/yb5iC6Sb3lHc Jan 13 20:18:52.830067 sshd-session[1783]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:18:52.837345 systemd-logind[1466]: New session 2 of user core. Jan 13 20:18:52.843646 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 13 20:18:53.508956 sshd[1785]: Connection closed by 139.178.89.65 port 60956 Jan 13 20:18:53.508201 sshd-session[1783]: pam_unix(sshd:session): session closed for user core Jan 13 20:18:53.513813 systemd[1]: sshd@1-138.199.153.210:22-139.178.89.65:60956.service: Deactivated successfully. Jan 13 20:18:53.517094 systemd[1]: session-2.scope: Deactivated successfully. Jan 13 20:18:53.520050 systemd-logind[1466]: Session 2 logged out. Waiting for processes to exit. Jan 13 20:18:53.521720 systemd-logind[1466]: Removed session 2. Jan 13 20:18:53.586577 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 11. Jan 13 20:18:53.597561 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 20:18:53.687651 systemd[1]: Started sshd@2-138.199.153.210:22-139.178.89.65:60968.service - OpenSSH per-connection server daemon (139.178.89.65:60968). Jan 13 20:18:53.733848 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 20:18:53.743976 (kubelet)[1800]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 13 20:18:53.784978 kubelet[1800]: E0113 20:18:53.784776 1800 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 13 20:18:53.788072 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 13 20:18:53.788441 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 13 20:18:54.682083 sshd[1793]: Accepted publickey for core from 139.178.89.65 port 60968 ssh2: RSA SHA256:mP9Np05W8ayjbouGSSYjPkGP0Fk3DK/yb5iC6Sb3lHc Jan 13 20:18:54.685905 sshd-session[1793]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:18:54.693929 systemd-logind[1466]: New session 3 of user core. Jan 13 20:18:54.705079 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 13 20:18:55.365085 sshd[1807]: Connection closed by 139.178.89.65 port 60968 Jan 13 20:18:55.365737 sshd-session[1793]: pam_unix(sshd:session): session closed for user core Jan 13 20:18:55.369989 systemd[1]: sshd@2-138.199.153.210:22-139.178.89.65:60968.service: Deactivated successfully. Jan 13 20:18:55.372533 systemd[1]: session-3.scope: Deactivated successfully. 
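Each incoming connection gets its own transient unit whose name encodes both endpoints, in the form sshd@<n>-<local-ip>:<port>-<remote-ip>:<port>.service. Pulling the pieces back out of one of the names above:

```python
import re

unit = "sshd@1-138.199.153.210:22-139.178.89.65:60956.service"
m = re.fullmatch(r"sshd@(\d+)-(.+):(\d+)-(.+):(\d+)\.service", unit)
n, local_ip, local_port, remote_ip, remote_port = m.groups()
print(f"connection #{n}: {remote_ip}:{remote_port} -> {local_ip}:{local_port}")
```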
Jan 13 20:18:55.373464 systemd-logind[1466]: Session 3 logged out. Waiting for processes to exit.
Jan 13 20:18:55.374805 systemd-logind[1466]: Removed session 3.
Jan 13 20:18:55.539817 systemd[1]: Started sshd@3-138.199.153.210:22-139.178.89.65:60970.service - OpenSSH per-connection server daemon (139.178.89.65:60970).
Jan 13 20:18:56.510417 sshd[1812]: Accepted publickey for core from 139.178.89.65 port 60970 ssh2: RSA SHA256:mP9Np05W8ayjbouGSSYjPkGP0Fk3DK/yb5iC6Sb3lHc
Jan 13 20:18:56.512591 sshd-session[1812]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:18:56.520142 systemd-logind[1466]: New session 4 of user core.
Jan 13 20:18:56.525653 systemd[1]: Started session-4.scope - Session 4 of User core.
Jan 13 20:18:57.184035 sshd[1814]: Connection closed by 139.178.89.65 port 60970
Jan 13 20:18:57.184793 sshd-session[1812]: pam_unix(sshd:session): session closed for user core
Jan 13 20:18:57.190936 systemd[1]: sshd@3-138.199.153.210:22-139.178.89.65:60970.service: Deactivated successfully.
Jan 13 20:18:57.194187 systemd[1]: session-4.scope: Deactivated successfully.
Jan 13 20:18:57.196797 systemd-logind[1466]: Session 4 logged out. Waiting for processes to exit.
Jan 13 20:18:57.199861 systemd-logind[1466]: Removed session 4.
Jan 13 20:18:57.361723 systemd[1]: Started sshd@4-138.199.153.210:22-139.178.89.65:60972.service - OpenSSH per-connection server daemon (139.178.89.65:60972).
Jan 13 20:18:58.342486 sshd[1819]: Accepted publickey for core from 139.178.89.65 port 60972 ssh2: RSA SHA256:mP9Np05W8ayjbouGSSYjPkGP0Fk3DK/yb5iC6Sb3lHc
Jan 13 20:18:58.344755 sshd-session[1819]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:18:58.349843 systemd-logind[1466]: New session 5 of user core.
Jan 13 20:18:58.364798 systemd[1]: Started session-5.scope - Session 5 of User core.
Jan 13 20:18:58.873317 sudo[1822]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Jan 13 20:18:58.873603 sudo[1822]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 13 20:18:58.894629 sudo[1822]: pam_unix(sudo:session): session closed for user root
Jan 13 20:18:59.054622 sshd[1821]: Connection closed by 139.178.89.65 port 60972
Jan 13 20:18:59.055026 sshd-session[1819]: pam_unix(sshd:session): session closed for user core
Jan 13 20:18:59.061923 systemd-logind[1466]: Session 5 logged out. Waiting for processes to exit.
Jan 13 20:18:59.062760 systemd[1]: sshd@4-138.199.153.210:22-139.178.89.65:60972.service: Deactivated successfully.
Jan 13 20:18:59.065848 systemd[1]: session-5.scope: Deactivated successfully.
Jan 13 20:18:59.066944 systemd-logind[1466]: Removed session 5.
Jan 13 20:18:59.235762 systemd[1]: Started sshd@5-138.199.153.210:22-139.178.89.65:60988.service - OpenSSH per-connection server daemon (139.178.89.65:60988).
Jan 13 20:19:00.228812 sshd[1827]: Accepted publickey for core from 139.178.89.65 port 60988 ssh2: RSA SHA256:mP9Np05W8ayjbouGSSYjPkGP0Fk3DK/yb5iC6Sb3lHc
Jan 13 20:19:00.230622 sshd-session[1827]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:19:00.238143 systemd-logind[1466]: New session 6 of user core.
Jan 13 20:19:00.245656 systemd[1]: Started session-6.scope - Session 6 of User core.
Jan 13 20:19:00.756574 sudo[1831]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Jan 13 20:19:00.756914 sudo[1831]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 13 20:19:00.761034 sudo[1831]: pam_unix(sudo:session): session closed for user root
Jan 13 20:19:00.766691 sudo[1830]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Jan 13 20:19:00.767031 sudo[1830]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 13 20:19:00.783024 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jan 13 20:19:00.821195 augenrules[1853]: No rules
Jan 13 20:19:00.821941 systemd[1]: audit-rules.service: Deactivated successfully.
Jan 13 20:19:00.822166 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jan 13 20:19:00.823507 sudo[1830]: pam_unix(sudo:session): session closed for user root
Jan 13 20:19:00.984893 sshd[1829]: Connection closed by 139.178.89.65 port 60988
Jan 13 20:19:00.985628 sshd-session[1827]: pam_unix(sshd:session): session closed for user core
Jan 13 20:19:00.989100 systemd[1]: sshd@5-138.199.153.210:22-139.178.89.65:60988.service: Deactivated successfully.
Jan 13 20:19:00.991038 systemd[1]: session-6.scope: Deactivated successfully.
Jan 13 20:19:00.992937 systemd-logind[1466]: Session 6 logged out. Waiting for processes to exit.
Jan 13 20:19:00.994211 systemd-logind[1466]: Removed session 6.
Jan 13 20:19:01.156478 systemd[1]: Started sshd@6-138.199.153.210:22-139.178.89.65:32770.service - OpenSSH per-connection server daemon (139.178.89.65:32770).
Jan 13 20:19:02.154284 sshd[1861]: Accepted publickey for core from 139.178.89.65 port 32770 ssh2: RSA SHA256:mP9Np05W8ayjbouGSSYjPkGP0Fk3DK/yb5iC6Sb3lHc
Jan 13 20:19:02.156535 sshd-session[1861]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:19:02.162160 systemd-logind[1466]: New session 7 of user core.
Jan 13 20:19:02.173602 systemd[1]: Started session-7.scope - Session 7 of User core.
Jan 13 20:19:02.674432 sudo[1864]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Jan 13 20:19:02.675100 sudo[1864]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 13 20:19:02.993203 systemd[1]: Starting docker.service - Docker Application Container Engine...
Jan 13 20:19:02.993281 (dockerd)[1883]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Jan 13 20:19:03.237841 dockerd[1883]: time="2025-01-13T20:19:03.237642101Z" level=info msg="Starting up"
Jan 13 20:19:03.343762 systemd[1]: var-lib-docker-metacopy\x2dcheck2620235339-merged.mount: Deactivated successfully.
Jan 13 20:19:03.351617 dockerd[1883]: time="2025-01-13T20:19:03.351565307Z" level=info msg="Loading containers: start."
Jan 13 20:19:03.539307 kernel: Initializing XFRM netlink socket
Jan 13 20:19:03.633047 systemd-networkd[1374]: docker0: Link UP
Jan 13 20:19:03.674416 dockerd[1883]: time="2025-01-13T20:19:03.673866728Z" level=info msg="Loading containers: done."
Jan 13 20:19:03.689895 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3801205079-merged.mount: Deactivated successfully.
Jan 13 20:19:03.699379 dockerd[1883]: time="2025-01-13T20:19:03.698630149Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Jan 13 20:19:03.699379 dockerd[1883]: time="2025-01-13T20:19:03.698798827Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1
Jan 13 20:19:03.699379 dockerd[1883]: time="2025-01-13T20:19:03.699052264Z" level=info msg="Daemon has completed initialization"
Jan 13 20:19:03.746402 dockerd[1883]: time="2025-01-13T20:19:03.746331049Z" level=info msg="API listen on /run/docker.sock"
Jan 13 20:19:03.746879 systemd[1]: Started docker.service - Docker Application Container Engine.
Jan 13 20:19:03.835968 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 12.
Jan 13 20:19:03.842514 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 13 20:19:03.957592 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 13 20:19:03.958511 (kubelet)[2077]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 13 20:19:04.001366 kubelet[2077]: E0113 20:19:04.001310 2077 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 13 20:19:04.005827 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 13 20:19:04.005973 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 13 20:19:04.897040 containerd[1486]: time="2025-01-13T20:19:04.896952191Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.4\""
Jan 13 20:19:05.556983 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount420121941.mount: Deactivated successfully.
Jan 13 20:19:06.398150 containerd[1486]: time="2025-01-13T20:19:06.397998840Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:19:06.400455 containerd[1486]: time="2025-01-13T20:19:06.400344629Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.4: active requests=0, bytes read=25615677"
Jan 13 20:19:06.401113 containerd[1486]: time="2025-01-13T20:19:06.401033386Z" level=info msg="ImageCreate event name:\"sha256:3e1123d6ebadbafa6eb77a9047f23f20befbbe2f177eb473a81b27a5de8c2ec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:19:06.404093 containerd[1486]: time="2025-01-13T20:19:06.404056812Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:ace6a943b058439bd6daeb74f152e7c36e6fc0b5e481cdff9364cd6ca0473e5e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:19:06.405246 containerd[1486]: time="2025-01-13T20:19:06.405075767Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.4\" with image id \"sha256:3e1123d6ebadbafa6eb77a9047f23f20befbbe2f177eb473a81b27a5de8c2ec5\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.4\", repo digest \"registry.k8s.io/kube-apiserver@sha256:ace6a943b058439bd6daeb74f152e7c36e6fc0b5e481cdff9364cd6ca0473e5e\", size \"25612385\" in 1.508070977s"
Jan 13 20:19:06.405246 containerd[1486]: time="2025-01-13T20:19:06.405114926Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.4\" returns image reference \"sha256:3e1123d6ebadbafa6eb77a9047f23f20befbbe2f177eb473a81b27a5de8c2ec5\""
Jan 13 20:19:06.406190 containerd[1486]: time="2025-01-13T20:19:06.406157322Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.4\""
Jan 13 20:19:07.520297 containerd[1486]: time="2025-01-13T20:19:07.518459033Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:19:07.520297 containerd[1486]: time="2025-01-13T20:19:07.519993949Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.4: active requests=0, bytes read=22470116"
Jan 13 20:19:07.521535 containerd[1486]: time="2025-01-13T20:19:07.521486224Z" level=info msg="ImageCreate event name:\"sha256:d5369864a42bf2c01d3ad462832526b7d3e40620c0e75fecefbffc203562ad55\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:19:07.525101 containerd[1486]: time="2025-01-13T20:19:07.525052574Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:4bd1d4a449e7a1a4f375bd7c71abf48a95f8949b38f725ded255077329f21f7b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:19:07.526452 containerd[1486]: time="2025-01-13T20:19:07.526414290Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.4\" with image id \"sha256:d5369864a42bf2c01d3ad462832526b7d3e40620c0e75fecefbffc203562ad55\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.4\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:4bd1d4a449e7a1a4f375bd7c71abf48a95f8949b38f725ded255077329f21f7b\", size \"23872417\" in 1.120213769s"
Jan 13 20:19:07.526607 containerd[1486]: time="2025-01-13T20:19:07.526587569Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.4\" returns image reference \"sha256:d5369864a42bf2c01d3ad462832526b7d3e40620c0e75fecefbffc203562ad55\""
Jan 13 20:19:07.527424 containerd[1486]: time="2025-01-13T20:19:07.527399407Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.4\""
Jan 13 20:19:08.437726 containerd[1486]: time="2025-01-13T20:19:08.436684815Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:19:08.438733 containerd[1486]: time="2025-01-13T20:19:08.438689053Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.4: active requests=0, bytes read=17024222"
Jan 13 20:19:08.440721 containerd[1486]: time="2025-01-13T20:19:08.440682890Z" level=info msg="ImageCreate event name:\"sha256:d99fc9a32f6b42ab5537eec09d599efae0f61c109406dae1ba255cec288fcb95\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:19:08.444142 containerd[1486]: time="2025-01-13T20:19:08.444098766Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:1a3081cb7d21763d22eb2c0781cc462d89f501ed523ad558dea1226f128fbfdd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:19:08.446291 containerd[1486]: time="2025-01-13T20:19:08.446243804Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.4\" with image id \"sha256:d99fc9a32f6b42ab5537eec09d599efae0f61c109406dae1ba255cec288fcb95\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.4\", repo digest \"registry.k8s.io/kube-scheduler@sha256:1a3081cb7d21763d22eb2c0781cc462d89f501ed523ad558dea1226f128fbfdd\", size \"18426541\" in 918.617878ms"
Jan 13 20:19:08.446417 containerd[1486]: time="2025-01-13T20:19:08.446402803Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.4\" returns image reference \"sha256:d99fc9a32f6b42ab5537eec09d599efae0f61c109406dae1ba255cec288fcb95\""
Jan 13 20:19:08.447138 containerd[1486]: time="2025-01-13T20:19:08.447107723Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.4\""
Jan 13 20:19:09.437566 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3513228998.mount: Deactivated successfully.
Jan 13 20:19:09.781370 containerd[1486]: time="2025-01-13T20:19:09.779999063Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:19:09.781871 containerd[1486]: time="2025-01-13T20:19:09.781815384Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.4: active requests=0, bytes read=26771452"
Jan 13 20:19:09.783729 containerd[1486]: time="2025-01-13T20:19:09.783683545Z" level=info msg="ImageCreate event name:\"sha256:34e142197cb996099cc1e98902c112642b3fb3dc559140c0a95279aa8d254d3a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:19:09.789037 containerd[1486]: time="2025-01-13T20:19:09.788945307Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:1739b3febca392035bf6edfe31efdfa55226be7b57389b2001ae357f7dcb99cf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:19:09.789294 containerd[1486]: time="2025-01-13T20:19:09.789249747Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.4\" with image id \"sha256:34e142197cb996099cc1e98902c112642b3fb3dc559140c0a95279aa8d254d3a\", repo tag \"registry.k8s.io/kube-proxy:v1.31.4\", repo digest \"registry.k8s.io/kube-proxy@sha256:1739b3febca392035bf6edfe31efdfa55226be7b57389b2001ae357f7dcb99cf\", size \"26770445\" in 1.342010425s"
Jan 13 20:19:09.789365 containerd[1486]: time="2025-01-13T20:19:09.789351068Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.4\" returns image reference \"sha256:34e142197cb996099cc1e98902c112642b3fb3dc559140c0a95279aa8d254d3a\""
Jan 13 20:19:09.790398 containerd[1486]: time="2025-01-13T20:19:09.789881588Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\""
Jan 13 20:19:10.446119 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2801602910.mount: Deactivated successfully.
Jan 13 20:19:11.069305 containerd[1486]: time="2025-01-13T20:19:11.067973397Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:19:11.070878 containerd[1486]: time="2025-01-13T20:19:11.070827128Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485461"
Jan 13 20:19:11.072491 containerd[1486]: time="2025-01-13T20:19:11.072410254Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:19:11.075827 containerd[1486]: time="2025-01-13T20:19:11.075775906Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:19:11.077125 containerd[1486]: time="2025-01-13T20:19:11.077089791Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.287178003s"
Jan 13 20:19:11.077212 containerd[1486]: time="2025-01-13T20:19:11.077198111Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\""
Jan 13 20:19:11.077762 containerd[1486]: time="2025-01-13T20:19:11.077739953Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Jan 13 20:19:11.599786 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2273590237.mount: Deactivated successfully.
Jan 13 20:19:11.610300 containerd[1486]: time="2025-01-13T20:19:11.609862095Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:19:11.612685 containerd[1486]: time="2025-01-13T20:19:11.612391904Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268723"
Jan 13 20:19:11.615133 containerd[1486]: time="2025-01-13T20:19:11.615045154Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:19:11.618022 containerd[1486]: time="2025-01-13T20:19:11.617944804Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:19:11.619296 containerd[1486]: time="2025-01-13T20:19:11.619032128Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 541.183095ms"
Jan 13 20:19:11.619296 containerd[1486]: time="2025-01-13T20:19:11.619077328Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\""
Jan 13 20:19:11.620236 containerd[1486]: time="2025-01-13T20:19:11.620201532Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\""
Jan 13 20:19:12.210367 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1157721204.mount: Deactivated successfully.
Jan 13 20:19:13.642189 containerd[1486]: time="2025-01-13T20:19:13.642122084Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:19:13.646714 containerd[1486]: time="2025-01-13T20:19:13.646546153Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=66406487"
Jan 13 20:19:13.649910 containerd[1486]: time="2025-01-13T20:19:13.648628527Z" level=info msg="ImageCreate event name:\"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:19:13.652044 containerd[1486]: time="2025-01-13T20:19:13.652006990Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:19:13.653176 containerd[1486]: time="2025-01-13T20:19:13.653132597Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"66535646\" in 2.032890345s"
Jan 13 20:19:13.653176 containerd[1486]: time="2025-01-13T20:19:13.653172317Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\""
Jan 13 20:19:14.086220 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 13.
Jan 13 20:19:14.095845 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 13 20:19:14.219556 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 13 20:19:14.225179 (kubelet)[2273]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 13 20:19:14.270836 kubelet[2273]: E0113 20:19:14.270741 2273 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 13 20:19:14.272741 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 13 20:19:14.272861 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 13 20:19:19.732935 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 13 20:19:19.740638 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 13 20:19:19.778989 systemd[1]: Reloading requested from client PID 2299 ('systemctl') (unit session-7.scope)...
Jan 13 20:19:19.779142 systemd[1]: Reloading...
Jan 13 20:19:19.892300 zram_generator::config[2335]: No configuration found.
Jan 13 20:19:20.006079 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 13 20:19:20.075586 systemd[1]: Reloading finished in 296 ms.
Jan 13 20:19:20.142770 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Jan 13 20:19:20.143176 systemd[1]: kubelet.service: Failed with result 'signal'.
Jan 13 20:19:20.143608 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 13 20:19:20.150873 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 13 20:19:20.277782 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 13 20:19:20.283590 (kubelet)[2387]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jan 13 20:19:20.326455 kubelet[2387]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 13 20:19:20.326455 kubelet[2387]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Jan 13 20:19:20.326455 kubelet[2387]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 13 20:19:20.326876 kubelet[2387]: I0113 20:19:20.326650 2387 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jan 13 20:19:20.804889 kubelet[2387]: I0113 20:19:20.804832 2387 server.go:486] "Kubelet version" kubeletVersion="v1.31.0"
Jan 13 20:19:20.804889 kubelet[2387]: I0113 20:19:20.804870 2387 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jan 13 20:19:20.805187 kubelet[2387]: I0113 20:19:20.805155 2387 server.go:929] "Client rotation is on, will bootstrap in background"
Jan 13 20:19:20.837790 kubelet[2387]: E0113 20:19:20.837738 2387 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://138.199.153.210:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 138.199.153.210:6443: connect: connection refused" logger="UnhandledError"
Jan 13 20:19:20.838759 kubelet[2387]: I0113 20:19:20.838551 2387 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jan 13 20:19:20.848556 kubelet[2387]: E0113 20:19:20.848440 2387 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Jan 13 20:19:20.848906 kubelet[2387]: I0113 20:19:20.848717 2387 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Jan 13 20:19:20.852767 kubelet[2387]: I0113 20:19:20.852729 2387 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jan 13 20:19:20.853349 kubelet[2387]: I0113 20:19:20.853183 2387 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Jan 13 20:19:20.854281 kubelet[2387]: I0113 20:19:20.853459 2387 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jan 13 20:19:20.854281 kubelet[2387]: I0113 20:19:20.853514 2387 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4186-1-0-7-7ab547e2a5","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jan 13 20:19:20.854281 kubelet[2387]: I0113 20:19:20.853737 2387 topology_manager.go:138] "Creating topology manager with none policy"
Jan 13 20:19:20.854281 kubelet[2387]: I0113 20:19:20.853748 2387 container_manager_linux.go:300] "Creating device plugin manager"
Jan 13 20:19:20.854520 kubelet[2387]: I0113 20:19:20.853947 2387 state_mem.go:36] "Initialized new in-memory state store"
Jan 13 20:19:20.857219 kubelet[2387]: I0113 20:19:20.857185 2387 kubelet.go:408] "Attempting to sync node with API server"
Jan 13 20:19:20.857414 kubelet[2387]: I0113 20:19:20.857399 2387 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
Jan 13 20:19:20.858143 kubelet[2387]: I0113 20:19:20.858128 2387 kubelet.go:314] "Adding apiserver pod source"
Jan 13 20:19:20.858228 kubelet[2387]: I0113 20:19:20.858217 2387 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jan 13 20:19:20.863998 kubelet[2387]: W0113 20:19:20.863931 2387 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://138.199.153.210:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4186-1-0-7-7ab547e2a5&limit=500&resourceVersion=0": dial tcp 138.199.153.210:6443: connect: connection refused
Jan 13 20:19:20.864592 kubelet[2387]: E0113 20:19:20.864200 2387 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://138.199.153.210:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4186-1-0-7-7ab547e2a5&limit=500&resourceVersion=0\": dial tcp 138.199.153.210:6443: connect: connection refused" logger="UnhandledError"
Jan 13 20:19:20.864592 kubelet[2387]: I0113 20:19:20.864341 2387 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1"
Jan 13 20:19:20.866418 kubelet[2387]: I0113 20:19:20.866395 2387 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jan 13 20:19:20.867439 kubelet[2387]: W0113 20:19:20.867418 2387 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Jan 13 20:19:20.869756 kubelet[2387]: I0113 20:19:20.869727 2387 server.go:1269] "Started kubelet"
Jan 13 20:19:20.871153 kubelet[2387]: W0113 20:19:20.870678 2387 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://138.199.153.210:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 138.199.153.210:6443: connect: connection refused
Jan 13 20:19:20.871153 kubelet[2387]: E0113 20:19:20.870744 2387 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://138.199.153.210:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 138.199.153.210:6443: connect: connection refused" logger="UnhandledError"
Jan 13 20:19:20.871153 kubelet[2387]: I0113 20:19:20.870836 2387 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Jan 13 20:19:20.876613 kubelet[2387]: I0113 20:19:20.876553 2387 server.go:460] "Adding debug handlers to kubelet server"
Jan 13 20:19:20.877685 kubelet[2387]: I0113 20:19:20.872670 2387 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jan 13 20:19:20.877961 kubelet[2387]: I0113 20:19:20.877933 2387 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jan 13 20:19:20.879914 kubelet[2387]: E0113 20:19:20.877889 2387 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://138.199.153.210:6443/api/v1/namespaces/default/events\": dial tcp 138.199.153.210:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4186-1-0-7-7ab547e2a5.181a59ffa7b15a5d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4186-1-0-7-7ab547e2a5,UID:ci-4186-1-0-7-7ab547e2a5,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4186-1-0-7-7ab547e2a5,},FirstTimestamp:2025-01-13 20:19:20.869698141 +0000 UTC m=+0.582729925,LastTimestamp:2025-01-13 20:19:20.869698141 +0000 UTC m=+0.582729925,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4186-1-0-7-7ab547e2a5,}"
Jan 13 20:19:20.879914 kubelet[2387]: I0113 20:19:20.879710 2387 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jan 13 20:19:20.881239 kubelet[2387]: I0113 20:19:20.881215 2387 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Jan 13 20:19:20.882755 kubelet[2387]: I0113 20:19:20.882726 2387 volume_manager.go:289] "Starting Kubelet Volume Manager"
Jan 13 20:19:20.883124 kubelet[2387]: E0113 20:19:20.883094 2387 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4186-1-0-7-7ab547e2a5\" not found"
Jan 13 20:19:20.885160 kubelet[2387]: E0113 20:19:20.885071 2387 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://138.199.153.210:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4186-1-0-7-7ab547e2a5?timeout=10s\": dial tcp 138.199.153.210:6443: connect: connection refused" interval="200ms"
Jan 13 20:19:20.885889 kubelet[2387]: I0113 20:19:20.885608 2387 factory.go:221] Registration of the systemd container factory successfully
Jan 13 20:19:20.885889 kubelet[2387]: I0113 20:19:20.885728 2387 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jan 13 20:19:20.889292 kubelet[2387]: W0113 20:19:20.888606 2387 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://138.199.153.210:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 138.199.153.210:6443: connect: connection refused
Jan 13 20:19:20.889292 kubelet[2387]: E0113 20:19:20.888666 2387 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://138.199.153.210:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 138.199.153.210:6443: connect: connection refused" logger="UnhandledError"
Jan 13 20:19:20.889292 kubelet[2387]: I0113 20:19:20.888754 2387 reconciler.go:26] "Reconciler: start to sync state"
Jan 13 20:19:20.889292 kubelet[2387]: I0113 20:19:20.888775 2387 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Jan 13 20:19:20.889806 kubelet[2387]: I0113 20:19:20.889768 2387 factory.go:221] Registration of the containerd container factory successfully
Jan 13 20:19:20.889891 kubelet[2387]: E0113 20:19:20.889789 2387 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jan 13 20:19:20.899592 kubelet[2387]: I0113 20:19:20.899545 2387 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jan 13 20:19:20.900946 kubelet[2387]: I0113 20:19:20.900919 2387 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jan 13 20:19:20.901077 kubelet[2387]: I0113 20:19:20.901066 2387 status_manager.go:217] "Starting to sync pod status with apiserver"
Jan 13 20:19:20.901138 kubelet[2387]: I0113 20:19:20.901129 2387 kubelet.go:2321] "Starting kubelet main sync loop"
Jan 13 20:19:20.901284 kubelet[2387]: E0113 20:19:20.901232 2387 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jan 13 20:19:20.911165 kubelet[2387]: W0113 20:19:20.911002 2387 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://138.199.153.210:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 138.199.153.210:6443: connect: connection refused
Jan 13 20:19:20.912184 kubelet[2387]: E0113 20:19:20.911814 2387 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://138.199.153.210:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 138.199.153.210:6443: connect: connection refused" logger="UnhandledError"
Jan 13 20:19:20.919903 kubelet[2387]: I0113 20:19:20.919583 2387 cpu_manager.go:214] "Starting CPU manager" policy="none"
Jan 13 20:19:20.919903 kubelet[2387]: I0113 20:19:20.919605 2387 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Jan 13 20:19:20.919903 kubelet[2387]: I0113 20:19:20.919627 2387 state_mem.go:36] "Initialized new in-memory state store"
Jan 13 20:19:20.922691 kubelet[2387]: I0113 20:19:20.922644 2387 policy_none.go:49] "None policy: Start"
Jan 13 20:19:20.924971 kubelet[2387]: I0113 20:19:20.924354 2387 memory_manager.go:170] "Starting memorymanager" policy="None"
Jan 13 20:19:20.924971 kubelet[2387]: I0113 20:19:20.924405 2387 state_mem.go:35] "Initializing new in-memory state store"
Jan 13 20:19:20.935863 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Jan 13 20:19:20.946879 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Jan 13 20:19:20.952580 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Jan 13 20:19:20.965442 kubelet[2387]: I0113 20:19:20.964644 2387 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jan 13 20:19:20.965442 kubelet[2387]: I0113 20:19:20.964905 2387 eviction_manager.go:189] "Eviction manager: starting control loop"
Jan 13 20:19:20.965442 kubelet[2387]: I0113 20:19:20.964920 2387 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jan 13 20:19:20.965442 kubelet[2387]: I0113 20:19:20.965242 2387 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jan 13 20:19:20.969123 kubelet[2387]: E0113 20:19:20.968808 2387 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4186-1-0-7-7ab547e2a5\" not found"
Jan 13 20:19:21.017368 systemd[1]: Created slice kubepods-burstable-pod87dd1bdd6daa7f558c4bdb5834a96741.slice - libcontainer container kubepods-burstable-pod87dd1bdd6daa7f558c4bdb5834a96741.slice.
Jan 13 20:19:21.037706 systemd[1]: Created slice kubepods-burstable-pod67931252ad9d009af4e61d3821a64e58.slice - libcontainer container kubepods-burstable-pod67931252ad9d009af4e61d3821a64e58.slice.
Jan 13 20:19:21.050892 systemd[1]: Created slice kubepods-burstable-pod3fc138c51a58149825e8152920567bff.slice - libcontainer container kubepods-burstable-pod3fc138c51a58149825e8152920567bff.slice.
Jan 13 20:19:21.068325 kubelet[2387]: I0113 20:19:21.067868 2387 kubelet_node_status.go:72] "Attempting to register node" node="ci-4186-1-0-7-7ab547e2a5"
Jan 13 20:19:21.069347 kubelet[2387]: E0113 20:19:21.069232 2387 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://138.199.153.210:6443/api/v1/nodes\": dial tcp 138.199.153.210:6443: connect: connection refused" node="ci-4186-1-0-7-7ab547e2a5"
Jan 13 20:19:21.085960 kubelet[2387]: E0113 20:19:21.085869 2387 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://138.199.153.210:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4186-1-0-7-7ab547e2a5?timeout=10s\": dial tcp 138.199.153.210:6443: connect: connection refused" interval="400ms"
Jan 13 20:19:21.089664 kubelet[2387]: I0113 20:19:21.089543 2387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/87dd1bdd6daa7f558c4bdb5834a96741-ca-certs\") pod \"kube-controller-manager-ci-4186-1-0-7-7ab547e2a5\" (UID: \"87dd1bdd6daa7f558c4bdb5834a96741\") " pod="kube-system/kube-controller-manager-ci-4186-1-0-7-7ab547e2a5"
Jan 13 20:19:21.190387 kubelet[2387]: I0113 20:19:21.190134 2387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/87dd1bdd6daa7f558c4bdb5834a96741-k8s-certs\") pod \"kube-controller-manager-ci-4186-1-0-7-7ab547e2a5\" (UID: \"87dd1bdd6daa7f558c4bdb5834a96741\") " pod="kube-system/kube-controller-manager-ci-4186-1-0-7-7ab547e2a5"
Jan 13 20:19:21.190387 kubelet[2387]: I0113 20:19:21.190228 2387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/87dd1bdd6daa7f558c4bdb5834a96741-kubeconfig\") pod \"kube-controller-manager-ci-4186-1-0-7-7ab547e2a5\" (UID: \"87dd1bdd6daa7f558c4bdb5834a96741\") " pod="kube-system/kube-controller-manager-ci-4186-1-0-7-7ab547e2a5"
Jan 13 20:19:21.190387 kubelet[2387]: I0113 20:19:21.190280 2387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/87dd1bdd6daa7f558c4bdb5834a96741-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4186-1-0-7-7ab547e2a5\" (UID: \"87dd1bdd6daa7f558c4bdb5834a96741\") " pod="kube-system/kube-controller-manager-ci-4186-1-0-7-7ab547e2a5"
Jan 13 20:19:21.190387 kubelet[2387]: I0113 20:19:21.190321 2387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3fc138c51a58149825e8152920567bff-k8s-certs\") pod \"kube-apiserver-ci-4186-1-0-7-7ab547e2a5\" (UID: \"3fc138c51a58149825e8152920567bff\") " pod="kube-system/kube-apiserver-ci-4186-1-0-7-7ab547e2a5"
Jan 13 20:19:21.190387 kubelet[2387]: I0113 20:19:21.190349 2387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3fc138c51a58149825e8152920567bff-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4186-1-0-7-7ab547e2a5\" (UID: \"3fc138c51a58149825e8152920567bff\") " pod="kube-system/kube-apiserver-ci-4186-1-0-7-7ab547e2a5"
Jan 13 20:19:21.190865 kubelet[2387]: I0113 20:19:21.190381 2387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/87dd1bdd6daa7f558c4bdb5834a96741-flexvolume-dir\") pod \"kube-controller-manager-ci-4186-1-0-7-7ab547e2a5\" (UID: \"87dd1bdd6daa7f558c4bdb5834a96741\") " pod="kube-system/kube-controller-manager-ci-4186-1-0-7-7ab547e2a5"
Jan 13 20:19:21.190865 kubelet[2387]: I0113 20:19:21.190410 2387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/67931252ad9d009af4e61d3821a64e58-kubeconfig\") pod \"kube-scheduler-ci-4186-1-0-7-7ab547e2a5\" (UID: \"67931252ad9d009af4e61d3821a64e58\") " pod="kube-system/kube-scheduler-ci-4186-1-0-7-7ab547e2a5"
Jan 13 20:19:21.190865 kubelet[2387]: I0113 20:19:21.190434 2387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3fc138c51a58149825e8152920567bff-ca-certs\") pod \"kube-apiserver-ci-4186-1-0-7-7ab547e2a5\" (UID: \"3fc138c51a58149825e8152920567bff\") " pod="kube-system/kube-apiserver-ci-4186-1-0-7-7ab547e2a5"
Jan 13 20:19:21.272795 kubelet[2387]: I0113 20:19:21.272690 2387 kubelet_node_status.go:72] "Attempting to register node" node="ci-4186-1-0-7-7ab547e2a5"
Jan 13 20:19:21.273406 kubelet[2387]: E0113 20:19:21.273352 2387 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://138.199.153.210:6443/api/v1/nodes\": dial tcp 138.199.153.210:6443: connect: connection refused" node="ci-4186-1-0-7-7ab547e2a5"
Jan 13 20:19:21.334551 containerd[1486]: time="2025-01-13T20:19:21.334325662Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4186-1-0-7-7ab547e2a5,Uid:87dd1bdd6daa7f558c4bdb5834a96741,Namespace:kube-system,Attempt:0,}"
Jan 13 20:19:21.350069 containerd[1486]: time="2025-01-13T20:19:21.349336596Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4186-1-0-7-7ab547e2a5,Uid:67931252ad9d009af4e61d3821a64e58,Namespace:kube-system,Attempt:0,}"
Jan 13 20:19:21.355150 containerd[1486]: time="2025-01-13T20:19:21.354852889Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4186-1-0-7-7ab547e2a5,Uid:3fc138c51a58149825e8152920567bff,Namespace:kube-system,Attempt:0,}"
Jan 13 20:19:21.487736 kubelet[2387]: E0113 20:19:21.487686 2387 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://138.199.153.210:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4186-1-0-7-7ab547e2a5?timeout=10s\": dial tcp 138.199.153.210:6443: connect: connection refused" interval="800ms"
Jan 13 20:19:21.676714 kubelet[2387]: I0113 20:19:21.675697 2387 kubelet_node_status.go:72] "Attempting to register node" node="ci-4186-1-0-7-7ab547e2a5"
Jan 13 20:19:21.676714 kubelet[2387]: E0113 20:19:21.676299 2387 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://138.199.153.210:6443/api/v1/nodes\": dial tcp 138.199.153.210:6443: connect: connection refused" node="ci-4186-1-0-7-7ab547e2a5"
Jan 13 20:19:21.711414 kubelet[2387]: W0113 20:19:21.711352 2387 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://138.199.153.210:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 138.199.153.210:6443: connect: connection refused
Jan 13 20:19:21.711414 kubelet[2387]: E0113 20:19:21.711414 2387 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://138.199.153.210:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 138.199.153.210:6443: connect: connection refused" logger="UnhandledError"
Jan 13 20:19:21.818876 kubelet[2387]: W0113 20:19:21.818664 2387 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://138.199.153.210:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 138.199.153.210:6443: connect: connection refused
Jan 13 20:19:21.818876 kubelet[2387]: E0113 20:19:21.818806 2387 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://138.199.153.210:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 138.199.153.210:6443: connect: connection refused" logger="UnhandledError"
Jan 13 20:19:21.856876 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1678520905.mount: Deactivated successfully.
Jan 13 20:19:21.865469 containerd[1486]: time="2025-01-13T20:19:21.864964172Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 13 20:19:21.869703 containerd[1486]: time="2025-01-13T20:19:21.869514489Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269193"
Jan 13 20:19:21.872824 containerd[1486]: time="2025-01-13T20:19:21.871978691Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 13 20:19:21.874330 containerd[1486]: time="2025-01-13T20:19:21.873970405Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 13 20:19:21.878745 containerd[1486]: time="2025-01-13T20:19:21.876864414Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Jan 13 20:19:21.878745 containerd[1486]: time="2025-01-13T20:19:21.878353679Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Jan 13 20:19:21.878745 containerd[1486]: time="2025-01-13T20:19:21.878496961Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 13 20:19:21.881058 containerd[1486]: time="2025-01-13T20:19:21.881013324Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 531.588047ms"
Jan 13 20:19:21.881787 containerd[1486]: time="2025-01-13T20:19:21.881745896Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 13 20:19:21.883197 containerd[1486]: time="2025-01-13T20:19:21.883150200Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 548.742657ms"
Jan 13 20:19:21.887415 containerd[1486]: time="2025-01-13T20:19:21.887364351Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 532.42866ms"
Jan 13 20:19:21.953546 kubelet[2387]: W0113 20:19:21.953386 2387 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://138.199.153.210:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4186-1-0-7-7ab547e2a5&limit=500&resourceVersion=0": dial tcp 138.199.153.210:6443: connect: connection refused
Jan 13 20:19:21.954169 kubelet[2387]: E0113 20:19:21.954142 2387 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://138.199.153.210:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4186-1-0-7-7ab547e2a5&limit=500&resourceVersion=0\": dial tcp 138.199.153.210:6443: connect: connection refused" logger="UnhandledError"
Jan 13 20:19:21.999714 containerd[1486]: time="2025-01-13T20:19:21.998997243Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 13 20:19:21.999714 containerd[1486]: time="2025-01-13T20:19:21.999526372Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 13 20:19:21.999714 containerd[1486]: time="2025-01-13T20:19:21.999546372Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 20:19:21.999714 containerd[1486]: time="2025-01-13T20:19:21.999637774Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 20:19:22.004402 containerd[1486]: time="2025-01-13T20:19:22.001635968Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 13 20:19:22.004402 containerd[1486]: time="2025-01-13T20:19:22.001693289Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 13 20:19:22.004402 containerd[1486]: time="2025-01-13T20:19:22.001703689Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 20:19:22.004402 containerd[1486]: time="2025-01-13T20:19:22.001781731Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 20:19:22.007608 containerd[1486]: time="2025-01-13T20:19:22.007461313Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 13 20:19:22.008539 containerd[1486]: time="2025-01-13T20:19:22.008466611Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 13 20:19:22.008811 containerd[1486]: time="2025-01-13T20:19:22.008772297Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 20:19:22.009197 containerd[1486]: time="2025-01-13T20:19:22.009159984Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 20:19:22.028492 systemd[1]: Started cri-containerd-b37458d0ee01047632b4dda6f9f05ed7d4b60c880720fc687226ce93a872d43b.scope - libcontainer container b37458d0ee01047632b4dda6f9f05ed7d4b60c880720fc687226ce93a872d43b.
Jan 13 20:19:22.048897 systemd[1]: Started cri-containerd-012fa3c28932b3ad7b4c03769e958267b0f06a10ebf5b8eff8541a35af275026.scope - libcontainer container 012fa3c28932b3ad7b4c03769e958267b0f06a10ebf5b8eff8541a35af275026.
Jan 13 20:19:22.068392 systemd[1]: Started cri-containerd-a1915b3d1afd547cfad5359411287279835fa138c2bd5696feaef969af16803b.scope - libcontainer container a1915b3d1afd547cfad5359411287279835fa138c2bd5696feaef969af16803b.
Jan 13 20:19:22.083446 kubelet[2387]: W0113 20:19:22.083336 2387 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://138.199.153.210:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 138.199.153.210:6443: connect: connection refused
Jan 13 20:19:22.083446 kubelet[2387]: E0113 20:19:22.083396 2387 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://138.199.153.210:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 138.199.153.210:6443: connect: connection refused" logger="UnhandledError"
Jan 13 20:19:22.127204 containerd[1486]: time="2025-01-13T20:19:22.127061432Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4186-1-0-7-7ab547e2a5,Uid:67931252ad9d009af4e61d3821a64e58,Namespace:kube-system,Attempt:0,} returns sandbox id \"b37458d0ee01047632b4dda6f9f05ed7d4b60c880720fc687226ce93a872d43b\""
Jan 13 20:19:22.129843 containerd[1486]: time="2025-01-13T20:19:22.129512957Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4186-1-0-7-7ab547e2a5,Uid:3fc138c51a58149825e8152920567bff,Namespace:kube-system,Attempt:0,} returns sandbox id \"a1915b3d1afd547cfad5359411287279835fa138c2bd5696feaef969af16803b\""
Jan 13 20:19:22.134712 containerd[1486]: time="2025-01-13T20:19:22.134505607Z" level=info msg="CreateContainer within sandbox \"b37458d0ee01047632b4dda6f9f05ed7d4b60c880720fc687226ce93a872d43b\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Jan 13 20:19:22.135608 containerd[1486]: time="2025-01-13T20:19:22.135368182Z" level=info msg="CreateContainer within sandbox \"a1915b3d1afd547cfad5359411287279835fa138c2bd5696feaef969af16803b\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Jan 13 20:19:22.140096 containerd[1486]: time="2025-01-13T20:19:22.140011866Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4186-1-0-7-7ab547e2a5,Uid:87dd1bdd6daa7f558c4bdb5834a96741,Namespace:kube-system,Attempt:0,} returns sandbox id \"012fa3c28932b3ad7b4c03769e958267b0f06a10ebf5b8eff8541a35af275026\""
Jan 13 20:19:22.143860 containerd[1486]: time="2025-01-13T20:19:22.143711733Z" level=info msg="CreateContainer within sandbox \"012fa3c28932b3ad7b4c03769e958267b0f06a10ebf5b8eff8541a35af275026\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Jan 13 20:19:22.170434 containerd[1486]: time="2025-01-13T20:19:22.170382854Z" level=info msg="CreateContainer within sandbox \"b37458d0ee01047632b4dda6f9f05ed7d4b60c880720fc687226ce93a872d43b\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"96fb8c28e831f24fb0186e624c8ffa8a414e4bda34812deca71d6cba056d8441\""
Jan 13 20:19:22.171808 containerd[1486]: time="2025-01-13T20:19:22.171753079Z" level=info msg="StartContainer for \"96fb8c28e831f24fb0186e624c8ffa8a414e4bda34812deca71d6cba056d8441\""
Jan 13 20:19:22.172802 containerd[1486]: time="2025-01-13T20:19:22.172592574Z" level=info msg="CreateContainer within sandbox \"012fa3c28932b3ad7b4c03769e958267b0f06a10ebf5b8eff8541a35af275026\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"8df9b237a490d554492638c51e7181f822d7bd85f17c998b7c0c9f2f5caa0ba9\""
Jan 13 20:19:22.173689 containerd[1486]: time="2025-01-13T20:19:22.173358548Z" level=info msg="CreateContainer within sandbox \"a1915b3d1afd547cfad5359411287279835fa138c2bd5696feaef969af16803b\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"99ce12e402c594c4026cfe6121e8ddecdd8af71dc8675249f60dcc0a4d064cd5\""
Jan 13 20:19:22.174083 containerd[1486]: time="2025-01-13T20:19:22.174058641Z" level=info msg="StartContainer for \"8df9b237a490d554492638c51e7181f822d7bd85f17c998b7c0c9f2f5caa0ba9\""
Jan 13 20:19:22.175919 containerd[1486]: time="2025-01-13T20:19:22.175873354Z" level=info msg="StartContainer for \"99ce12e402c594c4026cfe6121e8ddecdd8af71dc8675249f60dcc0a4d064cd5\""
Jan 13 20:19:22.213793 systemd[1]: Started cri-containerd-96fb8c28e831f24fb0186e624c8ffa8a414e4bda34812deca71d6cba056d8441.scope - libcontainer container 96fb8c28e831f24fb0186e624c8ffa8a414e4bda34812deca71d6cba056d8441.
Jan 13 20:19:22.224569 systemd[1]: Started cri-containerd-8df9b237a490d554492638c51e7181f822d7bd85f17c998b7c0c9f2f5caa0ba9.scope - libcontainer container 8df9b237a490d554492638c51e7181f822d7bd85f17c998b7c0c9f2f5caa0ba9.
Jan 13 20:19:22.227246 systemd[1]: Started cri-containerd-99ce12e402c594c4026cfe6121e8ddecdd8af71dc8675249f60dcc0a4d064cd5.scope - libcontainer container 99ce12e402c594c4026cfe6121e8ddecdd8af71dc8675249f60dcc0a4d064cd5.
Jan 13 20:19:22.287276 containerd[1486]: time="2025-01-13T20:19:22.285384331Z" level=info msg="StartContainer for \"99ce12e402c594c4026cfe6121e8ddecdd8af71dc8675249f60dcc0a4d064cd5\" returns successfully" Jan 13 20:19:22.287276 containerd[1486]: time="2025-01-13T20:19:22.285414851Z" level=info msg="StartContainer for \"96fb8c28e831f24fb0186e624c8ffa8a414e4bda34812deca71d6cba056d8441\" returns successfully" Jan 13 20:19:22.289176 kubelet[2387]: E0113 20:19:22.289075 2387 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://138.199.153.210:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4186-1-0-7-7ab547e2a5?timeout=10s\": dial tcp 138.199.153.210:6443: connect: connection refused" interval="1.6s" Jan 13 20:19:22.294355 containerd[1486]: time="2025-01-13T20:19:22.293655400Z" level=info msg="StartContainer for \"8df9b237a490d554492638c51e7181f822d7bd85f17c998b7c0c9f2f5caa0ba9\" returns successfully" Jan 13 20:19:22.310707 kubelet[2387]: E0113 20:19:22.310579 2387 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://138.199.153.210:6443/api/v1/namespaces/default/events\": dial tcp 138.199.153.210:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4186-1-0-7-7ab547e2a5.181a59ffa7b15a5d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4186-1-0-7-7ab547e2a5,UID:ci-4186-1-0-7-7ab547e2a5,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4186-1-0-7-7ab547e2a5,},FirstTimestamp:2025-01-13 20:19:20.869698141 +0000 UTC m=+0.582729925,LastTimestamp:2025-01-13 20:19:20.869698141 +0000 UTC m=+0.582729925,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4186-1-0-7-7ab547e2a5,}" Jan 13 20:19:22.478719 kubelet[2387]: I0113 20:19:22.478315 2387 kubelet_node_status.go:72] "Attempting to register node" node="ci-4186-1-0-7-7ab547e2a5" Jan 13 20:19:25.062680 kubelet[2387]: E0113 20:19:25.062626 2387 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4186-1-0-7-7ab547e2a5\" not found" node="ci-4186-1-0-7-7ab547e2a5" Jan 13 20:19:25.175072 kubelet[2387]: I0113 20:19:25.173344 2387 kubelet_node_status.go:75] "Successfully registered node" node="ci-4186-1-0-7-7ab547e2a5" Jan 13 20:19:25.876999 kubelet[2387]: I0113 20:19:25.876429 2387 apiserver.go:52] "Watching apiserver" Jan 13 20:19:25.889277 kubelet[2387]: I0113 20:19:25.888971 2387 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 13 20:19:27.236309 systemd[1]: Reloading requested from client PID 2661 ('systemctl') (unit session-7.scope)... Jan 13 20:19:27.236338 systemd[1]: Reloading... Jan 13 20:19:27.348296 zram_generator::config[2707]: No configuration found. Jan 13 20:19:27.436499 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 13 20:19:27.519673 systemd[1]: Reloading finished in 282 ms. Jan 13 20:19:27.564781 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 20:19:27.579079 systemd[1]: kubelet.service: Deactivated successfully. Jan 13 20:19:27.579702 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
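An aside on the "connect: connection refused" entries above: the kubelet's informers keep re-listing against https://138.199.153.210:6443 until the kube-apiserver container (started at 20:19:22) begins serving, and the lease controller logs its own retry cadence ("will retry ... interval=1.6s"). Below is a minimal sketch of that wait-for-endpoint behavior, assuming only the address shown in the log — an illustration, not kubelet's actual reflector code:

    // probe_apiserver.go — a minimal sketch (not kubelet code) of retrying a
    // TCP dial with capped exponential backoff until an API server endpoint
    // accepts connections, the condition behind the retries logged above.
    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        addr := "138.199.153.210:6443" // endpoint taken from the log above
        backoff := 500 * time.Millisecond
        for {
            conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
            if err == nil {
                conn.Close()
                fmt.Println("apiserver is accepting connections")
                return
            }
            fmt.Printf("dial %s failed (%v); retrying in %s\n", addr, err, backoff)
            time.Sleep(backoff)
            if backoff < 8*time.Second { // cap the backoff growth
                backoff *= 2
            }
        }
    }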
Jan 13 20:19:27.579887 systemd[1]: kubelet.service: Consumed 1.006s CPU time, 116.9M memory peak, 0B memory swap peak. Jan 13 20:19:27.586599 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 20:19:27.710516 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 20:19:27.718777 (kubelet)[2746]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 13 20:19:27.780912 kubelet[2746]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 13 20:19:27.780912 kubelet[2746]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 13 20:19:27.780912 kubelet[2746]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 13 20:19:27.780912 kubelet[2746]: I0113 20:19:27.779020 2746 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 13 20:19:27.790472 kubelet[2746]: I0113 20:19:27.790432 2746 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Jan 13 20:19:27.790472 kubelet[2746]: I0113 20:19:27.790465 2746 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 13 20:19:27.790736 kubelet[2746]: I0113 20:19:27.790718 2746 server.go:929] "Client rotation is on, will bootstrap in background" Jan 13 20:19:27.793199 kubelet[2746]: I0113 20:19:27.793166 2746 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 13 20:19:27.795833 kubelet[2746]: I0113 20:19:27.795518 2746 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 13 20:19:27.799627 kubelet[2746]: E0113 20:19:27.799589 2746 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 13 20:19:27.799873 kubelet[2746]: I0113 20:19:27.799856 2746 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 13 20:19:27.802141 kubelet[2746]: I0113 20:19:27.802109 2746 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 13 20:19:27.802494 kubelet[2746]: I0113 20:19:27.802406 2746 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jan 13 20:19:27.803112 kubelet[2746]: I0113 20:19:27.802659 2746 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 13 20:19:27.803112 kubelet[2746]: I0113 20:19:27.802693 2746 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4186-1-0-7-7ab547e2a5","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 13 20:19:27.803112 kubelet[2746]: I0113 20:19:27.802874 2746 topology_manager.go:138] "Creating topology manager with none policy" Jan 13 20:19:27.803112 kubelet[2746]: I0113 20:19:27.802886 2746 container_manager_linux.go:300] "Creating device plugin manager" Jan 13 20:19:27.803370 kubelet[2746]: I0113 20:19:27.802917 2746 state_mem.go:36] "Initialized new in-memory state store" Jan 13 20:19:27.803370 kubelet[2746]: I0113 20:19:27.803023 2746 kubelet.go:408] "Attempting to sync node with API server" Jan 13 20:19:27.803370 kubelet[2746]: I0113 20:19:27.803033 2746 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 13 20:19:27.803370 kubelet[2746]: I0113 20:19:27.803051 2746 kubelet.go:314] "Adding apiserver pod source" Jan 13 20:19:27.803370 kubelet[2746]: I0113 20:19:27.803060 2746 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 13 20:19:27.811823 kubelet[2746]: I0113 20:19:27.811780 2746 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Jan 13 20:19:27.812521 kubelet[2746]: I0113 20:19:27.812488 2746 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 13 20:19:27.813609 kubelet[2746]: I0113 20:19:27.813083 2746 server.go:1269] "Started kubelet" Jan 13 20:19:27.819519 kubelet[2746]: I0113 20:19:27.819468 2746 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 13 20:19:27.824087 
kubelet[2746]: I0113 20:19:27.824041 2746 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 13 20:19:27.827467 kubelet[2746]: I0113 20:19:27.826797 2746 server.go:460] "Adding debug handlers to kubelet server" Jan 13 20:19:27.829540 kubelet[2746]: I0113 20:19:27.829480 2746 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 13 20:19:27.829862 kubelet[2746]: I0113 20:19:27.829832 2746 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 13 20:19:27.832388 kubelet[2746]: I0113 20:19:27.832188 2746 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 13 20:19:27.837940 kubelet[2746]: I0113 20:19:27.837884 2746 volume_manager.go:289] "Starting Kubelet Volume Manager" Jan 13 20:19:27.838213 kubelet[2746]: E0113 20:19:27.838184 2746 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4186-1-0-7-7ab547e2a5\" not found" Jan 13 20:19:27.840761 kubelet[2746]: I0113 20:19:27.840728 2746 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 13 20:19:27.841894 kubelet[2746]: I0113 20:19:27.840876 2746 reconciler.go:26] "Reconciler: start to sync state" Jan 13 20:19:27.848459 kubelet[2746]: I0113 20:19:27.848423 2746 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 13 20:19:27.850602 kubelet[2746]: I0113 20:19:27.850154 2746 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 13 20:19:27.850602 kubelet[2746]: I0113 20:19:27.850182 2746 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 13 20:19:27.850602 kubelet[2746]: I0113 20:19:27.850199 2746 kubelet.go:2321] "Starting kubelet main sync loop" Jan 13 20:19:27.850958 kubelet[2746]: E0113 20:19:27.850243 2746 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 13 20:19:27.851184 kubelet[2746]: I0113 20:19:27.851154 2746 factory.go:221] Registration of the containerd container factory successfully Jan 13 20:19:27.851184 kubelet[2746]: I0113 20:19:27.851180 2746 factory.go:221] Registration of the systemd container factory successfully Jan 13 20:19:27.851865 kubelet[2746]: I0113 20:19:27.851648 2746 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 13 20:19:27.918798 kubelet[2746]: I0113 20:19:27.918773 2746 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 13 20:19:27.919318 kubelet[2746]: I0113 20:19:27.918963 2746 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 13 20:19:27.919318 kubelet[2746]: I0113 20:19:27.918991 2746 state_mem.go:36] "Initialized new in-memory state store" Jan 13 20:19:27.919318 kubelet[2746]: I0113 20:19:27.919162 2746 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 13 20:19:27.919318 kubelet[2746]: I0113 20:19:27.919174 2746 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 13 20:19:27.919318 kubelet[2746]: I0113 20:19:27.919194 2746 policy_none.go:49] "None policy: Start" Jan 13 20:19:27.920992 kubelet[2746]: I0113 20:19:27.920610 2746 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 13 20:19:27.920992 kubelet[2746]: I0113 
20:19:27.920642 2746 state_mem.go:35] "Initializing new in-memory state store" Jan 13 20:19:27.920992 kubelet[2746]: I0113 20:19:27.920891 2746 state_mem.go:75] "Updated machine memory state" Jan 13 20:19:27.926942 kubelet[2746]: I0113 20:19:27.926910 2746 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 13 20:19:27.927870 kubelet[2746]: I0113 20:19:27.927817 2746 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 13 20:19:27.928373 kubelet[2746]: I0113 20:19:27.928005 2746 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 13 20:19:27.928867 kubelet[2746]: I0113 20:19:27.928851 2746 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 13 20:19:28.031513 kubelet[2746]: I0113 20:19:28.031390 2746 kubelet_node_status.go:72] "Attempting to register node" node="ci-4186-1-0-7-7ab547e2a5" Jan 13 20:19:28.042608 kubelet[2746]: I0113 20:19:28.042518 2746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3fc138c51a58149825e8152920567bff-ca-certs\") pod \"kube-apiserver-ci-4186-1-0-7-7ab547e2a5\" (UID: \"3fc138c51a58149825e8152920567bff\") " pod="kube-system/kube-apiserver-ci-4186-1-0-7-7ab547e2a5" Jan 13 20:19:28.042785 kubelet[2746]: I0113 20:19:28.042632 2746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/87dd1bdd6daa7f558c4bdb5834a96741-k8s-certs\") pod \"kube-controller-manager-ci-4186-1-0-7-7ab547e2a5\" (UID: \"87dd1bdd6daa7f558c4bdb5834a96741\") " pod="kube-system/kube-controller-manager-ci-4186-1-0-7-7ab547e2a5" Jan 13 20:19:28.042785 kubelet[2746]: I0113 20:19:28.042681 2746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/67931252ad9d009af4e61d3821a64e58-kubeconfig\") pod \"kube-scheduler-ci-4186-1-0-7-7ab547e2a5\" (UID: \"67931252ad9d009af4e61d3821a64e58\") " pod="kube-system/kube-scheduler-ci-4186-1-0-7-7ab547e2a5" Jan 13 20:19:28.042785 kubelet[2746]: I0113 20:19:28.042716 2746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3fc138c51a58149825e8152920567bff-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4186-1-0-7-7ab547e2a5\" (UID: \"3fc138c51a58149825e8152920567bff\") " pod="kube-system/kube-apiserver-ci-4186-1-0-7-7ab547e2a5" Jan 13 20:19:28.042785 kubelet[2746]: I0113 20:19:28.042749 2746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/87dd1bdd6daa7f558c4bdb5834a96741-ca-certs\") pod \"kube-controller-manager-ci-4186-1-0-7-7ab547e2a5\" (UID: \"87dd1bdd6daa7f558c4bdb5834a96741\") " pod="kube-system/kube-controller-manager-ci-4186-1-0-7-7ab547e2a5" Jan 13 20:19:28.042785 kubelet[2746]: I0113 20:19:28.042777 2746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/87dd1bdd6daa7f558c4bdb5834a96741-flexvolume-dir\") pod \"kube-controller-manager-ci-4186-1-0-7-7ab547e2a5\" (UID: \"87dd1bdd6daa7f558c4bdb5834a96741\") " pod="kube-system/kube-controller-manager-ci-4186-1-0-7-7ab547e2a5" Jan 13 20:19:28.042942 
kubelet[2746]: I0113 20:19:28.042804 2746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/87dd1bdd6daa7f558c4bdb5834a96741-kubeconfig\") pod \"kube-controller-manager-ci-4186-1-0-7-7ab547e2a5\" (UID: \"87dd1bdd6daa7f558c4bdb5834a96741\") " pod="kube-system/kube-controller-manager-ci-4186-1-0-7-7ab547e2a5" Jan 13 20:19:28.042942 kubelet[2746]: I0113 20:19:28.042834 2746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/87dd1bdd6daa7f558c4bdb5834a96741-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4186-1-0-7-7ab547e2a5\" (UID: \"87dd1bdd6daa7f558c4bdb5834a96741\") " pod="kube-system/kube-controller-manager-ci-4186-1-0-7-7ab547e2a5" Jan 13 20:19:28.042942 kubelet[2746]: I0113 20:19:28.042862 2746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3fc138c51a58149825e8152920567bff-k8s-certs\") pod \"kube-apiserver-ci-4186-1-0-7-7ab547e2a5\" (UID: \"3fc138c51a58149825e8152920567bff\") " pod="kube-system/kube-apiserver-ci-4186-1-0-7-7ab547e2a5" Jan 13 20:19:28.045637 kubelet[2746]: I0113 20:19:28.045593 2746 kubelet_node_status.go:111] "Node was previously registered" node="ci-4186-1-0-7-7ab547e2a5" Jan 13 20:19:28.045712 kubelet[2746]: I0113 20:19:28.045683 2746 kubelet_node_status.go:75] "Successfully registered node" node="ci-4186-1-0-7-7ab547e2a5" Jan 13 20:19:28.235483 sudo[2777]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jan 13 20:19:28.235816 sudo[2777]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Jan 13 20:19:28.735540 sudo[2777]: pam_unix(sudo:session): session closed for user root Jan 13 20:19:28.804506 kubelet[2746]: I0113 20:19:28.804399 2746 apiserver.go:52] "Watching apiserver" Jan 13 20:19:28.841415 kubelet[2746]: I0113 20:19:28.841372 2746 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 13 20:19:28.908986 kubelet[2746]: E0113 20:19:28.908937 2746 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4186-1-0-7-7ab547e2a5\" already exists" pod="kube-system/kube-apiserver-ci-4186-1-0-7-7ab547e2a5" Jan 13 20:19:28.932582 kubelet[2746]: I0113 20:19:28.932392 2746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4186-1-0-7-7ab547e2a5" podStartSLOduration=1.93237391 podStartE2EDuration="1.93237391s" podCreationTimestamp="2025-01-13 20:19:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 20:19:28.930238178 +0000 UTC m=+1.204323284" watchObservedRunningTime="2025-01-13 20:19:28.93237391 +0000 UTC m=+1.206459056" Jan 13 20:19:28.944646 kubelet[2746]: I0113 20:19:28.944588 2746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4186-1-0-7-7ab547e2a5" podStartSLOduration=1.9445592029999998 podStartE2EDuration="1.944559203s" podCreationTimestamp="2025-01-13 20:19:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 20:19:28.943363134 +0000 UTC m=+1.217448200" watchObservedRunningTime="2025-01-13 
20:19:28.944559203 +0000 UTC m=+1.218644309" Jan 13 20:19:28.971332 kubelet[2746]: I0113 20:19:28.971120 2746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4186-1-0-7-7ab547e2a5" podStartSLOduration=1.97110108 podStartE2EDuration="1.97110108s" podCreationTimestamp="2025-01-13 20:19:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 20:19:28.958984989 +0000 UTC m=+1.233070135" watchObservedRunningTime="2025-01-13 20:19:28.97110108 +0000 UTC m=+1.245186146" Jan 13 20:19:31.016119 sudo[1864]: pam_unix(sudo:session): session closed for user root Jan 13 20:19:31.177167 sshd[1863]: Connection closed by 139.178.89.65 port 32770 Jan 13 20:19:31.177055 sshd-session[1861]: pam_unix(sshd:session): session closed for user core Jan 13 20:19:31.182501 systemd[1]: sshd@6-138.199.153.210:22-139.178.89.65:32770.service: Deactivated successfully. Jan 13 20:19:31.185747 systemd[1]: session-7.scope: Deactivated successfully. Jan 13 20:19:31.186036 systemd[1]: session-7.scope: Consumed 8.447s CPU time, 151.0M memory peak, 0B memory swap peak. Jan 13 20:19:31.187735 systemd-logind[1466]: Session 7 logged out. Waiting for processes to exit. Jan 13 20:19:31.189671 systemd-logind[1466]: Removed session 7. Jan 13 20:19:33.159104 kubelet[2746]: I0113 20:19:33.159028 2746 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 13 20:19:33.160098 containerd[1486]: time="2025-01-13T20:19:33.160061400Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 13 20:19:33.160563 kubelet[2746]: I0113 20:19:33.160323 2746 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 13 20:19:34.181789 kubelet[2746]: I0113 20:19:34.181731 2746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/1f4f1b2b-b677-4d4c-89b8-e7a095a1db67-cilium-cgroup\") pod \"cilium-clvcm\" (UID: \"1f4f1b2b-b677-4d4c-89b8-e7a095a1db67\") " pod="kube-system/cilium-clvcm" Jan 13 20:19:34.185824 kubelet[2746]: I0113 20:19:34.182419 2746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t49nj\" (UniqueName: \"kubernetes.io/projected/1f4f1b2b-b677-4d4c-89b8-e7a095a1db67-kube-api-access-t49nj\") pod \"cilium-clvcm\" (UID: \"1f4f1b2b-b677-4d4c-89b8-e7a095a1db67\") " pod="kube-system/cilium-clvcm" Jan 13 20:19:34.185824 kubelet[2746]: I0113 20:19:34.184218 2746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/e349a53e-ee62-4429-a66c-4b189301dacb-kube-proxy\") pod \"kube-proxy-xbnxj\" (UID: \"e349a53e-ee62-4429-a66c-4b189301dacb\") " pod="kube-system/kube-proxy-xbnxj" Jan 13 20:19:34.185824 kubelet[2746]: I0113 20:19:34.184271 2746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e349a53e-ee62-4429-a66c-4b189301dacb-xtables-lock\") pod \"kube-proxy-xbnxj\" (UID: \"e349a53e-ee62-4429-a66c-4b189301dacb\") " pod="kube-system/kube-proxy-xbnxj" Jan 13 20:19:34.185824 kubelet[2746]: I0113 20:19:34.184299 2746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-8ht9w\" (UniqueName: \"kubernetes.io/projected/e349a53e-ee62-4429-a66c-4b189301dacb-kube-api-access-8ht9w\") pod \"kube-proxy-xbnxj\" (UID: \"e349a53e-ee62-4429-a66c-4b189301dacb\") " pod="kube-system/kube-proxy-xbnxj" Jan 13 20:19:34.185824 kubelet[2746]: I0113 20:19:34.184333 2746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/1f4f1b2b-b677-4d4c-89b8-e7a095a1db67-bpf-maps\") pod \"cilium-clvcm\" (UID: \"1f4f1b2b-b677-4d4c-89b8-e7a095a1db67\") " pod="kube-system/cilium-clvcm" Jan 13 20:19:34.186047 kubelet[2746]: I0113 20:19:34.184351 2746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/1f4f1b2b-b677-4d4c-89b8-e7a095a1db67-host-proc-sys-net\") pod \"cilium-clvcm\" (UID: \"1f4f1b2b-b677-4d4c-89b8-e7a095a1db67\") " pod="kube-system/cilium-clvcm" Jan 13 20:19:34.186047 kubelet[2746]: I0113 20:19:34.184368 2746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/1f4f1b2b-b677-4d4c-89b8-e7a095a1db67-host-proc-sys-kernel\") pod \"cilium-clvcm\" (UID: \"1f4f1b2b-b677-4d4c-89b8-e7a095a1db67\") " pod="kube-system/cilium-clvcm" Jan 13 20:19:34.186047 kubelet[2746]: I0113 20:19:34.184397 2746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1f4f1b2b-b677-4d4c-89b8-e7a095a1db67-xtables-lock\") pod \"cilium-clvcm\" (UID: \"1f4f1b2b-b677-4d4c-89b8-e7a095a1db67\") " pod="kube-system/cilium-clvcm" Jan 13 20:19:34.186047 kubelet[2746]: I0113 20:19:34.184418 2746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/1f4f1b2b-b677-4d4c-89b8-e7a095a1db67-hostproc\") pod \"cilium-clvcm\" (UID: \"1f4f1b2b-b677-4d4c-89b8-e7a095a1db67\") " pod="kube-system/cilium-clvcm" Jan 13 20:19:34.186047 kubelet[2746]: I0113 20:19:34.184822 2746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/1f4f1b2b-b677-4d4c-89b8-e7a095a1db67-cni-path\") pod \"cilium-clvcm\" (UID: \"1f4f1b2b-b677-4d4c-89b8-e7a095a1db67\") " pod="kube-system/cilium-clvcm" Jan 13 20:19:34.186047 kubelet[2746]: I0113 20:19:34.185784 2746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1f4f1b2b-b677-4d4c-89b8-e7a095a1db67-lib-modules\") pod \"cilium-clvcm\" (UID: \"1f4f1b2b-b677-4d4c-89b8-e7a095a1db67\") " pod="kube-system/cilium-clvcm" Jan 13 20:19:34.186653 kubelet[2746]: I0113 20:19:34.186232 2746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/1f4f1b2b-b677-4d4c-89b8-e7a095a1db67-clustermesh-secrets\") pod \"cilium-clvcm\" (UID: \"1f4f1b2b-b677-4d4c-89b8-e7a095a1db67\") " pod="kube-system/cilium-clvcm" Jan 13 20:19:34.186653 kubelet[2746]: I0113 20:19:34.186303 2746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/1f4f1b2b-b677-4d4c-89b8-e7a095a1db67-cilium-run\") pod \"cilium-clvcm\" (UID: \"1f4f1b2b-b677-4d4c-89b8-e7a095a1db67\") " 
pod="kube-system/cilium-clvcm" Jan 13 20:19:34.186653 kubelet[2746]: I0113 20:19:34.186326 2746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1f4f1b2b-b677-4d4c-89b8-e7a095a1db67-etc-cni-netd\") pod \"cilium-clvcm\" (UID: \"1f4f1b2b-b677-4d4c-89b8-e7a095a1db67\") " pod="kube-system/cilium-clvcm" Jan 13 20:19:34.186653 kubelet[2746]: I0113 20:19:34.186342 2746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1f4f1b2b-b677-4d4c-89b8-e7a095a1db67-cilium-config-path\") pod \"cilium-clvcm\" (UID: \"1f4f1b2b-b677-4d4c-89b8-e7a095a1db67\") " pod="kube-system/cilium-clvcm" Jan 13 20:19:34.187236 kubelet[2746]: I0113 20:19:34.186904 2746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/1f4f1b2b-b677-4d4c-89b8-e7a095a1db67-hubble-tls\") pod \"cilium-clvcm\" (UID: \"1f4f1b2b-b677-4d4c-89b8-e7a095a1db67\") " pod="kube-system/cilium-clvcm" Jan 13 20:19:34.187236 kubelet[2746]: I0113 20:19:34.187030 2746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e349a53e-ee62-4429-a66c-4b189301dacb-lib-modules\") pod \"kube-proxy-xbnxj\" (UID: \"e349a53e-ee62-4429-a66c-4b189301dacb\") " pod="kube-system/kube-proxy-xbnxj" Jan 13 20:19:34.195196 systemd[1]: Created slice kubepods-burstable-pod1f4f1b2b_b677_4d4c_89b8_e7a095a1db67.slice - libcontainer container kubepods-burstable-pod1f4f1b2b_b677_4d4c_89b8_e7a095a1db67.slice. Jan 13 20:19:34.206206 systemd[1]: Created slice kubepods-besteffort-pode349a53e_ee62_4429_a66c_4b189301dacb.slice - libcontainer container kubepods-besteffort-pode349a53e_ee62_4429_a66c_4b189301dacb.slice. Jan 13 20:19:34.367681 systemd[1]: Created slice kubepods-besteffort-pod35b59716_ba8a_47a0_ae95_e79ffe08df12.slice - libcontainer container kubepods-besteffort-pod35b59716_ba8a_47a0_ae95_e79ffe08df12.slice. 
Jan 13 20:19:34.387874 kubelet[2746]: I0113 20:19:34.387830 2746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/35b59716-ba8a-47a0-ae95-e79ffe08df12-cilium-config-path\") pod \"cilium-operator-5d85765b45-nfvkf\" (UID: \"35b59716-ba8a-47a0-ae95-e79ffe08df12\") " pod="kube-system/cilium-operator-5d85765b45-nfvkf" Jan 13 20:19:34.388052 kubelet[2746]: I0113 20:19:34.388026 2746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bc6n5\" (UniqueName: \"kubernetes.io/projected/35b59716-ba8a-47a0-ae95-e79ffe08df12-kube-api-access-bc6n5\") pod \"cilium-operator-5d85765b45-nfvkf\" (UID: \"35b59716-ba8a-47a0-ae95-e79ffe08df12\") " pod="kube-system/cilium-operator-5d85765b45-nfvkf" Jan 13 20:19:34.507205 containerd[1486]: time="2025-01-13T20:19:34.506512251Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-clvcm,Uid:1f4f1b2b-b677-4d4c-89b8-e7a095a1db67,Namespace:kube-system,Attempt:0,}" Jan 13 20:19:34.518387 containerd[1486]: time="2025-01-13T20:19:34.518344234Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-xbnxj,Uid:e349a53e-ee62-4429-a66c-4b189301dacb,Namespace:kube-system,Attempt:0,}" Jan 13 20:19:34.538485 containerd[1486]: time="2025-01-13T20:19:34.538215849Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:19:34.538485 containerd[1486]: time="2025-01-13T20:19:34.538319772Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:19:34.538485 containerd[1486]: time="2025-01-13T20:19:34.538341453Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:19:34.538485 containerd[1486]: time="2025-01-13T20:19:34.538433096Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:19:34.558075 containerd[1486]: time="2025-01-13T20:19:34.557625852Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:19:34.558075 containerd[1486]: time="2025-01-13T20:19:34.557703054Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:19:34.558302 containerd[1486]: time="2025-01-13T20:19:34.558013263Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:19:34.558914 containerd[1486]: time="2025-01-13T20:19:34.558778325Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:19:34.559478 systemd[1]: Started cri-containerd-2d1b39c5e338e04ba95d74b82bf64615a1b2217cec3f94cae9c3f22f4d59c7c3.scope - libcontainer container 2d1b39c5e338e04ba95d74b82bf64615a1b2217cec3f94cae9c3f22f4d59c7c3. Jan 13 20:19:34.584553 systemd[1]: Started cri-containerd-ea57676748200b0d67e2178549fc8ffe13f522f6a411453b103901780b1d4936.scope - libcontainer container ea57676748200b0d67e2178549fc8ffe13f522f6a411453b103901780b1d4936. 
Jan 13 20:19:34.605949 containerd[1486]: time="2025-01-13T20:19:34.605484038Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-clvcm,Uid:1f4f1b2b-b677-4d4c-89b8-e7a095a1db67,Namespace:kube-system,Attempt:0,} returns sandbox id \"2d1b39c5e338e04ba95d74b82bf64615a1b2217cec3f94cae9c3f22f4d59c7c3\"" Jan 13 20:19:34.611389 containerd[1486]: time="2025-01-13T20:19:34.610297098Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jan 13 20:19:34.635976 containerd[1486]: time="2025-01-13T20:19:34.635783836Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-xbnxj,Uid:e349a53e-ee62-4429-a66c-4b189301dacb,Namespace:kube-system,Attempt:0,} returns sandbox id \"ea57676748200b0d67e2178549fc8ffe13f522f6a411453b103901780b1d4936\"" Jan 13 20:19:34.640270 containerd[1486]: time="2025-01-13T20:19:34.640126002Z" level=info msg="CreateContainer within sandbox \"ea57676748200b0d67e2178549fc8ffe13f522f6a411453b103901780b1d4936\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 13 20:19:34.658948 containerd[1486]: time="2025-01-13T20:19:34.658894066Z" level=info msg="CreateContainer within sandbox \"ea57676748200b0d67e2178549fc8ffe13f522f6a411453b103901780b1d4936\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"4018bc72756842159199ce2d4b265573f89ccea5bab4ca9ed0cbbf8dc2e8cb66\"" Jan 13 20:19:34.661337 containerd[1486]: time="2025-01-13T20:19:34.661244454Z" level=info msg="StartContainer for \"4018bc72756842159199ce2d4b265573f89ccea5bab4ca9ed0cbbf8dc2e8cb66\"" Jan 13 20:19:34.673875 containerd[1486]: time="2025-01-13T20:19:34.673755096Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-nfvkf,Uid:35b59716-ba8a-47a0-ae95-e79ffe08df12,Namespace:kube-system,Attempt:0,}" Jan 13 20:19:34.703690 containerd[1486]: time="2025-01-13T20:19:34.703567920Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:19:34.703690 containerd[1486]: time="2025-01-13T20:19:34.703634842Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:19:34.703690 containerd[1486]: time="2025-01-13T20:19:34.703646882Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:19:34.704459 containerd[1486]: time="2025-01-13T20:19:34.704005893Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:19:34.705886 systemd[1]: Started cri-containerd-4018bc72756842159199ce2d4b265573f89ccea5bab4ca9ed0cbbf8dc2e8cb66.scope - libcontainer container 4018bc72756842159199ce2d4b265573f89ccea5bab4ca9ed0cbbf8dc2e8cb66. Jan 13 20:19:34.730664 systemd[1]: Started cri-containerd-592fa21196c99d5bc6827e728b061c7e57fd6daf6fa0ba75e81ecb4ace27b143.scope - libcontainer container 592fa21196c99d5bc6827e728b061c7e57fd6daf6fa0ba75e81ecb4ace27b143. 
Jan 13 20:19:34.762727 containerd[1486]: time="2025-01-13T20:19:34.762593110Z" level=info msg="StartContainer for \"4018bc72756842159199ce2d4b265573f89ccea5bab4ca9ed0cbbf8dc2e8cb66\" returns successfully" Jan 13 20:19:34.800915 containerd[1486]: time="2025-01-13T20:19:34.799988914Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-nfvkf,Uid:35b59716-ba8a-47a0-ae95-e79ffe08df12,Namespace:kube-system,Attempt:0,} returns sandbox id \"592fa21196c99d5bc6827e728b061c7e57fd6daf6fa0ba75e81ecb4ace27b143\"" Jan 13 20:19:34.937028 kubelet[2746]: I0113 20:19:34.936658 2746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-xbnxj" podStartSLOduration=0.936639713 podStartE2EDuration="936.639713ms" podCreationTimestamp="2025-01-13 20:19:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 20:19:34.936052576 +0000 UTC m=+7.210137722" watchObservedRunningTime="2025-01-13 20:19:34.936639713 +0000 UTC m=+7.210724819" Jan 13 20:19:38.526657 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1316180890.mount: Deactivated successfully. Jan 13 20:19:43.229525 containerd[1486]: time="2025-01-13T20:19:43.229457158Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:19:43.231637 containerd[1486]: time="2025-01-13T20:19:43.231549911Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157650898" Jan 13 20:19:43.232081 containerd[1486]: time="2025-01-13T20:19:43.232036288Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:19:43.236989 containerd[1486]: time="2025-01-13T20:19:43.236744572Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 8.626397272s" Jan 13 20:19:43.236989 containerd[1486]: time="2025-01-13T20:19:43.236799134Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Jan 13 20:19:43.243776 containerd[1486]: time="2025-01-13T20:19:43.240658508Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jan 13 20:19:43.243776 containerd[1486]: time="2025-01-13T20:19:43.242237603Z" level=info msg="CreateContainer within sandbox \"2d1b39c5e338e04ba95d74b82bf64615a1b2217cec3f94cae9c3f22f4d59c7c3\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 13 20:19:43.262574 containerd[1486]: time="2025-01-13T20:19:43.262530430Z" level=info msg="CreateContainer within sandbox \"2d1b39c5e338e04ba95d74b82bf64615a1b2217cec3f94cae9c3f22f4d59c7c3\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id 
\"fd507ec98df8729065c6a134fe9d3ceb133e6f9746fddda69cf846c8c66d2756\"" Jan 13 20:19:43.263561 containerd[1486]: time="2025-01-13T20:19:43.263512224Z" level=info msg="StartContainer for \"fd507ec98df8729065c6a134fe9d3ceb133e6f9746fddda69cf846c8c66d2756\"" Jan 13 20:19:43.296526 systemd[1]: Started cri-containerd-fd507ec98df8729065c6a134fe9d3ceb133e6f9746fddda69cf846c8c66d2756.scope - libcontainer container fd507ec98df8729065c6a134fe9d3ceb133e6f9746fddda69cf846c8c66d2756. Jan 13 20:19:43.326165 containerd[1486]: time="2025-01-13T20:19:43.326103003Z" level=info msg="StartContainer for \"fd507ec98df8729065c6a134fe9d3ceb133e6f9746fddda69cf846c8c66d2756\" returns successfully" Jan 13 20:19:43.340176 systemd[1]: cri-containerd-fd507ec98df8729065c6a134fe9d3ceb133e6f9746fddda69cf846c8c66d2756.scope: Deactivated successfully. Jan 13 20:19:43.508570 containerd[1486]: time="2025-01-13T20:19:43.508375231Z" level=info msg="shim disconnected" id=fd507ec98df8729065c6a134fe9d3ceb133e6f9746fddda69cf846c8c66d2756 namespace=k8s.io Jan 13 20:19:43.508570 containerd[1486]: time="2025-01-13T20:19:43.508436393Z" level=warning msg="cleaning up after shim disconnected" id=fd507ec98df8729065c6a134fe9d3ceb133e6f9746fddda69cf846c8c66d2756 namespace=k8s.io Jan 13 20:19:43.508570 containerd[1486]: time="2025-01-13T20:19:43.508449554Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 20:19:43.948519 containerd[1486]: time="2025-01-13T20:19:43.947309076Z" level=info msg="CreateContainer within sandbox \"2d1b39c5e338e04ba95d74b82bf64615a1b2217cec3f94cae9c3f22f4d59c7c3\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 13 20:19:43.972512 containerd[1486]: time="2025-01-13T20:19:43.972385150Z" level=info msg="CreateContainer within sandbox \"2d1b39c5e338e04ba95d74b82bf64615a1b2217cec3f94cae9c3f22f4d59c7c3\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"bb30da78aeadfd88c8aa76b9555a79818c017d4910f480cde3f85f118a9b4a14\"" Jan 13 20:19:43.973300 containerd[1486]: time="2025-01-13T20:19:43.973111735Z" level=info msg="StartContainer for \"bb30da78aeadfd88c8aa76b9555a79818c017d4910f480cde3f85f118a9b4a14\"" Jan 13 20:19:44.007581 systemd[1]: Started cri-containerd-bb30da78aeadfd88c8aa76b9555a79818c017d4910f480cde3f85f118a9b4a14.scope - libcontainer container bb30da78aeadfd88c8aa76b9555a79818c017d4910f480cde3f85f118a9b4a14. Jan 13 20:19:44.039519 containerd[1486]: time="2025-01-13T20:19:44.039386145Z" level=info msg="StartContainer for \"bb30da78aeadfd88c8aa76b9555a79818c017d4910f480cde3f85f118a9b4a14\" returns successfully" Jan 13 20:19:44.052345 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 13 20:19:44.053362 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 13 20:19:44.053503 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jan 13 20:19:44.059884 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 13 20:19:44.060181 systemd[1]: cri-containerd-bb30da78aeadfd88c8aa76b9555a79818c017d4910f480cde3f85f118a9b4a14.scope: Deactivated successfully. Jan 13 20:19:44.089372 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Jan 13 20:19:44.097456 containerd[1486]: time="2025-01-13T20:19:44.097398077Z" level=info msg="shim disconnected" id=bb30da78aeadfd88c8aa76b9555a79818c017d4910f480cde3f85f118a9b4a14 namespace=k8s.io Jan 13 20:19:44.097456 containerd[1486]: time="2025-01-13T20:19:44.097486520Z" level=warning msg="cleaning up after shim disconnected" id=bb30da78aeadfd88c8aa76b9555a79818c017d4910f480cde3f85f118a9b4a14 namespace=k8s.io Jan 13 20:19:44.097456 containerd[1486]: time="2025-01-13T20:19:44.097498320Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 20:19:44.256771 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fd507ec98df8729065c6a134fe9d3ceb133e6f9746fddda69cf846c8c66d2756-rootfs.mount: Deactivated successfully. Jan 13 20:19:44.969996 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2084439229.mount: Deactivated successfully. Jan 13 20:19:44.972593 containerd[1486]: time="2025-01-13T20:19:44.972459234Z" level=info msg="CreateContainer within sandbox \"2d1b39c5e338e04ba95d74b82bf64615a1b2217cec3f94cae9c3f22f4d59c7c3\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jan 13 20:19:45.009368 containerd[1486]: time="2025-01-13T20:19:45.009034573Z" level=info msg="CreateContainer within sandbox \"2d1b39c5e338e04ba95d74b82bf64615a1b2217cec3f94cae9c3f22f4d59c7c3\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"0f5f06ddb4d45bdfc509db15258307c12f0961beb2412dbe58c48193e766ce96\"" Jan 13 20:19:45.012062 containerd[1486]: time="2025-01-13T20:19:45.012014120Z" level=info msg="StartContainer for \"0f5f06ddb4d45bdfc509db15258307c12f0961beb2412dbe58c48193e766ce96\"" Jan 13 20:19:45.050516 systemd[1]: Started cri-containerd-0f5f06ddb4d45bdfc509db15258307c12f0961beb2412dbe58c48193e766ce96.scope - libcontainer container 0f5f06ddb4d45bdfc509db15258307c12f0961beb2412dbe58c48193e766ce96. Jan 13 20:19:45.104688 containerd[1486]: time="2025-01-13T20:19:45.104466720Z" level=info msg="StartContainer for \"0f5f06ddb4d45bdfc509db15258307c12f0961beb2412dbe58c48193e766ce96\" returns successfully" Jan 13 20:19:45.107604 systemd[1]: cri-containerd-0f5f06ddb4d45bdfc509db15258307c12f0961beb2412dbe58c48193e766ce96.scope: Deactivated successfully. Jan 13 20:19:45.146118 containerd[1486]: time="2025-01-13T20:19:45.145791484Z" level=info msg="shim disconnected" id=0f5f06ddb4d45bdfc509db15258307c12f0961beb2412dbe58c48193e766ce96 namespace=k8s.io Jan 13 20:19:45.146118 containerd[1486]: time="2025-01-13T20:19:45.145862766Z" level=warning msg="cleaning up after shim disconnected" id=0f5f06ddb4d45bdfc509db15258307c12f0961beb2412dbe58c48193e766ce96 namespace=k8s.io Jan 13 20:19:45.146118 containerd[1486]: time="2025-01-13T20:19:45.145873727Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 20:19:45.163125 containerd[1486]: time="2025-01-13T20:19:45.163050544Z" level=warning msg="cleanup warnings time=\"2025-01-13T20:19:45Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jan 13 20:19:45.255601 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1155402873.mount: Deactivated successfully. 
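Each short-lived init container above ends with the same containerd triplet: "shim disconnected", "cleaning up after shim disconnected", "cleaning up dead shim". A minimal sketch of pulling the exited container ids out of such entries, grounded only in the log format shown here:

    // parse_shim.go — a minimal sketch extracting container ids from the
    // "shim disconnected" entries above (format taken from this log).
    package main

    import (
        "fmt"
        "regexp"
    )

    var shimRe = regexp.MustCompile(`msg="shim disconnected" id=([0-9a-f]{64})`)

    func main() {
        line := `time="2025-01-13T20:19:44.097398077Z" level=info msg="shim disconnected" id=bb30da78aeadfd88c8aa76b9555a79818c017d4910f480cde3f85f118a9b4a14 namespace=k8s.io`
        if m := shimRe.FindStringSubmatch(line); m != nil {
            fmt.Println("container exited:", m[1])
        }
    }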
Jan 13 20:19:45.462137 containerd[1486]: time="2025-01-13T20:19:45.462071322Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:19:45.465528 containerd[1486]: time="2025-01-13T20:19:45.465440003Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17138338" Jan 13 20:19:45.467413 containerd[1486]: time="2025-01-13T20:19:45.467320351Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:19:45.469291 containerd[1486]: time="2025-01-13T20:19:45.468563355Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 2.226570001s" Jan 13 20:19:45.469291 containerd[1486]: time="2025-01-13T20:19:45.468606997Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Jan 13 20:19:45.471550 containerd[1486]: time="2025-01-13T20:19:45.471514981Z" level=info msg="CreateContainer within sandbox \"592fa21196c99d5bc6827e728b061c7e57fd6daf6fa0ba75e81ecb4ace27b143\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jan 13 20:19:45.489797 containerd[1486]: time="2025-01-13T20:19:45.489707075Z" level=info msg="CreateContainer within sandbox \"592fa21196c99d5bc6827e728b061c7e57fd6daf6fa0ba75e81ecb4ace27b143\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"b6796f0e79eb5260edb924f15f644ce58ab38ba98cb12df2d480954519115b12\"" Jan 13 20:19:45.493711 containerd[1486]: time="2025-01-13T20:19:45.491911194Z" level=info msg="StartContainer for \"b6796f0e79eb5260edb924f15f644ce58ab38ba98cb12df2d480954519115b12\"" Jan 13 20:19:45.547538 systemd[1]: Started cri-containerd-b6796f0e79eb5260edb924f15f644ce58ab38ba98cb12df2d480954519115b12.scope - libcontainer container b6796f0e79eb5260edb924f15f644ce58ab38ba98cb12df2d480954519115b12. 
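The operator-generic pull above reports both bytes read (17138338) and elapsed time (2.226570001s), which is enough to estimate pull throughput; the same holds for the earlier cilium image (157650898 bytes in 8.626397272s). A small worked example using only those logged numbers:

    // pull_rate.go — a worked example computing pull throughput from the
    // byte counts and durations logged above.
    package main

    import (
        "fmt"
        "time"
    )

    // rate returns MiB/s for a pull of n bytes taking duration s.
    func rate(n float64, s string) float64 {
        d, err := time.ParseDuration(s)
        if err != nil {
            panic(err)
        }
        return n / d.Seconds() / (1024 * 1024)
    }

    func main() {
        fmt.Printf("cilium:           ~%.1f MiB/s\n", rate(157650898, "8.626397272s"))
        fmt.Printf("operator-generic: ~%.1f MiB/s\n", rate(17138338, "2.226570001s"))
    }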
Jan 13 20:19:45.577664 containerd[1486]: time="2025-01-13T20:19:45.577612272Z" level=info msg="StartContainer for \"b6796f0e79eb5260edb924f15f644ce58ab38ba98cb12df2d480954519115b12\" returns successfully" Jan 13 20:19:45.970293 containerd[1486]: time="2025-01-13T20:19:45.968604593Z" level=info msg="CreateContainer within sandbox \"2d1b39c5e338e04ba95d74b82bf64615a1b2217cec3f94cae9c3f22f4d59c7c3\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jan 13 20:19:45.987889 containerd[1486]: time="2025-01-13T20:19:45.987720600Z" level=info msg="CreateContainer within sandbox \"2d1b39c5e338e04ba95d74b82bf64615a1b2217cec3f94cae9c3f22f4d59c7c3\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"6fd35f85d848de2dbbf2f3365f690b0b8af98756c67389b6c0bf5cd20742f1f8\"" Jan 13 20:19:45.989952 containerd[1486]: time="2025-01-13T20:19:45.989917079Z" level=info msg="StartContainer for \"6fd35f85d848de2dbbf2f3365f690b0b8af98756c67389b6c0bf5cd20742f1f8\"" Jan 13 20:19:46.032393 systemd[1]: Started cri-containerd-6fd35f85d848de2dbbf2f3365f690b0b8af98756c67389b6c0bf5cd20742f1f8.scope - libcontainer container 6fd35f85d848de2dbbf2f3365f690b0b8af98756c67389b6c0bf5cd20742f1f8. Jan 13 20:19:46.097529 systemd[1]: cri-containerd-6fd35f85d848de2dbbf2f3365f690b0b8af98756c67389b6c0bf5cd20742f1f8.scope: Deactivated successfully. Jan 13 20:19:46.100755 containerd[1486]: time="2025-01-13T20:19:46.100602225Z" level=info msg="StartContainer for \"6fd35f85d848de2dbbf2f3365f690b0b8af98756c67389b6c0bf5cd20742f1f8\" returns successfully" Jan 13 20:19:46.202698 containerd[1486]: time="2025-01-13T20:19:46.202363173Z" level=info msg="shim disconnected" id=6fd35f85d848de2dbbf2f3365f690b0b8af98756c67389b6c0bf5cd20742f1f8 namespace=k8s.io Jan 13 20:19:46.202698 containerd[1486]: time="2025-01-13T20:19:46.202498217Z" level=warning msg="cleaning up after shim disconnected" id=6fd35f85d848de2dbbf2f3365f690b0b8af98756c67389b6c0bf5cd20742f1f8 namespace=k8s.io Jan 13 20:19:46.202698 containerd[1486]: time="2025-01-13T20:19:46.202511098Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 20:19:46.982535 containerd[1486]: time="2025-01-13T20:19:46.982320548Z" level=info msg="CreateContainer within sandbox \"2d1b39c5e338e04ba95d74b82bf64615a1b2217cec3f94cae9c3f22f4d59c7c3\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jan 13 20:19:47.002169 kubelet[2746]: I0113 20:19:47.000853 2746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-nfvkf" podStartSLOduration=2.333485486 podStartE2EDuration="13.000836822s" podCreationTimestamp="2025-01-13 20:19:34 +0000 UTC" firstStartedPulling="2025-01-13 20:19:34.802498426 +0000 UTC m=+7.076583492" lastFinishedPulling="2025-01-13 20:19:45.469849722 +0000 UTC m=+17.743934828" observedRunningTime="2025-01-13 20:19:46.065826518 +0000 UTC m=+18.339911624" watchObservedRunningTime="2025-01-13 20:19:47.000836822 +0000 UTC m=+19.274921928" Jan 13 20:19:47.006578 containerd[1486]: time="2025-01-13T20:19:47.006518672Z" level=info msg="CreateContainer within sandbox \"2d1b39c5e338e04ba95d74b82bf64615a1b2217cec3f94cae9c3f22f4d59c7c3\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"4dcbbb64cf3acf9347b77691c2900444c7176679f2ffeed0105c85d4ffe22065\"" Jan 13 20:19:47.007585 containerd[1486]: time="2025-01-13T20:19:47.007345822Z" level=info msg="StartContainer for \"4dcbbb64cf3acf9347b77691c2900444c7176679f2ffeed0105c85d4ffe22065\"" Jan 13 20:19:47.042577 
systemd[1]: Started cri-containerd-4dcbbb64cf3acf9347b77691c2900444c7176679f2ffeed0105c85d4ffe22065.scope - libcontainer container 4dcbbb64cf3acf9347b77691c2900444c7176679f2ffeed0105c85d4ffe22065. Jan 13 20:19:47.072897 containerd[1486]: time="2025-01-13T20:19:47.072548150Z" level=info msg="StartContainer for \"4dcbbb64cf3acf9347b77691c2900444c7176679f2ffeed0105c85d4ffe22065\" returns successfully" Jan 13 20:19:47.187577 kubelet[2746]: I0113 20:19:47.187486 2746 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Jan 13 20:19:47.235462 systemd[1]: Created slice kubepods-burstable-pod8a023d57_dc29_4e98_a7a7_ad280d056d82.slice - libcontainer container kubepods-burstable-pod8a023d57_dc29_4e98_a7a7_ad280d056d82.slice. Jan 13 20:19:47.245535 systemd[1]: Created slice kubepods-burstable-podda6431b3_f289_4ec8_bb61_480540858977.slice - libcontainer container kubepods-burstable-podda6431b3_f289_4ec8_bb61_480540858977.slice. Jan 13 20:19:47.285187 kubelet[2746]: I0113 20:19:47.285116 2746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/da6431b3-f289-4ec8-bb61-480540858977-config-volume\") pod \"coredns-6f6b679f8f-lpdth\" (UID: \"da6431b3-f289-4ec8-bb61-480540858977\") " pod="kube-system/coredns-6f6b679f8f-lpdth" Jan 13 20:19:47.285187 kubelet[2746]: I0113 20:19:47.285165 2746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vcpvj\" (UniqueName: \"kubernetes.io/projected/da6431b3-f289-4ec8-bb61-480540858977-kube-api-access-vcpvj\") pod \"coredns-6f6b679f8f-lpdth\" (UID: \"da6431b3-f289-4ec8-bb61-480540858977\") " pod="kube-system/coredns-6f6b679f8f-lpdth" Jan 13 20:19:47.285187 kubelet[2746]: I0113 20:19:47.285193 2746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t2rfs\" (UniqueName: \"kubernetes.io/projected/8a023d57-dc29-4e98-a7a7-ad280d056d82-kube-api-access-t2rfs\") pod \"coredns-6f6b679f8f-5dr2n\" (UID: \"8a023d57-dc29-4e98-a7a7-ad280d056d82\") " pod="kube-system/coredns-6f6b679f8f-5dr2n" Jan 13 20:19:47.285393 kubelet[2746]: I0113 20:19:47.285217 2746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8a023d57-dc29-4e98-a7a7-ad280d056d82-config-volume\") pod \"coredns-6f6b679f8f-5dr2n\" (UID: \"8a023d57-dc29-4e98-a7a7-ad280d056d82\") " pod="kube-system/coredns-6f6b679f8f-5dr2n" Jan 13 20:19:47.545560 containerd[1486]: time="2025-01-13T20:19:47.544720950Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-5dr2n,Uid:8a023d57-dc29-4e98-a7a7-ad280d056d82,Namespace:kube-system,Attempt:0,}" Jan 13 20:19:47.551718 containerd[1486]: time="2025-01-13T20:19:47.551672927Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-lpdth,Uid:da6431b3-f289-4ec8-bb61-480540858977,Namespace:kube-system,Attempt:0,}" Jan 13 20:19:48.007362 kubelet[2746]: I0113 20:19:48.006807 2746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-clvcm" podStartSLOduration=5.377249281 podStartE2EDuration="14.006787339s" podCreationTimestamp="2025-01-13 20:19:34 +0000 UTC" firstStartedPulling="2025-01-13 20:19:34.609785243 +0000 UTC m=+6.883870309" lastFinishedPulling="2025-01-13 20:19:43.239323261 +0000 UTC m=+15.513408367" observedRunningTime="2025-01-13 20:19:48.006566771 +0000 
UTC m=+20.280651877" watchObservedRunningTime="2025-01-13 20:19:48.006787339 +0000 UTC m=+20.280872445" Jan 13 20:19:49.367000 systemd-networkd[1374]: cilium_host: Link UP Jan 13 20:19:49.369531 systemd-networkd[1374]: cilium_net: Link UP Jan 13 20:19:49.369783 systemd-networkd[1374]: cilium_net: Gained carrier Jan 13 20:19:49.369929 systemd-networkd[1374]: cilium_host: Gained carrier Jan 13 20:19:49.487263 systemd-networkd[1374]: cilium_vxlan: Link UP Jan 13 20:19:49.487904 systemd-networkd[1374]: cilium_vxlan: Gained carrier Jan 13 20:19:49.525410 systemd-networkd[1374]: cilium_net: Gained IPv6LL Jan 13 20:19:49.782640 kernel: NET: Registered PF_ALG protocol family Jan 13 20:19:50.061544 systemd-networkd[1374]: cilium_host: Gained IPv6LL Jan 13 20:19:50.534922 systemd-networkd[1374]: lxc_health: Link UP Jan 13 20:19:50.548962 systemd-networkd[1374]: lxc_health: Gained carrier Jan 13 20:19:50.767535 systemd-networkd[1374]: cilium_vxlan: Gained IPv6LL Jan 13 20:19:51.138164 systemd-networkd[1374]: lxc501feb7fd2ee: Link UP Jan 13 20:19:51.142354 kernel: eth0: renamed from tmpc44ab Jan 13 20:19:51.149492 systemd-networkd[1374]: lxc501feb7fd2ee: Gained carrier Jan 13 20:19:51.151409 systemd-networkd[1374]: lxcce8e9bc6c5d5: Link UP Jan 13 20:19:51.169135 kernel: eth0: renamed from tmpa1312 Jan 13 20:19:51.169382 systemd-networkd[1374]: lxcce8e9bc6c5d5: Gained carrier Jan 13 20:19:52.174358 systemd-networkd[1374]: lxc501feb7fd2ee: Gained IPv6LL Jan 13 20:19:52.557437 systemd-networkd[1374]: lxc_health: Gained IPv6LL Jan 13 20:19:53.006042 systemd-networkd[1374]: lxcce8e9bc6c5d5: Gained IPv6LL Jan 13 20:19:55.281695 containerd[1486]: time="2025-01-13T20:19:55.281578900Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:19:55.282491 containerd[1486]: time="2025-01-13T20:19:55.281872991Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:19:55.282491 containerd[1486]: time="2025-01-13T20:19:55.282055679Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:19:55.282709 containerd[1486]: time="2025-01-13T20:19:55.282609901Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:19:55.295410 containerd[1486]: time="2025-01-13T20:19:55.295233292Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:19:55.295712 containerd[1486]: time="2025-01-13T20:19:55.295394138Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:19:55.295712 containerd[1486]: time="2025-01-13T20:19:55.295587106Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:19:55.295930 containerd[1486]: time="2025-01-13T20:19:55.295836876Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:19:55.331510 systemd[1]: Started cri-containerd-a131297843047dd5cd9ed7e9848f316b48b958cb1177ceb3b00485cae3514d56.scope - libcontainer container a131297843047dd5cd9ed7e9848f316b48b958cb1177ceb3b00485cae3514d56. 
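The systemd-networkd entries above walk the whole Cilium datapath coming up: the cilium_host/cilium_net veth pair, the cilium_vxlan overlay device, the lxc_health probe interface, and one lxc* veth per pod, each gaining carrier and then an IPv6 link-local address ("Gained IPv6LL"). A minimal Go sketch, using only the standard library, prints the same view on a live host (the interface names are whatever the host actually has; nothing here is specific to this machine):

    // listlinks.go - print each network interface, its up/down state,
    // and any IPv6 link-local address (the "Gained IPv6LL" events above).
    package main

    import (
        "fmt"
        "net"
    )

    func main() {
        ifaces, err := net.Interfaces()
        if err != nil {
            panic(err)
        }
        for _, ifc := range ifaces {
            state := "down"
            if ifc.Flags&net.FlagUp != 0 {
                state = "up"
            }
            addrs, _ := ifc.Addrs()
            for _, a := range addrs {
                ipnet, ok := a.(*net.IPNet)
                if !ok {
                    continue
                }
                // fe80::/10 addresses on an interface are the IPv6LL ones.
                if ipnet.IP.To4() == nil && ipnet.IP.IsLinkLocalUnicast() {
                    fmt.Printf("%-18s %-4s IPv6LL %s\n", ifc.Name, state, ipnet.IP)
                }
            }
        }
    }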
Jan 13 20:19:55.334329 systemd[1]: Started cri-containerd-c44abdc700663c9f68f4dd2a946108b7300bdda7adeaa8229d515de4efcdac86.scope - libcontainer container c44abdc700663c9f68f4dd2a946108b7300bdda7adeaa8229d515de4efcdac86. Jan 13 20:19:55.389100 containerd[1486]: time="2025-01-13T20:19:55.389055005Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-5dr2n,Uid:8a023d57-dc29-4e98-a7a7-ad280d056d82,Namespace:kube-system,Attempt:0,} returns sandbox id \"c44abdc700663c9f68f4dd2a946108b7300bdda7adeaa8229d515de4efcdac86\"" Jan 13 20:19:55.398911 containerd[1486]: time="2025-01-13T20:19:55.398869722Z" level=info msg="CreateContainer within sandbox \"c44abdc700663c9f68f4dd2a946108b7300bdda7adeaa8229d515de4efcdac86\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 13 20:19:55.405397 containerd[1486]: time="2025-01-13T20:19:55.405345423Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-lpdth,Uid:da6431b3-f289-4ec8-bb61-480540858977,Namespace:kube-system,Attempt:0,} returns sandbox id \"a131297843047dd5cd9ed7e9848f316b48b958cb1177ceb3b00485cae3514d56\"" Jan 13 20:19:55.413207 containerd[1486]: time="2025-01-13T20:19:55.413056655Z" level=info msg="CreateContainer within sandbox \"a131297843047dd5cd9ed7e9848f316b48b958cb1177ceb3b00485cae3514d56\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 13 20:19:55.426112 containerd[1486]: time="2025-01-13T20:19:55.425981578Z" level=info msg="CreateContainer within sandbox \"c44abdc700663c9f68f4dd2a946108b7300bdda7adeaa8229d515de4efcdac86\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"dac9a8ff7222dc85cf5ebb131107015c6b51bbd8b74a5b0ff59091dafb199ee5\"" Jan 13 20:19:55.428400 containerd[1486]: time="2025-01-13T20:19:55.428358994Z" level=info msg="StartContainer for \"dac9a8ff7222dc85cf5ebb131107015c6b51bbd8b74a5b0ff59091dafb199ee5\"" Jan 13 20:19:55.441691 containerd[1486]: time="2025-01-13T20:19:55.441296037Z" level=info msg="CreateContainer within sandbox \"a131297843047dd5cd9ed7e9848f316b48b958cb1177ceb3b00485cae3514d56\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"27a20c4169e9104bd718b8d25496d11db3a1d3ae76cfd2889d2ae463a751341a\"" Jan 13 20:19:55.445698 containerd[1486]: time="2025-01-13T20:19:55.443849900Z" level=info msg="StartContainer for \"27a20c4169e9104bd718b8d25496d11db3a1d3ae76cfd2889d2ae463a751341a\"" Jan 13 20:19:55.486543 systemd[1]: Started cri-containerd-dac9a8ff7222dc85cf5ebb131107015c6b51bbd8b74a5b0ff59091dafb199ee5.scope - libcontainer container dac9a8ff7222dc85cf5ebb131107015c6b51bbd8b74a5b0ff59091dafb199ee5. Jan 13 20:19:55.506906 systemd[1]: Started cri-containerd-27a20c4169e9104bd718b8d25496d11db3a1d3ae76cfd2889d2ae463a751341a.scope - libcontainer container 27a20c4169e9104bd718b8d25496d11db3a1d3ae76cfd2889d2ae463a751341a. 
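The pod_startup_latency_tracker entries above encode a small piece of arithmetic: for cilium-operator-5d85765b45-nfvkf, podStartSLOduration (2.333485486s) is podStartE2EDuration (13.000836822s) minus the image-pull window, taken from the monotonic m= offsets of firstStartedPulling and lastFinishedPulling (17.743934828s − 7.076583492s = 10.667351336s). A short sketch reproduces the subtraction from the logged values exactly:

    // slo.go - reproduce the pod_startup_latency_tracker arithmetic above
    // for cilium-operator-5d85765b45-nfvkf, from the logged m= offsets.
    package main

    import (
        "fmt"
        "time"
    )

    func dur(s string) time.Duration {
        d, err := time.ParseDuration(s)
        if err != nil {
            panic(err)
        }
        return d
    }

    func main() {
        e2e := dur("13.000836822s")      // podStartE2EDuration
        firstPull := dur("7.076583492s") // firstStartedPulling, m=+7.076583492
        lastPull := dur("17.743934828s") // lastFinishedPulling, m=+17.743934828

        slo := e2e - (lastPull - firstPull) // E2E minus the image-pull window
        fmt.Println(slo)                    // 2.333485486s, as logged
    }

The two coredns pods above show the degenerate case: they pull no image (both pulling timestamps are the zero value 0001-01-01), so podStartSLOduration equals podStartE2EDuration.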
Jan 13 20:19:55.535757 containerd[1486]: time="2025-01-13T20:19:55.535541647Z" level=info msg="StartContainer for \"dac9a8ff7222dc85cf5ebb131107015c6b51bbd8b74a5b0ff59091dafb199ee5\" returns successfully" Jan 13 20:19:55.557090 containerd[1486]: time="2025-01-13T20:19:55.557039596Z" level=info msg="StartContainer for \"27a20c4169e9104bd718b8d25496d11db3a1d3ae76cfd2889d2ae463a751341a\" returns successfully" Jan 13 20:19:56.029717 kubelet[2746]: I0113 20:19:56.028921 2746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-lpdth" podStartSLOduration=22.028901764 podStartE2EDuration="22.028901764s" podCreationTimestamp="2025-01-13 20:19:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 20:19:56.026157452 +0000 UTC m=+28.300242558" watchObservedRunningTime="2025-01-13 20:19:56.028901764 +0000 UTC m=+28.302986870" Jan 13 20:19:56.076354 kubelet[2746]: I0113 20:19:56.075732 2746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-5dr2n" podStartSLOduration=22.075712994 podStartE2EDuration="22.075712994s" podCreationTimestamp="2025-01-13 20:19:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 20:19:56.047882819 +0000 UTC m=+28.321967925" watchObservedRunningTime="2025-01-13 20:19:56.075712994 +0000 UTC m=+28.349798100" Jan 13 20:21:39.946881 update_engine[1467]: I20250113 20:21:39.946341 1467 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Jan 13 20:21:39.946881 update_engine[1467]: I20250113 20:21:39.946417 1467 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Jan 13 20:21:39.946881 update_engine[1467]: I20250113 20:21:39.946789 1467 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Jan 13 20:21:39.947887 update_engine[1467]: I20250113 20:21:39.947829 1467 omaha_request_params.cc:62] Current group set to beta Jan 13 20:21:39.948421 update_engine[1467]: I20250113 20:21:39.948200 1467 update_attempter.cc:499] Already updated boot flags. Skipping. Jan 13 20:21:39.948421 update_engine[1467]: I20250113 20:21:39.948240 1467 update_attempter.cc:643] Scheduling an action processor start. 
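The prefs.cc:52 lines above are update_engine noting absent keys in its preference store, which is one small file per key under /var/lib/update_engine/prefs. A hedged sketch of that one-file-per-key pattern (the directory path and key names are from the log; getPref is an illustrative helper, not update_engine's actual code):

    // prefs.go - read "one file per key" preferences in the style the
    // prefs.cc lines above imply: a missing file means "not present".
    package main

    import (
        "errors"
        "fmt"
        "io/fs"
        "os"
        "path/filepath"
    )

    const prefsDir = "/var/lib/update_engine/prefs" // path taken from the log

    func getPref(key string) (string, bool, error) {
        b, err := os.ReadFile(filepath.Join(prefsDir, key))
        if errors.Is(err, fs.ErrNotExist) {
            return "", false, nil // "<key> not present in .../prefs"
        }
        if err != nil {
            return "", false, err
        }
        return string(b), true, nil
    }

    func main() {
        for _, key := range []string{"aleph-version", "previous-version"} {
            v, ok, err := getPref(key)
            if err != nil {
                panic(err)
            }
            if !ok {
                fmt.Printf("%s not present in %s\n", key, prefsDir)
                continue
            }
            fmt.Printf("%s = %q\n", key, v)
        }
    }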
Jan 13 20:21:39.948421 update_engine[1467]: I20250113 20:21:39.948297 1467 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jan 13 20:21:39.948421 update_engine[1467]: I20250113 20:21:39.948348 1467 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Jan 13 20:21:39.948570 locksmithd[1505]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Jan 13 20:21:39.948875 update_engine[1467]: I20250113 20:21:39.948446 1467 omaha_request_action.cc:271] Posting an Omaha request to disabled Jan 13 20:21:39.948875 update_engine[1467]: I20250113 20:21:39.948463 1467 omaha_request_action.cc:272] Request: Jan 13 20:21:39.948875 update_engine[1467]: Jan 13 20:21:39.948875 update_engine[1467]: Jan 13 20:21:39.948875 update_engine[1467]: Jan 13 20:21:39.948875 update_engine[1467]: Jan 13 20:21:39.948875 update_engine[1467]: Jan 13 20:21:39.948875 update_engine[1467]: Jan 13 20:21:39.948875 update_engine[1467]: Jan 13 20:21:39.948875 update_engine[1467]: Jan 13 20:21:39.948875 update_engine[1467]: I20250113 20:21:39.948475 1467 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 13 20:21:39.950544 update_engine[1467]: I20250113 20:21:39.950487 1467 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 13 20:21:39.950955 update_engine[1467]: I20250113 20:21:39.950907 1467 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jan 13 20:21:39.953010 update_engine[1467]: E20250113 20:21:39.952956 1467 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 13 20:21:39.953115 update_engine[1467]: I20250113 20:21:39.953037 1467 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Jan 13 20:21:49.859299 update_engine[1467]: I20250113 20:21:49.857384 1467 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 13 20:21:49.859299 update_engine[1467]: I20250113 20:21:49.857681 1467 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 13 20:21:49.859299 update_engine[1467]: I20250113 20:21:49.857944 1467 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jan 13 20:21:49.859299 update_engine[1467]: E20250113 20:21:49.858419 1467 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 13 20:21:49.859299 update_engine[1467]: I20250113 20:21:49.858472 1467 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Jan 13 20:21:59.856364 update_engine[1467]: I20250113 20:21:59.856292 1467 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 13 20:21:59.858132 update_engine[1467]: I20250113 20:21:59.856532 1467 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 13 20:21:59.858132 update_engine[1467]: I20250113 20:21:59.856782 1467 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jan 13 20:21:59.858132 update_engine[1467]: E20250113 20:21:59.857231 1467 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 13 20:21:59.858132 update_engine[1467]: I20250113 20:21:59.857298 1467 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Jan 13 20:22:04.538664 systemd[1]: Started sshd@7-138.199.153.210:22-5.101.0.66:60000.service - OpenSSH per-connection server daemon (5.101.0.66:60000). Jan 13 20:22:04.680860 sshd[4149]: Connection closed by 5.101.0.66 port 60000 Jan 13 20:22:04.681959 systemd[1]: sshd@7-138.199.153.210:22-5.101.0.66:60000.service: Deactivated successfully. 
Jan 13 20:22:09.856385 update_engine[1467]: I20250113 20:22:09.856307 1467 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 13 20:22:09.856875 update_engine[1467]: I20250113 20:22:09.856565 1467 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 13 20:22:09.856875 update_engine[1467]: I20250113 20:22:09.856848 1467 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jan 13 20:22:09.857427 update_engine[1467]: E20250113 20:22:09.857379 1467 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 13 20:22:09.857535 update_engine[1467]: I20250113 20:22:09.857514 1467 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Jan 13 20:22:09.857535 update_engine[1467]: I20250113 20:22:09.857528 1467 omaha_request_action.cc:617] Omaha request response: Jan 13 20:22:09.857647 update_engine[1467]: E20250113 20:22:09.857625 1467 omaha_request_action.cc:636] Omaha request network transfer failed. Jan 13 20:22:09.857732 update_engine[1467]: I20250113 20:22:09.857652 1467 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Jan 13 20:22:09.857732 update_engine[1467]: I20250113 20:22:09.857712 1467 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jan 13 20:22:09.857732 update_engine[1467]: I20250113 20:22:09.857720 1467 update_attempter.cc:306] Processing Done. Jan 13 20:22:09.857820 update_engine[1467]: E20250113 20:22:09.857738 1467 update_attempter.cc:619] Update failed. Jan 13 20:22:09.857820 update_engine[1467]: I20250113 20:22:09.857744 1467 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Jan 13 20:22:09.857820 update_engine[1467]: I20250113 20:22:09.857749 1467 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Jan 13 20:22:09.857820 update_engine[1467]: I20250113 20:22:09.857756 1467 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. Jan 13 20:22:09.857941 update_engine[1467]: I20250113 20:22:09.857831 1467 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jan 13 20:22:09.857941 update_engine[1467]: I20250113 20:22:09.857857 1467 omaha_request_action.cc:271] Posting an Omaha request to disabled Jan 13 20:22:09.857941 update_engine[1467]: I20250113 20:22:09.857883 1467 omaha_request_action.cc:272] Request: Jan 13 20:22:09.857941 update_engine[1467]: Jan 13 20:22:09.857941 update_engine[1467]: Jan 13 20:22:09.857941 update_engine[1467]: Jan 13 20:22:09.857941 update_engine[1467]: Jan 13 20:22:09.857941 update_engine[1467]: Jan 13 20:22:09.857941 update_engine[1467]: Jan 13 20:22:09.857941 update_engine[1467]: I20250113 20:22:09.857892 1467 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 13 20:22:09.858181 update_engine[1467]: I20250113 20:22:09.858085 1467 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 13 20:22:09.858523 update_engine[1467]: I20250113 20:22:09.858322 1467 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Jan 13 20:22:09.858670 locksmithd[1505]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Jan 13 20:22:09.859103 update_engine[1467]: E20250113 20:22:09.858651 1467 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 13 20:22:09.859103 update_engine[1467]: I20250113 20:22:09.858722 1467 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Jan 13 20:22:09.859103 update_engine[1467]: I20250113 20:22:09.858731 1467 omaha_request_action.cc:617] Omaha request response: Jan 13 20:22:09.859103 update_engine[1467]: I20250113 20:22:09.858738 1467 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jan 13 20:22:09.859103 update_engine[1467]: I20250113 20:22:09.858744 1467 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jan 13 20:22:09.859103 update_engine[1467]: I20250113 20:22:09.858752 1467 update_attempter.cc:306] Processing Done. Jan 13 20:22:09.859103 update_engine[1467]: I20250113 20:22:09.858757 1467 update_attempter.cc:310] Error event sent. Jan 13 20:22:09.859103 update_engine[1467]: I20250113 20:22:09.858767 1467 update_check_scheduler.cc:74] Next update check in 44m10s Jan 13 20:22:09.859504 locksmithd[1505]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Jan 13 20:22:23.495453 systemd[1]: Started sshd@8-138.199.153.210:22-5.101.0.66:53220.service - OpenSSH per-connection server daemon (5.101.0.66:53220). Jan 13 20:22:23.573128 sshd[4155]: Connection closed by 5.101.0.66 port 53220 Jan 13 20:22:23.573804 systemd[1]: sshd@8-138.199.153.210:22-5.101.0.66:53220.service: Deactivated successfully. Jan 13 20:22:23.649797 systemd[1]: Started sshd@9-138.199.153.210:22-5.101.0.66:38291.service - OpenSSH per-connection server daemon (5.101.0.66:38291). Jan 13 20:22:23.849744 sshd[4159]: Connection closed by 5.101.0.66 port 38291 [preauth] Jan 13 20:22:23.850620 systemd[1]: sshd@9-138.199.153.210:22-5.101.0.66:38291.service: Deactivated successfully. Jan 13 20:24:17.121069 systemd[1]: Started sshd@10-138.199.153.210:22-139.178.89.65:37026.service - OpenSSH per-connection server daemon (139.178.89.65:37026). Jan 13 20:24:18.111170 sshd[4180]: Accepted publickey for core from 139.178.89.65 port 37026 ssh2: RSA SHA256:mP9Np05W8ayjbouGSSYjPkGP0Fk3DK/yb5iC6Sb3lHc Jan 13 20:24:18.114194 sshd-session[4180]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:24:18.120055 systemd-logind[1466]: New session 8 of user core. Jan 13 20:24:18.129509 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 13 20:24:18.908739 sshd[4182]: Connection closed by 139.178.89.65 port 37026 Jan 13 20:24:18.909866 sshd-session[4180]: pam_unix(sshd:session): session closed for user core Jan 13 20:24:18.916337 systemd[1]: sshd@10-138.199.153.210:22-139.178.89.65:37026.service: Deactivated successfully. Jan 13 20:24:18.920405 systemd[1]: session-8.scope: Deactivated successfully. Jan 13 20:24:18.921364 systemd-logind[1466]: Session 8 logged out. Waiting for processes to exit. Jan 13 20:24:18.922594 systemd-logind[1466]: Removed session 8. Jan 13 20:24:24.080543 systemd[1]: Started sshd@11-138.199.153.210:22-139.178.89.65:35252.service - OpenSSH per-connection server daemon (139.178.89.65:35252). 
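The update_engine exchange above is one complete failed check cycle: the Omaha request is posted to the literal host "disabled", so every libcurl transfer dies at DNS resolution ("Could not resolve host: disabled"); the fetcher retries at roughly ten-second intervals, converts the failure to error code 2000 (kActionCodeOmahaErrorInHTTPResponse), posts an error event to the same unresolvable host, and schedules the next check in 44m10s. A generic Go sketch of that retry-then-report shape (the host and retry count mirror the log; the HTTP call and intervals are illustrative, not update_engine's implementation):

    // omaha_retry.go - fixed-interval attempts against a host that never
    // resolves, then give up and schedule the next check. Illustrative only.
    package main

    import (
        "fmt"
        "net/http"
        "time"
    )

    func fetchWithRetries(url string, attempts int, interval time.Duration) error {
        var err error
        for i := 1; i <= attempts; i++ {
            var resp *http.Response
            resp, err = http.Get(url) // fails: "Could not resolve host: disabled"
            if err == nil {
                resp.Body.Close()
                return nil
            }
            fmt.Printf("No HTTP response, retry %d\n", i)
            time.Sleep(interval)
        }
        return err
    }

    func main() {
        // "disabled" is the literal, unresolvable host from the log.
        if err := fetchWithRetries("https://disabled/", 3, 10*time.Second); err != nil {
            fmt.Println("Omaha request network transfer failed:", err)
            fmt.Println("Next update check in", 44*time.Minute+10*time.Second)
        }
    }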
Jan 13 20:24:25.079870 sshd[4193]: Accepted publickey for core from 139.178.89.65 port 35252 ssh2: RSA SHA256:mP9Np05W8ayjbouGSSYjPkGP0Fk3DK/yb5iC6Sb3lHc Jan 13 20:24:25.081119 sshd-session[4193]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:24:25.091411 systemd-logind[1466]: New session 9 of user core. Jan 13 20:24:25.095283 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 13 20:24:25.837488 sshd[4195]: Connection closed by 139.178.89.65 port 35252 Jan 13 20:24:25.838195 sshd-session[4193]: pam_unix(sshd:session): session closed for user core Jan 13 20:24:25.843567 systemd-logind[1466]: Session 9 logged out. Waiting for processes to exit. Jan 13 20:24:25.844580 systemd[1]: sshd@11-138.199.153.210:22-139.178.89.65:35252.service: Deactivated successfully. Jan 13 20:24:25.846984 systemd[1]: session-9.scope: Deactivated successfully. Jan 13 20:24:25.848416 systemd-logind[1466]: Removed session 9. Jan 13 20:24:31.010072 systemd[1]: Started sshd@12-138.199.153.210:22-139.178.89.65:35254.service - OpenSSH per-connection server daemon (139.178.89.65:35254). Jan 13 20:24:32.012628 sshd[4209]: Accepted publickey for core from 139.178.89.65 port 35254 ssh2: RSA SHA256:mP9Np05W8ayjbouGSSYjPkGP0Fk3DK/yb5iC6Sb3lHc Jan 13 20:24:32.014535 sshd-session[4209]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:24:32.019300 systemd-logind[1466]: New session 10 of user core. Jan 13 20:24:32.033179 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 13 20:24:32.780999 sshd[4211]: Connection closed by 139.178.89.65 port 35254 Jan 13 20:24:32.781952 sshd-session[4209]: pam_unix(sshd:session): session closed for user core Jan 13 20:24:32.785602 systemd[1]: sshd@12-138.199.153.210:22-139.178.89.65:35254.service: Deactivated successfully. Jan 13 20:24:32.787736 systemd[1]: session-10.scope: Deactivated successfully. Jan 13 20:24:32.789433 systemd-logind[1466]: Session 10 logged out. Waiting for processes to exit. Jan 13 20:24:32.790913 systemd-logind[1466]: Removed session 10. Jan 13 20:24:37.957586 systemd[1]: Started sshd@13-138.199.153.210:22-139.178.89.65:34208.service - OpenSSH per-connection server daemon (139.178.89.65:34208). Jan 13 20:24:38.927995 sshd[4224]: Accepted publickey for core from 139.178.89.65 port 34208 ssh2: RSA SHA256:mP9Np05W8ayjbouGSSYjPkGP0Fk3DK/yb5iC6Sb3lHc Jan 13 20:24:38.930204 sshd-session[4224]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:24:38.935925 systemd-logind[1466]: New session 11 of user core. Jan 13 20:24:38.940698 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 13 20:24:39.696360 sshd[4226]: Connection closed by 139.178.89.65 port 34208 Jan 13 20:24:39.696022 sshd-session[4224]: pam_unix(sshd:session): session closed for user core Jan 13 20:24:39.701096 systemd[1]: sshd@13-138.199.153.210:22-139.178.89.65:34208.service: Deactivated successfully. Jan 13 20:24:39.703535 systemd[1]: session-11.scope: Deactivated successfully. Jan 13 20:24:39.706001 systemd-logind[1466]: Session 11 logged out. Waiting for processes to exit. Jan 13 20:24:39.708023 systemd-logind[1466]: Removed session 11. Jan 13 20:24:39.874835 systemd[1]: Started sshd@14-138.199.153.210:22-139.178.89.65:34210.service - OpenSSH per-connection server daemon (139.178.89.65:34210). 
Jan 13 20:24:40.862596 sshd[4238]: Accepted publickey for core from 139.178.89.65 port 34210 ssh2: RSA SHA256:mP9Np05W8ayjbouGSSYjPkGP0Fk3DK/yb5iC6Sb3lHc Jan 13 20:24:40.864739 sshd-session[4238]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:24:40.869905 systemd-logind[1466]: New session 12 of user core. Jan 13 20:24:40.875591 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 13 20:24:41.665452 sshd[4240]: Connection closed by 139.178.89.65 port 34210 Jan 13 20:24:41.666079 sshd-session[4238]: pam_unix(sshd:session): session closed for user core Jan 13 20:24:41.670199 systemd[1]: sshd@14-138.199.153.210:22-139.178.89.65:34210.service: Deactivated successfully. Jan 13 20:24:41.673925 systemd[1]: session-12.scope: Deactivated successfully. Jan 13 20:24:41.676920 systemd-logind[1466]: Session 12 logged out. Waiting for processes to exit. Jan 13 20:24:41.678850 systemd-logind[1466]: Removed session 12. Jan 13 20:24:41.842786 systemd[1]: Started sshd@15-138.199.153.210:22-139.178.89.65:59058.service - OpenSSH per-connection server daemon (139.178.89.65:59058). Jan 13 20:24:42.829353 sshd[4249]: Accepted publickey for core from 139.178.89.65 port 59058 ssh2: RSA SHA256:mP9Np05W8ayjbouGSSYjPkGP0Fk3DK/yb5iC6Sb3lHc Jan 13 20:24:42.832176 sshd-session[4249]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:24:42.837889 systemd-logind[1466]: New session 13 of user core. Jan 13 20:24:42.847665 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 13 20:24:43.589598 sshd[4251]: Connection closed by 139.178.89.65 port 59058 Jan 13 20:24:43.589096 sshd-session[4249]: pam_unix(sshd:session): session closed for user core Jan 13 20:24:43.592661 systemd[1]: sshd@15-138.199.153.210:22-139.178.89.65:59058.service: Deactivated successfully. Jan 13 20:24:43.595085 systemd[1]: session-13.scope: Deactivated successfully. Jan 13 20:24:43.598010 systemd-logind[1466]: Session 13 logged out. Waiting for processes to exit. Jan 13 20:24:43.600059 systemd-logind[1466]: Removed session 13. Jan 13 20:24:48.764606 systemd[1]: Started sshd@16-138.199.153.210:22-139.178.89.65:59060.service - OpenSSH per-connection server daemon (139.178.89.65:59060). Jan 13 20:24:49.740308 sshd[4262]: Accepted publickey for core from 139.178.89.65 port 59060 ssh2: RSA SHA256:mP9Np05W8ayjbouGSSYjPkGP0Fk3DK/yb5iC6Sb3lHc Jan 13 20:24:49.741599 sshd-session[4262]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:24:49.748333 systemd-logind[1466]: New session 14 of user core. Jan 13 20:24:49.752499 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 13 20:24:50.493422 sshd[4264]: Connection closed by 139.178.89.65 port 59060 Jan 13 20:24:50.494588 sshd-session[4262]: pam_unix(sshd:session): session closed for user core Jan 13 20:24:50.500758 systemd[1]: sshd@16-138.199.153.210:22-139.178.89.65:59060.service: Deactivated successfully. Jan 13 20:24:50.503521 systemd[1]: session-14.scope: Deactivated successfully. Jan 13 20:24:50.505227 systemd-logind[1466]: Session 14 logged out. Waiting for processes to exit. Jan 13 20:24:50.506612 systemd-logind[1466]: Removed session 14. Jan 13 20:24:50.669135 systemd[1]: Started sshd@17-138.199.153.210:22-139.178.89.65:59068.service - OpenSSH per-connection server daemon (139.178.89.65:59068). 
Jan 13 20:24:51.652389 sshd[4274]: Accepted publickey for core from 139.178.89.65 port 59068 ssh2: RSA SHA256:mP9Np05W8ayjbouGSSYjPkGP0Fk3DK/yb5iC6Sb3lHc Jan 13 20:24:51.654886 sshd-session[4274]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:24:51.662224 systemd-logind[1466]: New session 15 of user core. Jan 13 20:24:51.668537 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 13 20:24:52.476892 sshd[4277]: Connection closed by 139.178.89.65 port 59068 Jan 13 20:24:52.475898 sshd-session[4274]: pam_unix(sshd:session): session closed for user core Jan 13 20:24:52.481546 systemd-logind[1466]: Session 15 logged out. Waiting for processes to exit. Jan 13 20:24:52.482219 systemd[1]: sshd@17-138.199.153.210:22-139.178.89.65:59068.service: Deactivated successfully. Jan 13 20:24:52.485649 systemd[1]: session-15.scope: Deactivated successfully. Jan 13 20:24:52.487194 systemd-logind[1466]: Removed session 15. Jan 13 20:24:52.651172 systemd[1]: Started sshd@18-138.199.153.210:22-139.178.89.65:37338.service - OpenSSH per-connection server daemon (139.178.89.65:37338). Jan 13 20:24:53.661182 sshd[4285]: Accepted publickey for core from 139.178.89.65 port 37338 ssh2: RSA SHA256:mP9Np05W8ayjbouGSSYjPkGP0Fk3DK/yb5iC6Sb3lHc Jan 13 20:24:53.661747 sshd-session[4285]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:24:53.668332 systemd-logind[1466]: New session 16 of user core. Jan 13 20:24:53.677638 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 13 20:24:56.071453 sshd[4287]: Connection closed by 139.178.89.65 port 37338 Jan 13 20:24:56.072730 sshd-session[4285]: pam_unix(sshd:session): session closed for user core Jan 13 20:24:56.076794 systemd[1]: sshd@18-138.199.153.210:22-139.178.89.65:37338.service: Deactivated successfully. Jan 13 20:24:56.079575 systemd[1]: session-16.scope: Deactivated successfully. Jan 13 20:24:56.082959 systemd-logind[1466]: Session 16 logged out. Waiting for processes to exit. Jan 13 20:24:56.084149 systemd-logind[1466]: Removed session 16. Jan 13 20:24:56.246637 systemd[1]: Started sshd@19-138.199.153.210:22-139.178.89.65:37354.service - OpenSSH per-connection server daemon (139.178.89.65:37354). Jan 13 20:24:57.237623 sshd[4304]: Accepted publickey for core from 139.178.89.65 port 37354 ssh2: RSA SHA256:mP9Np05W8ayjbouGSSYjPkGP0Fk3DK/yb5iC6Sb3lHc Jan 13 20:24:57.239393 sshd-session[4304]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:24:57.246597 systemd-logind[1466]: New session 17 of user core. Jan 13 20:24:57.251483 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 13 20:24:58.142726 sshd[4306]: Connection closed by 139.178.89.65 port 37354 Jan 13 20:24:58.143602 sshd-session[4304]: pam_unix(sshd:session): session closed for user core Jan 13 20:24:58.148563 systemd-logind[1466]: Session 17 logged out. Waiting for processes to exit. Jan 13 20:24:58.150033 systemd[1]: sshd@19-138.199.153.210:22-139.178.89.65:37354.service: Deactivated successfully. Jan 13 20:24:58.152705 systemd[1]: session-17.scope: Deactivated successfully. Jan 13 20:24:58.155822 systemd-logind[1466]: Removed session 17. Jan 13 20:24:58.325766 systemd[1]: Started sshd@20-138.199.153.210:22-139.178.89.65:37356.service - OpenSSH per-connection server daemon (139.178.89.65:37356). 
Jan 13 20:24:59.315900 sshd[4315]: Accepted publickey for core from 139.178.89.65 port 37356 ssh2: RSA SHA256:mP9Np05W8ayjbouGSSYjPkGP0Fk3DK/yb5iC6Sb3lHc Jan 13 20:24:59.319071 sshd-session[4315]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:24:59.324822 systemd-logind[1466]: New session 18 of user core. Jan 13 20:24:59.334714 systemd[1]: Started session-18.scope - Session 18 of User core. Jan 13 20:25:00.079435 sshd[4317]: Connection closed by 139.178.89.65 port 37356 Jan 13 20:25:00.080221 sshd-session[4315]: pam_unix(sshd:session): session closed for user core Jan 13 20:25:00.085069 systemd[1]: sshd@20-138.199.153.210:22-139.178.89.65:37356.service: Deactivated successfully. Jan 13 20:25:00.089744 systemd[1]: session-18.scope: Deactivated successfully. Jan 13 20:25:00.090785 systemd-logind[1466]: Session 18 logged out. Waiting for processes to exit. Jan 13 20:25:00.091973 systemd-logind[1466]: Removed session 18. Jan 13 20:25:05.258649 systemd[1]: Started sshd@21-138.199.153.210:22-139.178.89.65:49372.service - OpenSSH per-connection server daemon (139.178.89.65:49372). Jan 13 20:25:06.252897 sshd[4333]: Accepted publickey for core from 139.178.89.65 port 49372 ssh2: RSA SHA256:mP9Np05W8ayjbouGSSYjPkGP0Fk3DK/yb5iC6Sb3lHc Jan 13 20:25:06.255358 sshd-session[4333]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:25:06.260497 systemd-logind[1466]: New session 19 of user core. Jan 13 20:25:06.268995 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 13 20:25:06.998801 sshd[4335]: Connection closed by 139.178.89.65 port 49372 Jan 13 20:25:06.999677 sshd-session[4333]: pam_unix(sshd:session): session closed for user core Jan 13 20:25:07.004049 systemd[1]: sshd@21-138.199.153.210:22-139.178.89.65:49372.service: Deactivated successfully. Jan 13 20:25:07.007565 systemd[1]: session-19.scope: Deactivated successfully. Jan 13 20:25:07.016656 systemd-logind[1466]: Session 19 logged out. Waiting for processes to exit. Jan 13 20:25:07.019508 systemd-logind[1466]: Removed session 19. Jan 13 20:25:12.165549 systemd[1]: Started sshd@22-138.199.153.210:22-139.178.89.65:51044.service - OpenSSH per-connection server daemon (139.178.89.65:51044). Jan 13 20:25:13.146176 sshd[4346]: Accepted publickey for core from 139.178.89.65 port 51044 ssh2: RSA SHA256:mP9Np05W8ayjbouGSSYjPkGP0Fk3DK/yb5iC6Sb3lHc Jan 13 20:25:13.148650 sshd-session[4346]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:25:13.156329 systemd-logind[1466]: New session 20 of user core. Jan 13 20:25:13.160559 systemd[1]: Started session-20.scope - Session 20 of User core. Jan 13 20:25:13.895142 sshd[4348]: Connection closed by 139.178.89.65 port 51044 Jan 13 20:25:13.896038 sshd-session[4346]: pam_unix(sshd:session): session closed for user core Jan 13 20:25:13.900329 systemd[1]: sshd@22-138.199.153.210:22-139.178.89.65:51044.service: Deactivated successfully. Jan 13 20:25:13.902769 systemd[1]: session-20.scope: Deactivated successfully. Jan 13 20:25:13.905081 systemd-logind[1466]: Session 20 logged out. Waiting for processes to exit. Jan 13 20:25:13.907422 systemd-logind[1466]: Removed session 20. Jan 13 20:25:14.074592 systemd[1]: Started sshd@23-138.199.153.210:22-139.178.89.65:51058.service - OpenSSH per-connection server daemon (139.178.89.65:51058). 
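Each inbound SSH connection above gets its own transient systemd unit, and the unit name itself records both endpoints: sshd@<instance>-<local-ip>:22-<remote-ip>:<port>.service. A small parser for that naming scheme as it appears in these logs (the pattern is inferred from the lines above, not from systemd documentation):

    // sshunit.go - split a per-connection unit name such as
    // "sshd@10-138.199.153.210:22-139.178.89.65:37026.service"
    // into its instance number, local endpoint and remote endpoint.
    package main

    import (
        "fmt"
        "regexp"
    )

    var unitRe = regexp.MustCompile(`^sshd@(\d+)-(.+:\d+)-(.+:\d+)\.service$`)

    func main() {
        name := "sshd@10-138.199.153.210:22-139.178.89.65:37026.service"
        m := unitRe.FindStringSubmatch(name)
        if m == nil {
            panic("unexpected unit name")
        }
        fmt.Printf("instance=%s local=%s remote=%s\n", m[1], m[2], m[3])
        // instance=10 local=138.199.153.210:22 remote=139.178.89.65:37026
    }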
Jan 13 20:25:15.067111 sshd[4359]: Accepted publickey for core from 139.178.89.65 port 51058 ssh2: RSA SHA256:mP9Np05W8ayjbouGSSYjPkGP0Fk3DK/yb5iC6Sb3lHc Jan 13 20:25:15.069033 sshd-session[4359]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:25:15.074323 systemd-logind[1466]: New session 21 of user core. Jan 13 20:25:15.083589 systemd[1]: Started session-21.scope - Session 21 of User core. Jan 13 20:25:17.232980 containerd[1486]: time="2025-01-13T20:25:17.232910203Z" level=info msg="StopContainer for \"b6796f0e79eb5260edb924f15f644ce58ab38ba98cb12df2d480954519115b12\" with timeout 30 (s)" Jan 13 20:25:17.235602 containerd[1486]: time="2025-01-13T20:25:17.235557151Z" level=info msg="Stop container \"b6796f0e79eb5260edb924f15f644ce58ab38ba98cb12df2d480954519115b12\" with signal terminated" Jan 13 20:25:17.257997 systemd[1]: cri-containerd-b6796f0e79eb5260edb924f15f644ce58ab38ba98cb12df2d480954519115b12.scope: Deactivated successfully. Jan 13 20:25:17.267455 containerd[1486]: time="2025-01-13T20:25:17.267135134Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 13 20:25:17.275849 containerd[1486]: time="2025-01-13T20:25:17.275373254Z" level=info msg="StopContainer for \"4dcbbb64cf3acf9347b77691c2900444c7176679f2ffeed0105c85d4ffe22065\" with timeout 2 (s)" Jan 13 20:25:17.276271 containerd[1486]: time="2025-01-13T20:25:17.276081360Z" level=info msg="Stop container \"4dcbbb64cf3acf9347b77691c2900444c7176679f2ffeed0105c85d4ffe22065\" with signal terminated" Jan 13 20:25:17.285718 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b6796f0e79eb5260edb924f15f644ce58ab38ba98cb12df2d480954519115b12-rootfs.mount: Deactivated successfully. Jan 13 20:25:17.290574 systemd-networkd[1374]: lxc_health: Link DOWN Jan 13 20:25:17.290583 systemd-networkd[1374]: lxc_health: Lost carrier Jan 13 20:25:17.305515 containerd[1486]: time="2025-01-13T20:25:17.305401228Z" level=info msg="shim disconnected" id=b6796f0e79eb5260edb924f15f644ce58ab38ba98cb12df2d480954519115b12 namespace=k8s.io Jan 13 20:25:17.305515 containerd[1486]: time="2025-01-13T20:25:17.305503546Z" level=warning msg="cleaning up after shim disconnected" id=b6796f0e79eb5260edb924f15f644ce58ab38ba98cb12df2d480954519115b12 namespace=k8s.io Jan 13 20:25:17.305515 containerd[1486]: time="2025-01-13T20:25:17.305514705Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 20:25:17.313681 systemd[1]: cri-containerd-4dcbbb64cf3acf9347b77691c2900444c7176679f2ffeed0105c85d4ffe22065.scope: Deactivated successfully. Jan 13 20:25:17.313940 systemd[1]: cri-containerd-4dcbbb64cf3acf9347b77691c2900444c7176679f2ffeed0105c85d4ffe22065.scope: Consumed 8.123s CPU time. 
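"StopContainer ... with timeout 30 (s)" and "with timeout 2 (s)" followed by "Stop container ... with signal terminated" above is the usual two-phase stop: deliver SIGTERM, wait out the grace period, then escalate to SIGKILL if the process is still alive. A generic sketch of that pattern against an ordinary child process (not containerd's code):

    // stop.go - two-phase stop: SIGTERM, bounded wait, then SIGKILL,
    // mirroring the "StopContainer ... with timeout N (s)" entries above.
    package main

    import (
        "fmt"
        "os/exec"
        "syscall"
        "time"
    )

    func stopWithTimeout(cmd *exec.Cmd, timeout time.Duration) error {
        done := make(chan error, 1)
        go func() { done <- cmd.Wait() }()

        _ = cmd.Process.Signal(syscall.SIGTERM) // "with signal terminated"
        select {
        case err := <-done:
            return err // exited within the grace period
        case <-time.After(timeout):
            _ = cmd.Process.Kill() // escalate after the timeout
            return <-done
        }
    }

    func main() {
        cmd := exec.Command("sleep", "60")
        if err := cmd.Start(); err != nil {
            panic(err)
        }
        fmt.Println(stopWithTimeout(cmd, 2*time.Second))
    }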
Jan 13 20:25:17.329476 containerd[1486]: time="2025-01-13T20:25:17.328293501Z" level=info msg="StopContainer for \"b6796f0e79eb5260edb924f15f644ce58ab38ba98cb12df2d480954519115b12\" returns successfully" Jan 13 20:25:17.330328 containerd[1486]: time="2025-01-13T20:25:17.330146984Z" level=info msg="StopPodSandbox for \"592fa21196c99d5bc6827e728b061c7e57fd6daf6fa0ba75e81ecb4ace27b143\"" Jan 13 20:25:17.330328 containerd[1486]: time="2025-01-13T20:25:17.330210503Z" level=info msg="Container to stop \"b6796f0e79eb5260edb924f15f644ce58ab38ba98cb12df2d480954519115b12\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 13 20:25:17.332015 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-592fa21196c99d5bc6827e728b061c7e57fd6daf6fa0ba75e81ecb4ace27b143-shm.mount: Deactivated successfully. Jan 13 20:25:17.342926 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4dcbbb64cf3acf9347b77691c2900444c7176679f2ffeed0105c85d4ffe22065-rootfs.mount: Deactivated successfully. Jan 13 20:25:17.344593 systemd[1]: cri-containerd-592fa21196c99d5bc6827e728b061c7e57fd6daf6fa0ba75e81ecb4ace27b143.scope: Deactivated successfully. Jan 13 20:25:17.356864 containerd[1486]: time="2025-01-13T20:25:17.356630268Z" level=info msg="shim disconnected" id=4dcbbb64cf3acf9347b77691c2900444c7176679f2ffeed0105c85d4ffe22065 namespace=k8s.io Jan 13 20:25:17.356864 containerd[1486]: time="2025-01-13T20:25:17.356723866Z" level=warning msg="cleaning up after shim disconnected" id=4dcbbb64cf3acf9347b77691c2900444c7176679f2ffeed0105c85d4ffe22065 namespace=k8s.io Jan 13 20:25:17.356864 containerd[1486]: time="2025-01-13T20:25:17.356735265Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 20:25:17.380913 containerd[1486]: time="2025-01-13T20:25:17.380718837Z" level=info msg="shim disconnected" id=592fa21196c99d5bc6827e728b061c7e57fd6daf6fa0ba75e81ecb4ace27b143 namespace=k8s.io Jan 13 20:25:17.380913 containerd[1486]: time="2025-01-13T20:25:17.380773236Z" level=warning msg="cleaning up after shim disconnected" id=592fa21196c99d5bc6827e728b061c7e57fd6daf6fa0ba75e81ecb4ace27b143 namespace=k8s.io Jan 13 20:25:17.380913 containerd[1486]: time="2025-01-13T20:25:17.380781316Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 20:25:17.384892 containerd[1486]: time="2025-01-13T20:25:17.384853317Z" level=info msg="StopContainer for \"4dcbbb64cf3acf9347b77691c2900444c7176679f2ffeed0105c85d4ffe22065\" returns successfully" Jan 13 20:25:17.385784 containerd[1486]: time="2025-01-13T20:25:17.385736739Z" level=info msg="StopPodSandbox for \"2d1b39c5e338e04ba95d74b82bf64615a1b2217cec3f94cae9c3f22f4d59c7c3\"" Jan 13 20:25:17.385979 containerd[1486]: time="2025-01-13T20:25:17.385958575Z" level=info msg="Container to stop \"fd507ec98df8729065c6a134fe9d3ceb133e6f9746fddda69cf846c8c66d2756\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 13 20:25:17.386043 containerd[1486]: time="2025-01-13T20:25:17.386029974Z" level=info msg="Container to stop \"0f5f06ddb4d45bdfc509db15258307c12f0961beb2412dbe58c48193e766ce96\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 13 20:25:17.386094 containerd[1486]: time="2025-01-13T20:25:17.386081613Z" level=info msg="Container to stop \"6fd35f85d848de2dbbf2f3365f690b0b8af98756c67389b6c0bf5cd20742f1f8\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 13 20:25:17.386151 containerd[1486]: time="2025-01-13T20:25:17.386136732Z" level=info msg="Container to stop 
\"bb30da78aeadfd88c8aa76b9555a79818c017d4910f480cde3f85f118a9b4a14\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 13 20:25:17.386203 containerd[1486]: time="2025-01-13T20:25:17.386189291Z" level=info msg="Container to stop \"4dcbbb64cf3acf9347b77691c2900444c7176679f2ffeed0105c85d4ffe22065\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 13 20:25:17.392397 systemd[1]: cri-containerd-2d1b39c5e338e04ba95d74b82bf64615a1b2217cec3f94cae9c3f22f4d59c7c3.scope: Deactivated successfully. Jan 13 20:25:17.398314 containerd[1486]: time="2025-01-13T20:25:17.398237615Z" level=warning msg="cleanup warnings time=\"2025-01-13T20:25:17Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jan 13 20:25:17.399836 containerd[1486]: time="2025-01-13T20:25:17.399753626Z" level=info msg="TearDown network for sandbox \"592fa21196c99d5bc6827e728b061c7e57fd6daf6fa0ba75e81ecb4ace27b143\" successfully" Jan 13 20:25:17.399836 containerd[1486]: time="2025-01-13T20:25:17.399787305Z" level=info msg="StopPodSandbox for \"592fa21196c99d5bc6827e728b061c7e57fd6daf6fa0ba75e81ecb4ace27b143\" returns successfully" Jan 13 20:25:17.437105 containerd[1486]: time="2025-01-13T20:25:17.436985459Z" level=info msg="shim disconnected" id=2d1b39c5e338e04ba95d74b82bf64615a1b2217cec3f94cae9c3f22f4d59c7c3 namespace=k8s.io Jan 13 20:25:17.437105 containerd[1486]: time="2025-01-13T20:25:17.437065737Z" level=warning msg="cleaning up after shim disconnected" id=2d1b39c5e338e04ba95d74b82bf64615a1b2217cec3f94cae9c3f22f4d59c7c3 namespace=k8s.io Jan 13 20:25:17.437105 containerd[1486]: time="2025-01-13T20:25:17.437082977Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 20:25:17.449982 containerd[1486]: time="2025-01-13T20:25:17.449932326Z" level=info msg="TearDown network for sandbox \"2d1b39c5e338e04ba95d74b82bf64615a1b2217cec3f94cae9c3f22f4d59c7c3\" successfully" Jan 13 20:25:17.450175 containerd[1486]: time="2025-01-13T20:25:17.450159002Z" level=info msg="StopPodSandbox for \"2d1b39c5e338e04ba95d74b82bf64615a1b2217cec3f94cae9c3f22f4d59c7c3\" returns successfully" Jan 13 20:25:17.498927 kubelet[2746]: I0113 20:25:17.496514 2746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/1f4f1b2b-b677-4d4c-89b8-e7a095a1db67-host-proc-sys-kernel\") pod \"1f4f1b2b-b677-4d4c-89b8-e7a095a1db67\" (UID: \"1f4f1b2b-b677-4d4c-89b8-e7a095a1db67\") " Jan 13 20:25:17.498927 kubelet[2746]: I0113 20:25:17.496583 2746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t49nj\" (UniqueName: \"kubernetes.io/projected/1f4f1b2b-b677-4d4c-89b8-e7a095a1db67-kube-api-access-t49nj\") pod \"1f4f1b2b-b677-4d4c-89b8-e7a095a1db67\" (UID: \"1f4f1b2b-b677-4d4c-89b8-e7a095a1db67\") " Jan 13 20:25:17.498927 kubelet[2746]: I0113 20:25:17.496614 2746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/1f4f1b2b-b677-4d4c-89b8-e7a095a1db67-bpf-maps\") pod \"1f4f1b2b-b677-4d4c-89b8-e7a095a1db67\" (UID: \"1f4f1b2b-b677-4d4c-89b8-e7a095a1db67\") " Jan 13 20:25:17.498927 kubelet[2746]: I0113 20:25:17.496639 2746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/1f4f1b2b-b677-4d4c-89b8-e7a095a1db67-cni-path\") pod 
\"1f4f1b2b-b677-4d4c-89b8-e7a095a1db67\" (UID: \"1f4f1b2b-b677-4d4c-89b8-e7a095a1db67\") " Jan 13 20:25:17.498927 kubelet[2746]: I0113 20:25:17.496668 2746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/1f4f1b2b-b677-4d4c-89b8-e7a095a1db67-clustermesh-secrets\") pod \"1f4f1b2b-b677-4d4c-89b8-e7a095a1db67\" (UID: \"1f4f1b2b-b677-4d4c-89b8-e7a095a1db67\") " Jan 13 20:25:17.498927 kubelet[2746]: I0113 20:25:17.496693 2746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/1f4f1b2b-b677-4d4c-89b8-e7a095a1db67-hubble-tls\") pod \"1f4f1b2b-b677-4d4c-89b8-e7a095a1db67\" (UID: \"1f4f1b2b-b677-4d4c-89b8-e7a095a1db67\") " Jan 13 20:25:17.499770 kubelet[2746]: I0113 20:25:17.496719 2746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/35b59716-ba8a-47a0-ae95-e79ffe08df12-cilium-config-path\") pod \"35b59716-ba8a-47a0-ae95-e79ffe08df12\" (UID: \"35b59716-ba8a-47a0-ae95-e79ffe08df12\") " Jan 13 20:25:17.499770 kubelet[2746]: I0113 20:25:17.496743 2746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/1f4f1b2b-b677-4d4c-89b8-e7a095a1db67-cilium-run\") pod \"1f4f1b2b-b677-4d4c-89b8-e7a095a1db67\" (UID: \"1f4f1b2b-b677-4d4c-89b8-e7a095a1db67\") " Jan 13 20:25:17.499770 kubelet[2746]: I0113 20:25:17.496767 2746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1f4f1b2b-b677-4d4c-89b8-e7a095a1db67-etc-cni-netd\") pod \"1f4f1b2b-b677-4d4c-89b8-e7a095a1db67\" (UID: \"1f4f1b2b-b677-4d4c-89b8-e7a095a1db67\") " Jan 13 20:25:17.499770 kubelet[2746]: I0113 20:25:17.496789 2746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1f4f1b2b-b677-4d4c-89b8-e7a095a1db67-xtables-lock\") pod \"1f4f1b2b-b677-4d4c-89b8-e7a095a1db67\" (UID: \"1f4f1b2b-b677-4d4c-89b8-e7a095a1db67\") " Jan 13 20:25:17.499770 kubelet[2746]: I0113 20:25:17.496812 2746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/1f4f1b2b-b677-4d4c-89b8-e7a095a1db67-host-proc-sys-net\") pod \"1f4f1b2b-b677-4d4c-89b8-e7a095a1db67\" (UID: \"1f4f1b2b-b677-4d4c-89b8-e7a095a1db67\") " Jan 13 20:25:17.499770 kubelet[2746]: I0113 20:25:17.496838 2746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bc6n5\" (UniqueName: \"kubernetes.io/projected/35b59716-ba8a-47a0-ae95-e79ffe08df12-kube-api-access-bc6n5\") pod \"35b59716-ba8a-47a0-ae95-e79ffe08df12\" (UID: \"35b59716-ba8a-47a0-ae95-e79ffe08df12\") " Jan 13 20:25:17.500053 kubelet[2746]: I0113 20:25:17.496863 2746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/1f4f1b2b-b677-4d4c-89b8-e7a095a1db67-cilium-cgroup\") pod \"1f4f1b2b-b677-4d4c-89b8-e7a095a1db67\" (UID: \"1f4f1b2b-b677-4d4c-89b8-e7a095a1db67\") " Jan 13 20:25:17.500053 kubelet[2746]: I0113 20:25:17.496885 2746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/1f4f1b2b-b677-4d4c-89b8-e7a095a1db67-hostproc\") pod \"1f4f1b2b-b677-4d4c-89b8-e7a095a1db67\" (UID: 
\"1f4f1b2b-b677-4d4c-89b8-e7a095a1db67\") " Jan 13 20:25:17.500053 kubelet[2746]: I0113 20:25:17.496909 2746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1f4f1b2b-b677-4d4c-89b8-e7a095a1db67-lib-modules\") pod \"1f4f1b2b-b677-4d4c-89b8-e7a095a1db67\" (UID: \"1f4f1b2b-b677-4d4c-89b8-e7a095a1db67\") " Jan 13 20:25:17.500053 kubelet[2746]: I0113 20:25:17.496937 2746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1f4f1b2b-b677-4d4c-89b8-e7a095a1db67-cilium-config-path\") pod \"1f4f1b2b-b677-4d4c-89b8-e7a095a1db67\" (UID: \"1f4f1b2b-b677-4d4c-89b8-e7a095a1db67\") " Jan 13 20:25:17.500053 kubelet[2746]: I0113 20:25:17.498344 2746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1f4f1b2b-b677-4d4c-89b8-e7a095a1db67-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "1f4f1b2b-b677-4d4c-89b8-e7a095a1db67" (UID: "1f4f1b2b-b677-4d4c-89b8-e7a095a1db67"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 20:25:17.500053 kubelet[2746]: I0113 20:25:17.498474 2746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1f4f1b2b-b677-4d4c-89b8-e7a095a1db67-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "1f4f1b2b-b677-4d4c-89b8-e7a095a1db67" (UID: "1f4f1b2b-b677-4d4c-89b8-e7a095a1db67"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 20:25:17.502202 kubelet[2746]: I0113 20:25:17.500033 2746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1f4f1b2b-b677-4d4c-89b8-e7a095a1db67-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "1f4f1b2b-b677-4d4c-89b8-e7a095a1db67" (UID: "1f4f1b2b-b677-4d4c-89b8-e7a095a1db67"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 13 20:25:17.502202 kubelet[2746]: I0113 20:25:17.500114 2746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1f4f1b2b-b677-4d4c-89b8-e7a095a1db67-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "1f4f1b2b-b677-4d4c-89b8-e7a095a1db67" (UID: "1f4f1b2b-b677-4d4c-89b8-e7a095a1db67"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 20:25:17.502202 kubelet[2746]: I0113 20:25:17.500143 2746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1f4f1b2b-b677-4d4c-89b8-e7a095a1db67-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "1f4f1b2b-b677-4d4c-89b8-e7a095a1db67" (UID: "1f4f1b2b-b677-4d4c-89b8-e7a095a1db67"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 20:25:17.502202 kubelet[2746]: I0113 20:25:17.500167 2746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1f4f1b2b-b677-4d4c-89b8-e7a095a1db67-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "1f4f1b2b-b677-4d4c-89b8-e7a095a1db67" (UID: "1f4f1b2b-b677-4d4c-89b8-e7a095a1db67"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 20:25:17.502202 kubelet[2746]: I0113 20:25:17.501519 2746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1f4f1b2b-b677-4d4c-89b8-e7a095a1db67-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "1f4f1b2b-b677-4d4c-89b8-e7a095a1db67" (UID: "1f4f1b2b-b677-4d4c-89b8-e7a095a1db67"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 20:25:17.502514 kubelet[2746]: I0113 20:25:17.501571 2746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1f4f1b2b-b677-4d4c-89b8-e7a095a1db67-hostproc" (OuterVolumeSpecName: "hostproc") pod "1f4f1b2b-b677-4d4c-89b8-e7a095a1db67" (UID: "1f4f1b2b-b677-4d4c-89b8-e7a095a1db67"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 20:25:17.502514 kubelet[2746]: I0113 20:25:17.501588 2746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1f4f1b2b-b677-4d4c-89b8-e7a095a1db67-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "1f4f1b2b-b677-4d4c-89b8-e7a095a1db67" (UID: "1f4f1b2b-b677-4d4c-89b8-e7a095a1db67"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 20:25:17.504907 kubelet[2746]: I0113 20:25:17.504786 2746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1f4f1b2b-b677-4d4c-89b8-e7a095a1db67-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "1f4f1b2b-b677-4d4c-89b8-e7a095a1db67" (UID: "1f4f1b2b-b677-4d4c-89b8-e7a095a1db67"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 20:25:17.504907 kubelet[2746]: I0113 20:25:17.504845 2746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1f4f1b2b-b677-4d4c-89b8-e7a095a1db67-cni-path" (OuterVolumeSpecName: "cni-path") pod "1f4f1b2b-b677-4d4c-89b8-e7a095a1db67" (UID: "1f4f1b2b-b677-4d4c-89b8-e7a095a1db67"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 20:25:17.506509 kubelet[2746]: I0113 20:25:17.506199 2746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/35b59716-ba8a-47a0-ae95-e79ffe08df12-kube-api-access-bc6n5" (OuterVolumeSpecName: "kube-api-access-bc6n5") pod "35b59716-ba8a-47a0-ae95-e79ffe08df12" (UID: "35b59716-ba8a-47a0-ae95-e79ffe08df12"). InnerVolumeSpecName "kube-api-access-bc6n5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 13 20:25:17.506953 kubelet[2746]: I0113 20:25:17.506921 2746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1f4f1b2b-b677-4d4c-89b8-e7a095a1db67-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "1f4f1b2b-b677-4d4c-89b8-e7a095a1db67" (UID: "1f4f1b2b-b677-4d4c-89b8-e7a095a1db67"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 13 20:25:17.507010 kubelet[2746]: I0113 20:25:17.506984 2746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1f4f1b2b-b677-4d4c-89b8-e7a095a1db67-kube-api-access-t49nj" (OuterVolumeSpecName: "kube-api-access-t49nj") pod "1f4f1b2b-b677-4d4c-89b8-e7a095a1db67" (UID: "1f4f1b2b-b677-4d4c-89b8-e7a095a1db67"). InnerVolumeSpecName "kube-api-access-t49nj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 13 20:25:17.507534 kubelet[2746]: I0113 20:25:17.507500 2746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f4f1b2b-b677-4d4c-89b8-e7a095a1db67-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "1f4f1b2b-b677-4d4c-89b8-e7a095a1db67" (UID: "1f4f1b2b-b677-4d4c-89b8-e7a095a1db67"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 13 20:25:17.508426 kubelet[2746]: I0113 20:25:17.508389 2746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/35b59716-ba8a-47a0-ae95-e79ffe08df12-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "35b59716-ba8a-47a0-ae95-e79ffe08df12" (UID: "35b59716-ba8a-47a0-ae95-e79ffe08df12"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 13 20:25:17.598242 kubelet[2746]: I0113 20:25:17.598145 2746 reconciler_common.go:288] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1f4f1b2b-b677-4d4c-89b8-e7a095a1db67-xtables-lock\") on node \"ci-4186-1-0-7-7ab547e2a5\" DevicePath \"\"" Jan 13 20:25:17.598242 kubelet[2746]: I0113 20:25:17.598200 2746 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/1f4f1b2b-b677-4d4c-89b8-e7a095a1db67-host-proc-sys-net\") on node \"ci-4186-1-0-7-7ab547e2a5\" DevicePath \"\"" Jan 13 20:25:17.598242 kubelet[2746]: I0113 20:25:17.598218 2746 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-bc6n5\" (UniqueName: \"kubernetes.io/projected/35b59716-ba8a-47a0-ae95-e79ffe08df12-kube-api-access-bc6n5\") on node \"ci-4186-1-0-7-7ab547e2a5\" DevicePath \"\"" Jan 13 20:25:17.598242 kubelet[2746]: I0113 20:25:17.598231 2746 reconciler_common.go:288] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/1f4f1b2b-b677-4d4c-89b8-e7a095a1db67-cilium-cgroup\") on node \"ci-4186-1-0-7-7ab547e2a5\" DevicePath \"\"" Jan 13 20:25:17.598242 kubelet[2746]: I0113 20:25:17.598246 2746 reconciler_common.go:288] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/1f4f1b2b-b677-4d4c-89b8-e7a095a1db67-hostproc\") on node \"ci-4186-1-0-7-7ab547e2a5\" DevicePath \"\"" Jan 13 20:25:17.598242 kubelet[2746]: I0113 20:25:17.598282 2746 reconciler_common.go:288] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1f4f1b2b-b677-4d4c-89b8-e7a095a1db67-lib-modules\") on node \"ci-4186-1-0-7-7ab547e2a5\" DevicePath \"\"" Jan 13 20:25:17.598749 kubelet[2746]: I0113 20:25:17.598295 2746 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1f4f1b2b-b677-4d4c-89b8-e7a095a1db67-cilium-config-path\") on node \"ci-4186-1-0-7-7ab547e2a5\" DevicePath \"\"" Jan 13 20:25:17.598749 kubelet[2746]: I0113 20:25:17.598308 2746 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/1f4f1b2b-b677-4d4c-89b8-e7a095a1db67-host-proc-sys-kernel\") on node \"ci-4186-1-0-7-7ab547e2a5\" DevicePath \"\"" Jan 13 20:25:17.598749 kubelet[2746]: I0113 20:25:17.598320 2746 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-t49nj\" (UniqueName: \"kubernetes.io/projected/1f4f1b2b-b677-4d4c-89b8-e7a095a1db67-kube-api-access-t49nj\") on node \"ci-4186-1-0-7-7ab547e2a5\" DevicePath 
\"\"" Jan 13 20:25:17.598749 kubelet[2746]: I0113 20:25:17.598332 2746 reconciler_common.go:288] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/1f4f1b2b-b677-4d4c-89b8-e7a095a1db67-bpf-maps\") on node \"ci-4186-1-0-7-7ab547e2a5\" DevicePath \"\"" Jan 13 20:25:17.598749 kubelet[2746]: I0113 20:25:17.598343 2746 reconciler_common.go:288] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/1f4f1b2b-b677-4d4c-89b8-e7a095a1db67-cni-path\") on node \"ci-4186-1-0-7-7ab547e2a5\" DevicePath \"\"" Jan 13 20:25:17.598749 kubelet[2746]: I0113 20:25:17.598362 2746 reconciler_common.go:288] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/1f4f1b2b-b677-4d4c-89b8-e7a095a1db67-clustermesh-secrets\") on node \"ci-4186-1-0-7-7ab547e2a5\" DevicePath \"\"" Jan 13 20:25:17.598749 kubelet[2746]: I0113 20:25:17.598377 2746 reconciler_common.go:288] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/1f4f1b2b-b677-4d4c-89b8-e7a095a1db67-hubble-tls\") on node \"ci-4186-1-0-7-7ab547e2a5\" DevicePath \"\"" Jan 13 20:25:17.598749 kubelet[2746]: I0113 20:25:17.598390 2746 reconciler_common.go:288] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/1f4f1b2b-b677-4d4c-89b8-e7a095a1db67-cilium-run\") on node \"ci-4186-1-0-7-7ab547e2a5\" DevicePath \"\"" Jan 13 20:25:17.599116 kubelet[2746]: I0113 20:25:17.598401 2746 reconciler_common.go:288] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1f4f1b2b-b677-4d4c-89b8-e7a095a1db67-etc-cni-netd\") on node \"ci-4186-1-0-7-7ab547e2a5\" DevicePath \"\"" Jan 13 20:25:17.599116 kubelet[2746]: I0113 20:25:17.598442 2746 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/35b59716-ba8a-47a0-ae95-e79ffe08df12-cilium-config-path\") on node \"ci-4186-1-0-7-7ab547e2a5\" DevicePath \"\"" Jan 13 20:25:17.801453 kubelet[2746]: I0113 20:25:17.800396 2746 scope.go:117] "RemoveContainer" containerID="4dcbbb64cf3acf9347b77691c2900444c7176679f2ffeed0105c85d4ffe22065" Jan 13 20:25:17.809589 containerd[1486]: time="2025-01-13T20:25:17.809551427Z" level=info msg="RemoveContainer for \"4dcbbb64cf3acf9347b77691c2900444c7176679f2ffeed0105c85d4ffe22065\"" Jan 13 20:25:17.817020 systemd[1]: Removed slice kubepods-burstable-pod1f4f1b2b_b677_4d4c_89b8_e7a095a1db67.slice - libcontainer container kubepods-burstable-pod1f4f1b2b_b677_4d4c_89b8_e7a095a1db67.slice. Jan 13 20:25:17.817462 systemd[1]: kubepods-burstable-pod1f4f1b2b_b677_4d4c_89b8_e7a095a1db67.slice: Consumed 8.215s CPU time. Jan 13 20:25:17.819006 containerd[1486]: time="2025-01-13T20:25:17.818736207Z" level=info msg="RemoveContainer for \"4dcbbb64cf3acf9347b77691c2900444c7176679f2ffeed0105c85d4ffe22065\" returns successfully" Jan 13 20:25:17.819782 kubelet[2746]: I0113 20:25:17.819385 2746 scope.go:117] "RemoveContainer" containerID="6fd35f85d848de2dbbf2f3365f690b0b8af98756c67389b6c0bf5cd20742f1f8" Jan 13 20:25:17.823979 systemd[1]: Removed slice kubepods-besteffort-pod35b59716_ba8a_47a0_ae95_e79ffe08df12.slice - libcontainer container kubepods-besteffort-pod35b59716_ba8a_47a0_ae95_e79ffe08df12.slice. 
Jan 13 20:25:17.825648 containerd[1486]: time="2025-01-13T20:25:17.825312879Z" level=info msg="RemoveContainer for \"6fd35f85d848de2dbbf2f3365f690b0b8af98756c67389b6c0bf5cd20742f1f8\""
Jan 13 20:25:17.831830 containerd[1486]: time="2025-01-13T20:25:17.831770193Z" level=info msg="RemoveContainer for \"6fd35f85d848de2dbbf2f3365f690b0b8af98756c67389b6c0bf5cd20742f1f8\" returns successfully"
Jan 13 20:25:17.832147 kubelet[2746]: I0113 20:25:17.832123 2746 scope.go:117] "RemoveContainer" containerID="0f5f06ddb4d45bdfc509db15258307c12f0961beb2412dbe58c48193e766ce96"
Jan 13 20:25:17.835830 containerd[1486]: time="2025-01-13T20:25:17.835057209Z" level=info msg="RemoveContainer for \"0f5f06ddb4d45bdfc509db15258307c12f0961beb2412dbe58c48193e766ce96\""
Jan 13 20:25:17.839619 containerd[1486]: time="2025-01-13T20:25:17.839050051Z" level=info msg="RemoveContainer for \"0f5f06ddb4d45bdfc509db15258307c12f0961beb2412dbe58c48193e766ce96\" returns successfully"
Jan 13 20:25:17.840345 kubelet[2746]: I0113 20:25:17.840323 2746 scope.go:117] "RemoveContainer" containerID="bb30da78aeadfd88c8aa76b9555a79818c017d4910f480cde3f85f118a9b4a14"
Jan 13 20:25:17.845641 containerd[1486]: time="2025-01-13T20:25:17.843733039Z" level=info msg="RemoveContainer for \"bb30da78aeadfd88c8aa76b9555a79818c017d4910f480cde3f85f118a9b4a14\""
Jan 13 20:25:17.852524 containerd[1486]: time="2025-01-13T20:25:17.852483429Z" level=info msg="RemoveContainer for \"bb30da78aeadfd88c8aa76b9555a79818c017d4910f480cde3f85f118a9b4a14\" returns successfully"
Jan 13 20:25:17.852949 kubelet[2746]: I0113 20:25:17.852917 2746 scope.go:117] "RemoveContainer" containerID="fd507ec98df8729065c6a134fe9d3ceb133e6f9746fddda69cf846c8c66d2756"
Jan 13 20:25:17.858230 kubelet[2746]: I0113 20:25:17.858177 2746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1f4f1b2b-b677-4d4c-89b8-e7a095a1db67" path="/var/lib/kubelet/pods/1f4f1b2b-b677-4d4c-89b8-e7a095a1db67/volumes"
Jan 13 20:25:17.860108 containerd[1486]: time="2025-01-13T20:25:17.858630549Z" level=info msg="RemoveContainer for \"fd507ec98df8729065c6a134fe9d3ceb133e6f9746fddda69cf846c8c66d2756\""
Jan 13 20:25:17.867014 containerd[1486]: time="2025-01-13T20:25:17.866969026Z" level=info msg="RemoveContainer for \"fd507ec98df8729065c6a134fe9d3ceb133e6f9746fddda69cf846c8c66d2756\" returns successfully"
Jan 13 20:25:17.867481 kubelet[2746]: I0113 20:25:17.867454 2746 scope.go:117] "RemoveContainer" containerID="4dcbbb64cf3acf9347b77691c2900444c7176679f2ffeed0105c85d4ffe22065"
Jan 13 20:25:17.868141 containerd[1486]: time="2025-01-13T20:25:17.868055325Z" level=error msg="ContainerStatus for \"4dcbbb64cf3acf9347b77691c2900444c7176679f2ffeed0105c85d4ffe22065\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"4dcbbb64cf3acf9347b77691c2900444c7176679f2ffeed0105c85d4ffe22065\": not found"
Jan 13 20:25:17.868529 kubelet[2746]: E0113 20:25:17.868237 2746 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"4dcbbb64cf3acf9347b77691c2900444c7176679f2ffeed0105c85d4ffe22065\": not found" containerID="4dcbbb64cf3acf9347b77691c2900444c7176679f2ffeed0105c85d4ffe22065"
Jan 13 20:25:17.868529 kubelet[2746]: I0113 20:25:17.868286 2746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"4dcbbb64cf3acf9347b77691c2900444c7176679f2ffeed0105c85d4ffe22065"} err="failed to get container status \"4dcbbb64cf3acf9347b77691c2900444c7176679f2ffeed0105c85d4ffe22065\": rpc error: code = NotFound desc = an error occurred when try to find container \"4dcbbb64cf3acf9347b77691c2900444c7176679f2ffeed0105c85d4ffe22065\": not found"
Jan 13 20:25:17.868529 kubelet[2746]: I0113 20:25:17.868377 2746 scope.go:117] "RemoveContainer" containerID="6fd35f85d848de2dbbf2f3365f690b0b8af98756c67389b6c0bf5cd20742f1f8"
Jan 13 20:25:17.868829 containerd[1486]: time="2025-01-13T20:25:17.868748111Z" level=error msg="ContainerStatus for \"6fd35f85d848de2dbbf2f3365f690b0b8af98756c67389b6c0bf5cd20742f1f8\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"6fd35f85d848de2dbbf2f3365f690b0b8af98756c67389b6c0bf5cd20742f1f8\": not found"
Jan 13 20:25:17.869057 kubelet[2746]: E0113 20:25:17.868979 2746 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"6fd35f85d848de2dbbf2f3365f690b0b8af98756c67389b6c0bf5cd20742f1f8\": not found" containerID="6fd35f85d848de2dbbf2f3365f690b0b8af98756c67389b6c0bf5cd20742f1f8"
Jan 13 20:25:17.869057 kubelet[2746]: I0113 20:25:17.869006 2746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"6fd35f85d848de2dbbf2f3365f690b0b8af98756c67389b6c0bf5cd20742f1f8"} err="failed to get container status \"6fd35f85d848de2dbbf2f3365f690b0b8af98756c67389b6c0bf5cd20742f1f8\": rpc error: code = NotFound desc = an error occurred when try to find container \"6fd35f85d848de2dbbf2f3365f690b0b8af98756c67389b6c0bf5cd20742f1f8\": not found"
Jan 13 20:25:17.869057 kubelet[2746]: I0113 20:25:17.869023 2746 scope.go:117] "RemoveContainer" containerID="0f5f06ddb4d45bdfc509db15258307c12f0961beb2412dbe58c48193e766ce96"
Jan 13 20:25:17.869566 containerd[1486]: time="2025-01-13T20:25:17.869482377Z" level=error msg="ContainerStatus for \"0f5f06ddb4d45bdfc509db15258307c12f0961beb2412dbe58c48193e766ce96\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"0f5f06ddb4d45bdfc509db15258307c12f0961beb2412dbe58c48193e766ce96\": not found"
Jan 13 20:25:17.869679 kubelet[2746]: E0113 20:25:17.869636 2746 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"0f5f06ddb4d45bdfc509db15258307c12f0961beb2412dbe58c48193e766ce96\": not found" containerID="0f5f06ddb4d45bdfc509db15258307c12f0961beb2412dbe58c48193e766ce96"
Jan 13 20:25:17.869679 kubelet[2746]: I0113 20:25:17.869659 2746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"0f5f06ddb4d45bdfc509db15258307c12f0961beb2412dbe58c48193e766ce96"} err="failed to get container status \"0f5f06ddb4d45bdfc509db15258307c12f0961beb2412dbe58c48193e766ce96\": rpc error: code = NotFound desc = an error occurred when try to find container \"0f5f06ddb4d45bdfc509db15258307c12f0961beb2412dbe58c48193e766ce96\": not found"
Jan 13 20:25:17.869679 kubelet[2746]: I0113 20:25:17.869674 2746 scope.go:117] "RemoveContainer" containerID="bb30da78aeadfd88c8aa76b9555a79818c017d4910f480cde3f85f118a9b4a14"
Jan 13 20:25:17.869970 containerd[1486]: time="2025-01-13T20:25:17.869847450Z" level=error msg="ContainerStatus for \"bb30da78aeadfd88c8aa76b9555a79818c017d4910f480cde3f85f118a9b4a14\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"bb30da78aeadfd88c8aa76b9555a79818c017d4910f480cde3f85f118a9b4a14\": not found"
Jan 13 20:25:17.870563 kubelet[2746]: E0113 20:25:17.870423 2746 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"bb30da78aeadfd88c8aa76b9555a79818c017d4910f480cde3f85f118a9b4a14\": not found" containerID="bb30da78aeadfd88c8aa76b9555a79818c017d4910f480cde3f85f118a9b4a14"
Jan 13 20:25:17.870563 kubelet[2746]: I0113 20:25:17.870494 2746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"bb30da78aeadfd88c8aa76b9555a79818c017d4910f480cde3f85f118a9b4a14"} err="failed to get container status \"bb30da78aeadfd88c8aa76b9555a79818c017d4910f480cde3f85f118a9b4a14\": rpc error: code = NotFound desc = an error occurred when try to find container \"bb30da78aeadfd88c8aa76b9555a79818c017d4910f480cde3f85f118a9b4a14\": not found"
Jan 13 20:25:17.870563 kubelet[2746]: I0113 20:25:17.870510 2746 scope.go:117] "RemoveContainer" containerID="fd507ec98df8729065c6a134fe9d3ceb133e6f9746fddda69cf846c8c66d2756"
Jan 13 20:25:17.871099 containerd[1486]: time="2025-01-13T20:25:17.870904789Z" level=error msg="ContainerStatus for \"fd507ec98df8729065c6a134fe9d3ceb133e6f9746fddda69cf846c8c66d2756\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"fd507ec98df8729065c6a134fe9d3ceb133e6f9746fddda69cf846c8c66d2756\": not found"
Jan 13 20:25:17.871161 kubelet[2746]: E0113 20:25:17.871057 2746 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"fd507ec98df8729065c6a134fe9d3ceb133e6f9746fddda69cf846c8c66d2756\": not found" containerID="fd507ec98df8729065c6a134fe9d3ceb133e6f9746fddda69cf846c8c66d2756"
Jan 13 20:25:17.871161 kubelet[2746]: I0113 20:25:17.871077 2746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"fd507ec98df8729065c6a134fe9d3ceb133e6f9746fddda69cf846c8c66d2756"} err="failed to get container status \"fd507ec98df8729065c6a134fe9d3ceb133e6f9746fddda69cf846c8c66d2756\": rpc error: code = NotFound desc = an error occurred when try to find container \"fd507ec98df8729065c6a134fe9d3ceb133e6f9746fddda69cf846c8c66d2756\": not found"
Jan 13 20:25:17.871326 kubelet[2746]: I0113 20:25:17.871224 2746 scope.go:117] "RemoveContainer" containerID="b6796f0e79eb5260edb924f15f644ce58ab38ba98cb12df2d480954519115b12"
Jan 13 20:25:17.874526 containerd[1486]: time="2025-01-13T20:25:17.874481919Z" level=info msg="RemoveContainer for \"b6796f0e79eb5260edb924f15f644ce58ab38ba98cb12df2d480954519115b12\""
Jan 13 20:25:17.881210 containerd[1486]: time="2025-01-13T20:25:17.879239346Z" level=info msg="RemoveContainer for \"b6796f0e79eb5260edb924f15f644ce58ab38ba98cb12df2d480954519115b12\" returns successfully"
Jan 13 20:25:17.882013 kubelet[2746]: I0113 20:25:17.881646 2746 scope.go:117] "RemoveContainer" containerID="b6796f0e79eb5260edb924f15f644ce58ab38ba98cb12df2d480954519115b12"
Jan 13 20:25:17.882088 containerd[1486]: time="2025-01-13T20:25:17.881940054Z" level=error msg="ContainerStatus for \"b6796f0e79eb5260edb924f15f644ce58ab38ba98cb12df2d480954519115b12\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b6796f0e79eb5260edb924f15f644ce58ab38ba98cb12df2d480954519115b12\": not found"
Jan 13 20:25:17.884586 kubelet[2746]: E0113 20:25:17.884523 2746 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b6796f0e79eb5260edb924f15f644ce58ab38ba98cb12df2d480954519115b12\": not found" containerID="b6796f0e79eb5260edb924f15f644ce58ab38ba98cb12df2d480954519115b12"
Jan 13 20:25:17.884758 kubelet[2746]: I0113 20:25:17.884727 2746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b6796f0e79eb5260edb924f15f644ce58ab38ba98cb12df2d480954519115b12"} err="failed to get container status \"b6796f0e79eb5260edb924f15f644ce58ab38ba98cb12df2d480954519115b12\": rpc error: code = NotFound desc = an error occurred when try to find container \"b6796f0e79eb5260edb924f15f644ce58ab38ba98cb12df2d480954519115b12\": not found"
Jan 13 20:25:18.036670 kubelet[2746]: E0113 20:25:18.036570 2746 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jan 13 20:25:18.247956 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-592fa21196c99d5bc6827e728b061c7e57fd6daf6fa0ba75e81ecb4ace27b143-rootfs.mount: Deactivated successfully.
Jan 13 20:25:18.248070 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2d1b39c5e338e04ba95d74b82bf64615a1b2217cec3f94cae9c3f22f4d59c7c3-rootfs.mount: Deactivated successfully.
Jan 13 20:25:18.248123 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-2d1b39c5e338e04ba95d74b82bf64615a1b2217cec3f94cae9c3f22f4d59c7c3-shm.mount: Deactivated successfully.
Jan 13 20:25:18.248187 systemd[1]: var-lib-kubelet-pods-35b59716\x2dba8a\x2d47a0\x2dae95\x2de79ffe08df12-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dbc6n5.mount: Deactivated successfully.
Jan 13 20:25:18.248248 systemd[1]: var-lib-kubelet-pods-1f4f1b2b\x2db677\x2d4d4c\x2d89b8\x2de7a095a1db67-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dt49nj.mount: Deactivated successfully.
Jan 13 20:25:18.248333 systemd[1]: var-lib-kubelet-pods-1f4f1b2b\x2db677\x2d4d4c\x2d89b8\x2de7a095a1db67-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Jan 13 20:25:18.248490 systemd[1]: var-lib-kubelet-pods-1f4f1b2b\x2db677\x2d4d4c\x2d89b8\x2de7a095a1db67-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Jan 13 20:25:19.330339 sshd[4361]: Connection closed by 139.178.89.65 port 51058
Jan 13 20:25:19.331049 sshd-session[4359]: pam_unix(sshd:session): session closed for user core
Jan 13 20:25:19.334876 systemd-logind[1466]: Session 21 logged out. Waiting for processes to exit.
Jan 13 20:25:19.335237 systemd[1]: sshd@23-138.199.153.210:22-139.178.89.65:51058.service: Deactivated successfully.
Jan 13 20:25:19.338765 systemd[1]: session-21.scope: Deactivated successfully.
Jan 13 20:25:19.339013 systemd[1]: session-21.scope: Consumed 1.002s CPU time.
Jan 13 20:25:19.342966 systemd-logind[1466]: Removed session 21.
Jan 13 20:25:19.510938 systemd[1]: Started sshd@24-138.199.153.210:22-139.178.89.65:51066.service - OpenSSH per-connection server daemon (139.178.89.65:51066).
Jan 13 20:25:19.859276 kubelet[2746]: I0113 20:25:19.857562 2746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="35b59716-ba8a-47a0-ae95-e79ffe08df12" path="/var/lib/kubelet/pods/35b59716-ba8a-47a0-ae95-e79ffe08df12/volumes"
Jan 13 20:25:20.514605 sshd[4520]: Accepted publickey for core from 139.178.89.65 port 51066 ssh2: RSA SHA256:mP9Np05W8ayjbouGSSYjPkGP0Fk3DK/yb5iC6Sb3lHc
Jan 13 20:25:20.517831 sshd-session[4520]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:25:20.529516 systemd-logind[1466]: New session 22 of user core.
Jan 13 20:25:20.535733 systemd[1]: Started session-22.scope - Session 22 of User core.
Jan 13 20:25:22.701083 kubelet[2746]: E0113 20:25:22.699184 2746 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="1f4f1b2b-b677-4d4c-89b8-e7a095a1db67" containerName="clean-cilium-state"
Jan 13 20:25:22.701083 kubelet[2746]: E0113 20:25:22.699224 2746 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="1f4f1b2b-b677-4d4c-89b8-e7a095a1db67" containerName="cilium-agent"
Jan 13 20:25:22.701083 kubelet[2746]: E0113 20:25:22.699231 2746 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="1f4f1b2b-b677-4d4c-89b8-e7a095a1db67" containerName="mount-cgroup"
Jan 13 20:25:22.701083 kubelet[2746]: E0113 20:25:22.699237 2746 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="1f4f1b2b-b677-4d4c-89b8-e7a095a1db67" containerName="apply-sysctl-overwrites"
Jan 13 20:25:22.701083 kubelet[2746]: E0113 20:25:22.699243 2746 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="1f4f1b2b-b677-4d4c-89b8-e7a095a1db67" containerName="mount-bpf-fs"
Jan 13 20:25:22.701083 kubelet[2746]: E0113 20:25:22.699248 2746 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="35b59716-ba8a-47a0-ae95-e79ffe08df12" containerName="cilium-operator"
Jan 13 20:25:22.701083 kubelet[2746]: I0113 20:25:22.699310 2746 memory_manager.go:354] "RemoveStaleState removing state" podUID="1f4f1b2b-b677-4d4c-89b8-e7a095a1db67" containerName="cilium-agent"
Jan 13 20:25:22.701083 kubelet[2746]: I0113 20:25:22.699320 2746 memory_manager.go:354] "RemoveStaleState removing state" podUID="35b59716-ba8a-47a0-ae95-e79ffe08df12" containerName="cilium-operator"
Jan 13 20:25:22.715568 systemd[1]: Created slice kubepods-burstable-poddfbb4c01_53b0_4c5d_9fad_b63e7983bc3f.slice - libcontainer container kubepods-burstable-poddfbb4c01_53b0_4c5d_9fad_b63e7983bc3f.slice.
Jan 13 20:25:22.733580 kubelet[2746]: I0113 20:25:22.732991 2746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/dfbb4c01-53b0-4c5d-9fad-b63e7983bc3f-cilium-config-path\") pod \"cilium-mqjjm\" (UID: \"dfbb4c01-53b0-4c5d-9fad-b63e7983bc3f\") " pod="kube-system/cilium-mqjjm"
Jan 13 20:25:22.733580 kubelet[2746]: I0113 20:25:22.733049 2746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/dfbb4c01-53b0-4c5d-9fad-b63e7983bc3f-host-proc-sys-kernel\") pod \"cilium-mqjjm\" (UID: \"dfbb4c01-53b0-4c5d-9fad-b63e7983bc3f\") " pod="kube-system/cilium-mqjjm"
Jan 13 20:25:22.733580 kubelet[2746]: I0113 20:25:22.733075 2746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/dfbb4c01-53b0-4c5d-9fad-b63e7983bc3f-bpf-maps\") pod \"cilium-mqjjm\" (UID: \"dfbb4c01-53b0-4c5d-9fad-b63e7983bc3f\") " pod="kube-system/cilium-mqjjm"
Jan 13 20:25:22.733580 kubelet[2746]: I0113 20:25:22.733099 2746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/dfbb4c01-53b0-4c5d-9fad-b63e7983bc3f-cni-path\") pod \"cilium-mqjjm\" (UID: \"dfbb4c01-53b0-4c5d-9fad-b63e7983bc3f\") " pod="kube-system/cilium-mqjjm"
Jan 13 20:25:22.733580 kubelet[2746]: I0113 20:25:22.733118 2746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7lkfn\" (UniqueName: \"kubernetes.io/projected/dfbb4c01-53b0-4c5d-9fad-b63e7983bc3f-kube-api-access-7lkfn\") pod \"cilium-mqjjm\" (UID: \"dfbb4c01-53b0-4c5d-9fad-b63e7983bc3f\") " pod="kube-system/cilium-mqjjm"
Jan 13 20:25:22.733580 kubelet[2746]: I0113 20:25:22.733139 2746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/dfbb4c01-53b0-4c5d-9fad-b63e7983bc3f-cilium-run\") pod \"cilium-mqjjm\" (UID: \"dfbb4c01-53b0-4c5d-9fad-b63e7983bc3f\") " pod="kube-system/cilium-mqjjm"
Jan 13 20:25:22.733882 kubelet[2746]: I0113 20:25:22.733156 2746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/dfbb4c01-53b0-4c5d-9fad-b63e7983bc3f-etc-cni-netd\") pod \"cilium-mqjjm\" (UID: \"dfbb4c01-53b0-4c5d-9fad-b63e7983bc3f\") " pod="kube-system/cilium-mqjjm"
Jan 13 20:25:22.733882 kubelet[2746]: I0113 20:25:22.733176 2746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/dfbb4c01-53b0-4c5d-9fad-b63e7983bc3f-xtables-lock\") pod \"cilium-mqjjm\" (UID: \"dfbb4c01-53b0-4c5d-9fad-b63e7983bc3f\") " pod="kube-system/cilium-mqjjm"
Jan 13 20:25:22.733882 kubelet[2746]: I0113 20:25:22.733195 2746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/dfbb4c01-53b0-4c5d-9fad-b63e7983bc3f-clustermesh-secrets\") pod \"cilium-mqjjm\" (UID: \"dfbb4c01-53b0-4c5d-9fad-b63e7983bc3f\") " pod="kube-system/cilium-mqjjm"
Jan 13 20:25:22.733882 kubelet[2746]: I0113 20:25:22.733216 2746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/dfbb4c01-53b0-4c5d-9fad-b63e7983bc3f-cilium-cgroup\") pod \"cilium-mqjjm\" (UID: \"dfbb4c01-53b0-4c5d-9fad-b63e7983bc3f\") " pod="kube-system/cilium-mqjjm"
Jan 13 20:25:22.733882 kubelet[2746]: I0113 20:25:22.733265 2746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/dfbb4c01-53b0-4c5d-9fad-b63e7983bc3f-cilium-ipsec-secrets\") pod \"cilium-mqjjm\" (UID: \"dfbb4c01-53b0-4c5d-9fad-b63e7983bc3f\") " pod="kube-system/cilium-mqjjm"
Jan 13 20:25:22.733882 kubelet[2746]: I0113 20:25:22.733307 2746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/dfbb4c01-53b0-4c5d-9fad-b63e7983bc3f-lib-modules\") pod \"cilium-mqjjm\" (UID: \"dfbb4c01-53b0-4c5d-9fad-b63e7983bc3f\") " pod="kube-system/cilium-mqjjm"
Jan 13 20:25:22.734060 kubelet[2746]: I0113 20:25:22.733330 2746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/dfbb4c01-53b0-4c5d-9fad-b63e7983bc3f-hostproc\") pod \"cilium-mqjjm\" (UID: \"dfbb4c01-53b0-4c5d-9fad-b63e7983bc3f\") " pod="kube-system/cilium-mqjjm"
Jan 13 20:25:22.734060 kubelet[2746]: I0113 20:25:22.733350 2746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/dfbb4c01-53b0-4c5d-9fad-b63e7983bc3f-host-proc-sys-net\") pod \"cilium-mqjjm\" (UID: \"dfbb4c01-53b0-4c5d-9fad-b63e7983bc3f\") " pod="kube-system/cilium-mqjjm"
Jan 13 20:25:22.734060 kubelet[2746]: I0113 20:25:22.733370 2746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/dfbb4c01-53b0-4c5d-9fad-b63e7983bc3f-hubble-tls\") pod \"cilium-mqjjm\" (UID: \"dfbb4c01-53b0-4c5d-9fad-b63e7983bc3f\") " pod="kube-system/cilium-mqjjm"
Jan 13 20:25:22.863998 sshd[4522]: Connection closed by 139.178.89.65 port 51066
Jan 13 20:25:22.865170 sshd-session[4520]: pam_unix(sshd:session): session closed for user core
Jan 13 20:25:22.872166 systemd[1]: sshd@24-138.199.153.210:22-139.178.89.65:51066.service: Deactivated successfully.
Jan 13 20:25:22.875086 systemd[1]: session-22.scope: Deactivated successfully.
Jan 13 20:25:22.875480 systemd[1]: session-22.scope: Consumed 1.522s CPU time.
Jan 13 20:25:22.876356 systemd-logind[1466]: Session 22 logged out. Waiting for processes to exit.
Jan 13 20:25:22.879234 systemd-logind[1466]: Removed session 22.
Jan 13 20:25:23.023967 containerd[1486]: time="2025-01-13T20:25:23.023406712Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-mqjjm,Uid:dfbb4c01-53b0-4c5d-9fad-b63e7983bc3f,Namespace:kube-system,Attempt:0,}"
Jan 13 20:25:23.038645 systemd[1]: Started sshd@25-138.199.153.210:22-139.178.89.65:35456.service - OpenSSH per-connection server daemon (139.178.89.65:35456).
Jan 13 20:25:23.046086 kubelet[2746]: E0113 20:25:23.044958 2746 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jan 13 20:25:23.059092 containerd[1486]: time="2025-01-13T20:25:23.058986945Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 13 20:25:23.060146 containerd[1486]: time="2025-01-13T20:25:23.059950888Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 13 20:25:23.060146 containerd[1486]: time="2025-01-13T20:25:23.059974448Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 20:25:23.060146 containerd[1486]: time="2025-01-13T20:25:23.060074646Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 20:25:23.084514 systemd[1]: Started cri-containerd-b1f72c9d0308de4607e70118d2348b6cc9cc4865748199b999ddb877a71c3dc6.scope - libcontainer container b1f72c9d0308de4607e70118d2348b6cc9cc4865748199b999ddb877a71c3dc6.
Jan 13 20:25:23.118246 containerd[1486]: time="2025-01-13T20:25:23.118197614Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-mqjjm,Uid:dfbb4c01-53b0-4c5d-9fad-b63e7983bc3f,Namespace:kube-system,Attempt:0,} returns sandbox id \"b1f72c9d0308de4607e70118d2348b6cc9cc4865748199b999ddb877a71c3dc6\""
Jan 13 20:25:23.127070 containerd[1486]: time="2025-01-13T20:25:23.127001583Z" level=info msg="CreateContainer within sandbox \"b1f72c9d0308de4607e70118d2348b6cc9cc4865748199b999ddb877a71c3dc6\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Jan 13 20:25:23.144460 containerd[1486]: time="2025-01-13T20:25:23.144250489Z" level=info msg="CreateContainer within sandbox \"b1f72c9d0308de4607e70118d2348b6cc9cc4865748199b999ddb877a71c3dc6\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"e974deb45ba5b52931792f00ae91db058a6c3d2194d2c1af6aaec024426fffca\""
Jan 13 20:25:23.145350 containerd[1486]: time="2025-01-13T20:25:23.145127874Z" level=info msg="StartContainer for \"e974deb45ba5b52931792f00ae91db058a6c3d2194d2c1af6aaec024426fffca\""
Jan 13 20:25:23.178550 systemd[1]: Started cri-containerd-e974deb45ba5b52931792f00ae91db058a6c3d2194d2c1af6aaec024426fffca.scope - libcontainer container e974deb45ba5b52931792f00ae91db058a6c3d2194d2c1af6aaec024426fffca.
Jan 13 20:25:23.216035 containerd[1486]: time="2025-01-13T20:25:23.215979624Z" level=info msg="StartContainer for \"e974deb45ba5b52931792f00ae91db058a6c3d2194d2c1af6aaec024426fffca\" returns successfully"
Jan 13 20:25:23.230156 systemd[1]: cri-containerd-e974deb45ba5b52931792f00ae91db058a6c3d2194d2c1af6aaec024426fffca.scope: Deactivated successfully.
Jan 13 20:25:23.269234 containerd[1486]: time="2025-01-13T20:25:23.269113436Z" level=info msg="shim disconnected" id=e974deb45ba5b52931792f00ae91db058a6c3d2194d2c1af6aaec024426fffca namespace=k8s.io
Jan 13 20:25:23.269553 containerd[1486]: time="2025-01-13T20:25:23.269309273Z" level=warning msg="cleaning up after shim disconnected" id=e974deb45ba5b52931792f00ae91db058a6c3d2194d2c1af6aaec024426fffca namespace=k8s.io
Jan 13 20:25:23.269553 containerd[1486]: time="2025-01-13T20:25:23.269323713Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 13 20:25:23.838332 containerd[1486]: time="2025-01-13T20:25:23.838240838Z" level=info msg="CreateContainer within sandbox \"b1f72c9d0308de4607e70118d2348b6cc9cc4865748199b999ddb877a71c3dc6\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Jan 13 20:25:23.872727 containerd[1486]: time="2025-01-13T20:25:23.872645290Z" level=info msg="CreateContainer within sandbox \"b1f72c9d0308de4607e70118d2348b6cc9cc4865748199b999ddb877a71c3dc6\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"39d3a6463ffec6ef534d1632b5527164d99a928bdbe4fa72adef1773790cc00b\""
Jan 13 20:25:23.874984 containerd[1486]: time="2025-01-13T20:25:23.873770111Z" level=info msg="StartContainer for \"39d3a6463ffec6ef534d1632b5527164d99a928bdbe4fa72adef1773790cc00b\""
Jan 13 20:25:23.904594 systemd[1]: run-containerd-runc-k8s.io-39d3a6463ffec6ef534d1632b5527164d99a928bdbe4fa72adef1773790cc00b-runc.M18rAx.mount: Deactivated successfully.
Jan 13 20:25:23.914077 systemd[1]: Started cri-containerd-39d3a6463ffec6ef534d1632b5527164d99a928bdbe4fa72adef1773790cc00b.scope - libcontainer container 39d3a6463ffec6ef534d1632b5527164d99a928bdbe4fa72adef1773790cc00b.
Jan 13 20:25:23.941767 containerd[1486]: time="2025-01-13T20:25:23.941722311Z" level=info msg="StartContainer for \"39d3a6463ffec6ef534d1632b5527164d99a928bdbe4fa72adef1773790cc00b\" returns successfully"
Jan 13 20:25:23.949067 systemd[1]: cri-containerd-39d3a6463ffec6ef534d1632b5527164d99a928bdbe4fa72adef1773790cc00b.scope: Deactivated successfully.
Jan 13 20:25:23.975572 containerd[1486]: time="2025-01-13T20:25:23.975311617Z" level=info msg="shim disconnected" id=39d3a6463ffec6ef534d1632b5527164d99a928bdbe4fa72adef1773790cc00b namespace=k8s.io
Jan 13 20:25:23.975572 containerd[1486]: time="2025-01-13T20:25:23.975381536Z" level=warning msg="cleaning up after shim disconnected" id=39d3a6463ffec6ef534d1632b5527164d99a928bdbe4fa72adef1773790cc00b namespace=k8s.io
Jan 13 20:25:23.975572 containerd[1486]: time="2025-01-13T20:25:23.975393336Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 13 20:25:24.063977 sshd[4535]: Accepted publickey for core from 139.178.89.65 port 35456 ssh2: RSA SHA256:mP9Np05W8ayjbouGSSYjPkGP0Fk3DK/yb5iC6Sb3lHc
Jan 13 20:25:24.064622 sshd-session[4535]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:25:24.068824 systemd-logind[1466]: New session 23 of user core.
Jan 13 20:25:24.084794 systemd[1]: Started session-23.scope - Session 23 of User core.
Jan 13 20:25:24.665945 kubelet[2746]: I0113 20:25:24.664250 2746 setters.go:600] "Node became not ready" node="ci-4186-1-0-7-7ab547e2a5" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-01-13T20:25:24Z","lastTransitionTime":"2025-01-13T20:25:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Jan 13 20:25:24.750287 sshd[4708]: Connection closed by 139.178.89.65 port 35456
Jan 13 20:25:24.751209 sshd-session[4535]: pam_unix(sshd:session): session closed for user core
Jan 13 20:25:24.755486 systemd[1]: sshd@25-138.199.153.210:22-139.178.89.65:35456.service: Deactivated successfully.
Jan 13 20:25:24.758173 systemd[1]: session-23.scope: Deactivated successfully.
Jan 13 20:25:24.759768 systemd-logind[1466]: Session 23 logged out. Waiting for processes to exit.
Jan 13 20:25:24.760945 systemd-logind[1466]: Removed session 23.
Jan 13 20:25:24.845732 containerd[1486]: time="2025-01-13T20:25:24.845684209Z" level=info msg="CreateContainer within sandbox \"b1f72c9d0308de4607e70118d2348b6cc9cc4865748199b999ddb877a71c3dc6\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Jan 13 20:25:24.847650 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-39d3a6463ffec6ef534d1632b5527164d99a928bdbe4fa72adef1773790cc00b-rootfs.mount: Deactivated successfully.
Jan 13 20:25:24.876672 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount818448416.mount: Deactivated successfully.
Jan 13 20:25:24.878647 containerd[1486]: time="2025-01-13T20:25:24.878564700Z" level=info msg="CreateContainer within sandbox \"b1f72c9d0308de4607e70118d2348b6cc9cc4865748199b999ddb877a71c3dc6\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"1a1eeddd93a5e1c2cf065f1fb9a3d43ae9428d08ef829a102c7fd3828d4acac7\""
Jan 13 20:25:24.880027 containerd[1486]: time="2025-01-13T20:25:24.879537404Z" level=info msg="StartContainer for \"1a1eeddd93a5e1c2cf065f1fb9a3d43ae9428d08ef829a102c7fd3828d4acac7\""
Jan 13 20:25:24.927486 systemd[1]: Started cri-containerd-1a1eeddd93a5e1c2cf065f1fb9a3d43ae9428d08ef829a102c7fd3828d4acac7.scope - libcontainer container 1a1eeddd93a5e1c2cf065f1fb9a3d43ae9428d08ef829a102c7fd3828d4acac7.
Jan 13 20:25:24.930460 systemd[1]: Started sshd@26-138.199.153.210:22-139.178.89.65:35470.service - OpenSSH per-connection server daemon (139.178.89.65:35470).
Jan 13 20:25:24.969805 containerd[1486]: time="2025-01-13T20:25:24.969678541Z" level=info msg="StartContainer for \"1a1eeddd93a5e1c2cf065f1fb9a3d43ae9428d08ef829a102c7fd3828d4acac7\" returns successfully"
Jan 13 20:25:24.974401 systemd[1]: cri-containerd-1a1eeddd93a5e1c2cf065f1fb9a3d43ae9428d08ef829a102c7fd3828d4acac7.scope: Deactivated successfully.
Jan 13 20:25:24.999947 containerd[1486]: time="2025-01-13T20:25:24.999735639Z" level=info msg="shim disconnected" id=1a1eeddd93a5e1c2cf065f1fb9a3d43ae9428d08ef829a102c7fd3828d4acac7 namespace=k8s.io
Jan 13 20:25:24.999947 containerd[1486]: time="2025-01-13T20:25:24.999832518Z" level=warning msg="cleaning up after shim disconnected" id=1a1eeddd93a5e1c2cf065f1fb9a3d43ae9428d08ef829a102c7fd3828d4acac7 namespace=k8s.io
Jan 13 20:25:24.999947 containerd[1486]: time="2025-01-13T20:25:24.999863477Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 13 20:25:25.849927 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1a1eeddd93a5e1c2cf065f1fb9a3d43ae9428d08ef829a102c7fd3828d4acac7-rootfs.mount: Deactivated successfully.
Jan 13 20:25:25.856581 containerd[1486]: time="2025-01-13T20:25:25.856165010Z" level=info msg="CreateContainer within sandbox \"b1f72c9d0308de4607e70118d2348b6cc9cc4865748199b999ddb877a71c3dc6\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jan 13 20:25:25.878588 containerd[1486]: time="2025-01-13T20:25:25.878535646Z" level=info msg="CreateContainer within sandbox \"b1f72c9d0308de4607e70118d2348b6cc9cc4865748199b999ddb877a71c3dc6\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"91090a96b0fc86e38d0000a4c537aad749f8d154eb52ebb1b16ed1d7c793efa2\""
Jan 13 20:25:25.882298 containerd[1486]: time="2025-01-13T20:25:25.880789529Z" level=info msg="StartContainer for \"91090a96b0fc86e38d0000a4c537aad749f8d154eb52ebb1b16ed1d7c793efa2\""
Jan 13 20:25:25.914459 systemd[1]: Started cri-containerd-91090a96b0fc86e38d0000a4c537aad749f8d154eb52ebb1b16ed1d7c793efa2.scope - libcontainer container 91090a96b0fc86e38d0000a4c537aad749f8d154eb52ebb1b16ed1d7c793efa2.
Jan 13 20:25:25.928942 sshd[4730]: Accepted publickey for core from 139.178.89.65 port 35470 ssh2: RSA SHA256:mP9Np05W8ayjbouGSSYjPkGP0Fk3DK/yb5iC6Sb3lHc
Jan 13 20:25:25.930174 sshd-session[4730]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:25:25.939484 systemd-logind[1466]: New session 24 of user core.
Jan 13 20:25:25.944526 systemd[1]: Started session-24.scope - Session 24 of User core.
Jan 13 20:25:25.944754 systemd[1]: cri-containerd-91090a96b0fc86e38d0000a4c537aad749f8d154eb52ebb1b16ed1d7c793efa2.scope: Deactivated successfully.
Jan 13 20:25:25.950979 containerd[1486]: time="2025-01-13T20:25:25.950920427Z" level=info msg="StartContainer for \"91090a96b0fc86e38d0000a4c537aad749f8d154eb52ebb1b16ed1d7c793efa2\" returns successfully"
Jan 13 20:25:25.953362 containerd[1486]: time="2025-01-13T20:25:25.952170646Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddfbb4c01_53b0_4c5d_9fad_b63e7983bc3f.slice/cri-containerd-91090a96b0fc86e38d0000a4c537aad749f8d154eb52ebb1b16ed1d7c793efa2.scope/memory.events\": no such file or directory"
Jan 13 20:25:25.972990 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-91090a96b0fc86e38d0000a4c537aad749f8d154eb52ebb1b16ed1d7c793efa2-rootfs.mount: Deactivated successfully.
Jan 13 20:25:25.981241 containerd[1486]: time="2025-01-13T20:25:25.981123575Z" level=info msg="shim disconnected" id=91090a96b0fc86e38d0000a4c537aad749f8d154eb52ebb1b16ed1d7c793efa2 namespace=k8s.io
Jan 13 20:25:25.981241 containerd[1486]: time="2025-01-13T20:25:25.981321771Z" level=warning msg="cleaning up after shim disconnected" id=91090a96b0fc86e38d0000a4c537aad749f8d154eb52ebb1b16ed1d7c793efa2 namespace=k8s.io
Jan 13 20:25:25.981241 containerd[1486]: time="2025-01-13T20:25:25.981342971Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 13 20:25:25.993533 containerd[1486]: time="2025-01-13T20:25:25.993487413Z" level=warning msg="cleanup warnings time=\"2025-01-13T20:25:25Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Jan 13 20:25:26.859763 containerd[1486]: time="2025-01-13T20:25:26.859585722Z" level=info msg="CreateContainer within sandbox \"b1f72c9d0308de4607e70118d2348b6cc9cc4865748199b999ddb877a71c3dc6\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jan 13 20:25:26.891442 containerd[1486]: time="2025-01-13T20:25:26.890975703Z" level=info msg="CreateContainer within sandbox \"b1f72c9d0308de4607e70118d2348b6cc9cc4865748199b999ddb877a71c3dc6\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"24cb8f4bcaf9086147474c3b2a97801a9756dbb44b40536c80589fa3e2446fc1\""
Jan 13 20:25:26.893288 containerd[1486]: time="2025-01-13T20:25:26.892994871Z" level=info msg="StartContainer for \"24cb8f4bcaf9086147474c3b2a97801a9756dbb44b40536c80589fa3e2446fc1\""
Jan 13 20:25:26.928991 systemd[1]: run-containerd-runc-k8s.io-24cb8f4bcaf9086147474c3b2a97801a9756dbb44b40536c80589fa3e2446fc1-runc.gQCk89.mount: Deactivated successfully.
Jan 13 20:25:26.937480 systemd[1]: Started cri-containerd-24cb8f4bcaf9086147474c3b2a97801a9756dbb44b40536c80589fa3e2446fc1.scope - libcontainer container 24cb8f4bcaf9086147474c3b2a97801a9756dbb44b40536c80589fa3e2446fc1.
Jan 13 20:25:26.971297 containerd[1486]: time="2025-01-13T20:25:26.971039710Z" level=info msg="StartContainer for \"24cb8f4bcaf9086147474c3b2a97801a9756dbb44b40536c80589fa3e2446fc1\" returns successfully"
Jan 13 20:25:27.317366 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce))
Jan 13 20:25:27.883795 containerd[1486]: time="2025-01-13T20:25:27.883730182Z" level=info msg="StopPodSandbox for \"2d1b39c5e338e04ba95d74b82bf64615a1b2217cec3f94cae9c3f22f4d59c7c3\""
Jan 13 20:25:27.885060 containerd[1486]: time="2025-01-13T20:25:27.883830741Z" level=info msg="TearDown network for sandbox \"2d1b39c5e338e04ba95d74b82bf64615a1b2217cec3f94cae9c3f22f4d59c7c3\" successfully"
Jan 13 20:25:27.885060 containerd[1486]: time="2025-01-13T20:25:27.883841981Z" level=info msg="StopPodSandbox for \"2d1b39c5e338e04ba95d74b82bf64615a1b2217cec3f94cae9c3f22f4d59c7c3\" returns successfully"
Jan 13 20:25:27.885395 containerd[1486]: time="2025-01-13T20:25:27.885344877Z" level=info msg="RemovePodSandbox for \"2d1b39c5e338e04ba95d74b82bf64615a1b2217cec3f94cae9c3f22f4d59c7c3\""
Jan 13 20:25:27.885455 containerd[1486]: time="2025-01-13T20:25:27.885400996Z" level=info msg="Forcibly stopping sandbox \"2d1b39c5e338e04ba95d74b82bf64615a1b2217cec3f94cae9c3f22f4d59c7c3\""
Jan 13 20:25:27.886276 containerd[1486]: time="2025-01-13T20:25:27.885474475Z" level=info msg="TearDown network for sandbox \"2d1b39c5e338e04ba95d74b82bf64615a1b2217cec3f94cae9c3f22f4d59c7c3\" successfully"
Jan 13 20:25:27.889070 containerd[1486]: time="2025-01-13T20:25:27.889004941Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2d1b39c5e338e04ba95d74b82bf64615a1b2217cec3f94cae9c3f22f4d59c7c3\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 13 20:25:27.889070 containerd[1486]: time="2025-01-13T20:25:27.889074180Z" level=info msg="RemovePodSandbox \"2d1b39c5e338e04ba95d74b82bf64615a1b2217cec3f94cae9c3f22f4d59c7c3\" returns successfully"
Jan 13 20:25:27.889740 containerd[1486]: time="2025-01-13T20:25:27.889709730Z" level=info msg="StopPodSandbox for \"592fa21196c99d5bc6827e728b061c7e57fd6daf6fa0ba75e81ecb4ace27b143\""
Jan 13 20:25:27.889806 containerd[1486]: time="2025-01-13T20:25:27.889791968Z" level=info msg="TearDown network for sandbox \"592fa21196c99d5bc6827e728b061c7e57fd6daf6fa0ba75e81ecb4ace27b143\" successfully"
Jan 13 20:25:27.889806 containerd[1486]: time="2025-01-13T20:25:27.889803448Z" level=info msg="StopPodSandbox for \"592fa21196c99d5bc6827e728b061c7e57fd6daf6fa0ba75e81ecb4ace27b143\" returns successfully"
Jan 13 20:25:27.890491 containerd[1486]: time="2025-01-13T20:25:27.890445038Z" level=info msg="RemovePodSandbox for \"592fa21196c99d5bc6827e728b061c7e57fd6daf6fa0ba75e81ecb4ace27b143\""
Jan 13 20:25:27.890491 containerd[1486]: time="2025-01-13T20:25:27.890474558Z" level=info msg="Forcibly stopping sandbox \"592fa21196c99d5bc6827e728b061c7e57fd6daf6fa0ba75e81ecb4ace27b143\""
Jan 13 20:25:27.890590 containerd[1486]: time="2025-01-13T20:25:27.890521157Z" level=info msg="TearDown network for sandbox \"592fa21196c99d5bc6827e728b061c7e57fd6daf6fa0ba75e81ecb4ace27b143\" successfully"
Jan 13 20:25:27.895092 containerd[1486]: time="2025-01-13T20:25:27.895023847Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"592fa21196c99d5bc6827e728b061c7e57fd6daf6fa0ba75e81ecb4ace27b143\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 13 20:25:27.895399 containerd[1486]: time="2025-01-13T20:25:27.895104006Z" level=info msg="RemovePodSandbox \"592fa21196c99d5bc6827e728b061c7e57fd6daf6fa0ba75e81ecb4ace27b143\" returns successfully"
Jan 13 20:25:28.638754 systemd[1]: run-containerd-runc-k8s.io-24cb8f4bcaf9086147474c3b2a97801a9756dbb44b40536c80589fa3e2446fc1-runc.w3klJZ.mount: Deactivated successfully.
Jan 13 20:25:30.316079 systemd-networkd[1374]: lxc_health: Link UP
Jan 13 20:25:30.360720 systemd-networkd[1374]: lxc_health: Gained carrier
Jan 13 20:25:31.064153 kubelet[2746]: I0113 20:25:31.063168 2746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-mqjjm" podStartSLOduration=9.063150015 podStartE2EDuration="9.063150015s" podCreationTimestamp="2025-01-13 20:25:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 20:25:27.899542177 +0000 UTC m=+360.173627323" watchObservedRunningTime="2025-01-13 20:25:31.063150015 +0000 UTC m=+363.337235121"
Jan 13 20:25:32.088613 systemd-networkd[1374]: lxc_health: Gained IPv6LL
Jan 13 20:25:35.134636 systemd[1]: run-containerd-runc-k8s.io-24cb8f4bcaf9086147474c3b2a97801a9756dbb44b40536c80589fa3e2446fc1-runc.4u3hcc.mount: Deactivated successfully.
Jan 13 20:25:37.275802 systemd[1]: run-containerd-runc-k8s.io-24cb8f4bcaf9086147474c3b2a97801a9756dbb44b40536c80589fa3e2446fc1-runc.dW4tBb.mount: Deactivated successfully.
Jan 13 20:25:37.483455 sshd[4800]: Connection closed by 139.178.89.65 port 35470
Jan 13 20:25:37.484538 sshd-session[4730]: pam_unix(sshd:session): session closed for user core
Jan 13 20:25:37.488539 systemd[1]: sshd@26-138.199.153.210:22-139.178.89.65:35470.service: Deactivated successfully.
Jan 13 20:25:37.491715 systemd[1]: session-24.scope: Deactivated successfully.
Jan 13 20:25:37.494970 systemd-logind[1466]: Session 24 logged out. Waiting for processes to exit.
Jan 13 20:25:37.496121 systemd-logind[1466]: Removed session 24.