Jan 13 20:16:48.895713 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Jan 13 20:16:48.895738 kernel: Linux version 6.6.71-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p1) 13.3.1 20240614, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT Mon Jan 13 18:57:23 -00 2025
Jan 13 20:16:48.895749 kernel: KASLR enabled
Jan 13 20:16:48.895755 kernel: efi: EFI v2.7 by EDK II
Jan 13 20:16:48.895760 kernel: efi: SMBIOS 3.0=0x135ed0000 MEMATTR=0x133d4f698 ACPI 2.0=0x132430018 RNG=0x13243e918 MEMRESERVE=0x132303d98
Jan 13 20:16:48.895766 kernel: random: crng init done
Jan 13 20:16:48.895773 kernel: secureboot: Secure boot disabled
Jan 13 20:16:48.895779 kernel: ACPI: Early table checksum verification disabled
Jan 13 20:16:48.895785 kernel: ACPI: RSDP 0x0000000132430018 000024 (v02 BOCHS )
Jan 13 20:16:48.895791 kernel: ACPI: XSDT 0x000000013243FE98 00006C (v01 BOCHS BXPC 00000001 01000013)
Jan 13 20:16:48.895799 kernel: ACPI: FACP 0x000000013243FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 20:16:48.895804 kernel: ACPI: DSDT 0x0000000132437518 001468 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 20:16:48.895810 kernel: ACPI: APIC 0x000000013243FC18 000108 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 20:16:48.895816 kernel: ACPI: PPTT 0x000000013243FD98 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 20:16:48.895823 kernel: ACPI: GTDT 0x000000013243D898 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 20:16:48.895831 kernel: ACPI: MCFG 0x000000013243FF98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 20:16:48.895838 kernel: ACPI: SPCR 0x000000013243E818 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 20:16:48.895844 kernel: ACPI: DBG2 0x000000013243E898 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 20:16:48.895850 kernel: ACPI: IORT 0x000000013243E418 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 20:16:48.895856 kernel: ACPI: BGRT 0x000000013243E798 000038 (v01 INTEL EDK2 00000002 01000013)
Jan 13 20:16:48.895863 kernel: ACPI: SPCR: console: pl011,mmio32,0x9000000,9600
Jan 13 20:16:48.895869 kernel: NUMA: Failed to initialise from firmware
Jan 13 20:16:48.895875 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x0000000139ffffff]
Jan 13 20:16:48.895881 kernel: NUMA: NODE_DATA [mem 0x13981f800-0x139824fff]
Jan 13 20:16:48.895887 kernel: Zone ranges:
Jan 13 20:16:48.895894 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff]
Jan 13 20:16:48.895901 kernel: DMA32 empty
Jan 13 20:16:48.895907 kernel: Normal [mem 0x0000000100000000-0x0000000139ffffff]
Jan 13 20:16:48.895913 kernel: Movable zone start for each node
Jan 13 20:16:48.895920 kernel: Early memory node ranges
Jan 13 20:16:48.895926 kernel: node 0: [mem 0x0000000040000000-0x000000013243ffff]
Jan 13 20:16:48.895932 kernel: node 0: [mem 0x0000000132440000-0x000000013272ffff]
Jan 13 20:16:48.895939 kernel: node 0: [mem 0x0000000132730000-0x0000000135bfffff]
Jan 13 20:16:48.895945 kernel: node 0: [mem 0x0000000135c00000-0x0000000135fdffff]
Jan 13 20:16:48.895951 kernel: node 0: [mem 0x0000000135fe0000-0x0000000139ffffff]
Jan 13 20:16:48.895957 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x0000000139ffffff]
Jan 13 20:16:48.895963 kernel: On node 0, zone Normal: 24576 pages in unavailable ranges
Jan 13 20:16:48.895971 kernel: psci: probing for conduit method from ACPI.
Jan 13 20:16:48.895977 kernel: psci: PSCIv1.1 detected in firmware.
Jan 13 20:16:48.895983 kernel: psci: Using standard PSCI v0.2 function IDs
Jan 13 20:16:48.895992 kernel: psci: Trusted OS migration not required
Jan 13 20:16:48.895999 kernel: psci: SMC Calling Convention v1.1
Jan 13 20:16:48.896006 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Jan 13 20:16:48.896014 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Jan 13 20:16:48.896020 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Jan 13 20:16:48.896027 kernel: pcpu-alloc: [0] 0 [0] 1
Jan 13 20:16:48.896033 kernel: Detected PIPT I-cache on CPU0
Jan 13 20:16:48.896040 kernel: CPU features: detected: GIC system register CPU interface
Jan 13 20:16:48.896046 kernel: CPU features: detected: Hardware dirty bit management
Jan 13 20:16:48.896053 kernel: CPU features: detected: Spectre-v4
Jan 13 20:16:48.896060 kernel: CPU features: detected: Spectre-BHB
Jan 13 20:16:48.896066 kernel: CPU features: kernel page table isolation forced ON by KASLR
Jan 13 20:16:48.896073 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Jan 13 20:16:48.896080 kernel: CPU features: detected: ARM erratum 1418040
Jan 13 20:16:48.896088 kernel: CPU features: detected: SSBS not fully self-synchronizing
Jan 13 20:16:48.896094 kernel: alternatives: applying boot alternatives
Jan 13 20:16:48.896102 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=6ba5f90349644346e4f5fa9305ab5a05339928ee9f4f137665e797727c1fc436
Jan 13 20:16:48.896109 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jan 13 20:16:48.896115 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 13 20:16:48.896122 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 13 20:16:48.896129 kernel: Fallback order for Node 0: 0
Jan 13 20:16:48.896135 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1008000
Jan 13 20:16:48.896177 kernel: Policy zone: Normal
Jan 13 20:16:48.896185 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 13 20:16:48.896191 kernel: software IO TLB: area num 2.
Jan 13 20:16:48.896201 kernel: software IO TLB: mapped [mem 0x00000000fbfff000-0x00000000fffff000] (64MB)
Jan 13 20:16:48.897263 kernel: Memory: 3881336K/4096000K available (10240K kernel code, 2184K rwdata, 8096K rodata, 39680K init, 897K bss, 214664K reserved, 0K cma-reserved)
Jan 13 20:16:48.897273 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jan 13 20:16:48.897280 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 13 20:16:48.897288 kernel: rcu: RCU event tracing is enabled.
Jan 13 20:16:48.897295 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jan 13 20:16:48.897302 kernel: Trampoline variant of Tasks RCU enabled.
Jan 13 20:16:48.897309 kernel: Tracing variant of Tasks RCU enabled.
Jan 13 20:16:48.897316 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 13 20:16:48.897322 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jan 13 20:16:48.897329 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Jan 13 20:16:48.897344 kernel: GICv3: 256 SPIs implemented
Jan 13 20:16:48.897352 kernel: GICv3: 0 Extended SPIs implemented
Jan 13 20:16:48.897359 kernel: Root IRQ handler: gic_handle_irq
Jan 13 20:16:48.897366 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Jan 13 20:16:48.897372 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Jan 13 20:16:48.897379 kernel: ITS [mem 0x08080000-0x0809ffff]
Jan 13 20:16:48.897386 kernel: ITS@0x0000000008080000: allocated 8192 Devices @1000c0000 (indirect, esz 8, psz 64K, shr 1)
Jan 13 20:16:48.897393 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @1000d0000 (flat, esz 8, psz 64K, shr 1)
Jan 13 20:16:48.897399 kernel: GICv3: using LPI property table @0x00000001000e0000
Jan 13 20:16:48.897406 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000001000f0000
Jan 13 20:16:48.897413 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 13 20:16:48.897421 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jan 13 20:16:48.897428 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Jan 13 20:16:48.897435 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Jan 13 20:16:48.897442 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Jan 13 20:16:48.897448 kernel: Console: colour dummy device 80x25
Jan 13 20:16:48.897456 kernel: ACPI: Core revision 20230628
Jan 13 20:16:48.897463 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Jan 13 20:16:48.897470 kernel: pid_max: default: 32768 minimum: 301
Jan 13 20:16:48.897477 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 13 20:16:48.897483 kernel: landlock: Up and running.
Jan 13 20:16:48.897492 kernel: SELinux: Initializing.
Jan 13 20:16:48.897499 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 13 20:16:48.897506 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 13 20:16:48.897513 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 13 20:16:48.897520 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 13 20:16:48.897527 kernel: rcu: Hierarchical SRCU implementation.
Jan 13 20:16:48.897534 kernel: rcu: Max phase no-delay instances is 400.
Jan 13 20:16:48.897541 kernel: Platform MSI: ITS@0x8080000 domain created
Jan 13 20:16:48.897547 kernel: PCI/MSI: ITS@0x8080000 domain created
Jan 13 20:16:48.897556 kernel: Remapping and enabling EFI services.
Jan 13 20:16:48.897563 kernel: smp: Bringing up secondary CPUs ...
Jan 13 20:16:48.897570 kernel: Detected PIPT I-cache on CPU1
Jan 13 20:16:48.897577 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Jan 13 20:16:48.897584 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000100100000
Jan 13 20:16:48.897591 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jan 13 20:16:48.897598 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Jan 13 20:16:48.897605 kernel: smp: Brought up 1 node, 2 CPUs
Jan 13 20:16:48.897612 kernel: SMP: Total of 2 processors activated.
Jan 13 20:16:48.897619 kernel: CPU features: detected: 32-bit EL0 Support
Jan 13 20:16:48.897627 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Jan 13 20:16:48.897634 kernel: CPU features: detected: Common not Private translations
Jan 13 20:16:48.897646 kernel: CPU features: detected: CRC32 instructions
Jan 13 20:16:48.897655 kernel: CPU features: detected: Enhanced Virtualization Traps
Jan 13 20:16:48.897662 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Jan 13 20:16:48.897670 kernel: CPU features: detected: LSE atomic instructions
Jan 13 20:16:48.897677 kernel: CPU features: detected: Privileged Access Never
Jan 13 20:16:48.897684 kernel: CPU features: detected: RAS Extension Support
Jan 13 20:16:48.897692 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Jan 13 20:16:48.897700 kernel: CPU: All CPU(s) started at EL1
Jan 13 20:16:48.897708 kernel: alternatives: applying system-wide alternatives
Jan 13 20:16:48.897715 kernel: devtmpfs: initialized
Jan 13 20:16:48.897722 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 13 20:16:48.897730 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jan 13 20:16:48.897737 kernel: pinctrl core: initialized pinctrl subsystem
Jan 13 20:16:48.897745 kernel: SMBIOS 3.0.0 present.
Jan 13 20:16:48.897754 kernel: DMI: Hetzner vServer/KVM Virtual Machine, BIOS 20171111 11/11/2017
Jan 13 20:16:48.897762 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 13 20:16:48.897769 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Jan 13 20:16:48.897777 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jan 13 20:16:48.897784 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jan 13 20:16:48.897791 kernel: audit: initializing netlink subsys (disabled)
Jan 13 20:16:48.897799 kernel: audit: type=2000 audit(0.011:1): state=initialized audit_enabled=0 res=1
Jan 13 20:16:48.897806 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 13 20:16:48.897813 kernel: cpuidle: using governor menu
Jan 13 20:16:48.897822 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Jan 13 20:16:48.897829 kernel: ASID allocator initialised with 32768 entries
Jan 13 20:16:48.897836 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 13 20:16:48.897844 kernel: Serial: AMBA PL011 UART driver
Jan 13 20:16:48.897853 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Jan 13 20:16:48.897862 kernel: Modules: 0 pages in range for non-PLT usage
Jan 13 20:16:48.897870 kernel: Modules: 508960 pages in range for PLT usage
Jan 13 20:16:48.897878 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 13 20:16:48.897886 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Jan 13 20:16:48.897895 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Jan 13 20:16:48.897902 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Jan 13 20:16:48.897909 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 13 20:16:48.897917 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Jan 13 20:16:48.897924 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Jan 13 20:16:48.897931 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Jan 13 20:16:48.897938 kernel: ACPI: Added _OSI(Module Device)
Jan 13 20:16:48.897946 kernel: ACPI: Added _OSI(Processor Device)
Jan 13 20:16:48.897953 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jan 13 20:16:48.897962 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 13 20:16:48.897969 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 13 20:16:48.897976 kernel: ACPI: Interpreter enabled
Jan 13 20:16:48.897984 kernel: ACPI: Using GIC for interrupt routing
Jan 13 20:16:48.897991 kernel: ACPI: MCFG table detected, 1 entries
Jan 13 20:16:48.897998 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Jan 13 20:16:48.898005 kernel: printk: console [ttyAMA0] enabled
Jan 13 20:16:48.898013 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 13 20:16:48.898241 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jan 13 20:16:48.898328 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Jan 13 20:16:48.898394 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Jan 13 20:16:48.898455 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Jan 13 20:16:48.898517 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Jan 13 20:16:48.898526 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Jan 13 20:16:48.898534 kernel: PCI host bridge to bus 0000:00
Jan 13 20:16:48.898603 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Jan 13 20:16:48.898662 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Jan 13 20:16:48.898718 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Jan 13 20:16:48.898774 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 13 20:16:48.898853 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Jan 13 20:16:48.898927 kernel: pci 0000:00:01.0: [1af4:1050] type 00 class 0x038000
Jan 13 20:16:48.898992 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x11289000-0x11289fff]
Jan 13 20:16:48.899059 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000600000-0x8000603fff 64bit pref]
Jan 13 20:16:48.899133 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400
Jan 13 20:16:48.900325 kernel: pci 0000:00:02.0: reg 0x10: [mem 0x11288000-0x11288fff]
Jan 13 20:16:48.900445 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400
Jan 13 20:16:48.900537 kernel: pci 0000:00:02.1: reg 0x10: [mem 0x11287000-0x11287fff]
Jan 13 20:16:48.900610 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400
Jan 13 20:16:48.900682 kernel: pci 0000:00:02.2: reg 0x10: [mem 0x11286000-0x11286fff]
Jan 13 20:16:48.900753 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400
Jan 13 20:16:48.900817 kernel: pci 0000:00:02.3: reg 0x10: [mem 0x11285000-0x11285fff]
Jan 13 20:16:48.901448 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400
Jan 13 20:16:48.901540 kernel: pci 0000:00:02.4: reg 0x10: [mem 0x11284000-0x11284fff]
Jan 13 20:16:48.901617 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400
Jan 13 20:16:48.901690 kernel: pci 0000:00:02.5: reg 0x10: [mem 0x11283000-0x11283fff]
Jan 13 20:16:48.901764 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400
Jan 13 20:16:48.901830 kernel: pci 0000:00:02.6: reg 0x10: [mem 0x11282000-0x11282fff]
Jan 13 20:16:48.901903 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400
Jan 13 20:16:48.901969 kernel: pci 0000:00:02.7: reg 0x10: [mem 0x11281000-0x11281fff]
Jan 13 20:16:48.902042 kernel: pci 0000:00:03.0: [1b36:000c] type 01 class 0x060400
Jan 13 20:16:48.902107 kernel: pci 0000:00:03.0: reg 0x10: [mem 0x11280000-0x11280fff]
Jan 13 20:16:48.902218 kernel: pci 0000:00:04.0: [1b36:0002] type 00 class 0x070002
Jan 13 20:16:48.902293 kernel: pci 0000:00:04.0: reg 0x10: [io 0x8200-0x8207]
Jan 13 20:16:48.902369 kernel: pci 0000:01:00.0: [1af4:1041] type 00 class 0x020000
Jan 13 20:16:48.902438 kernel: pci 0000:01:00.0: reg 0x14: [mem 0x11000000-0x11000fff]
Jan 13 20:16:48.902507 kernel: pci 0000:01:00.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Jan 13 20:16:48.902575 kernel: pci 0000:01:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref]
Jan 13 20:16:48.902654 kernel: pci 0000:02:00.0: [1b36:000d] type 00 class 0x0c0330
Jan 13 20:16:48.902724 kernel: pci 0000:02:00.0: reg 0x10: [mem 0x10e00000-0x10e03fff 64bit]
Jan 13 20:16:48.904415 kernel: pci 0000:03:00.0: [1af4:1043] type 00 class 0x078000
Jan 13 20:16:48.904514 kernel: pci 0000:03:00.0: reg 0x14: [mem 0x10c00000-0x10c00fff]
Jan 13 20:16:48.904582 kernel: pci 0000:03:00.0: reg 0x20: [mem 0x8000100000-0x8000103fff 64bit pref]
Jan 13 20:16:48.904656 kernel: pci 0000:04:00.0: [1af4:1045] type 00 class 0x00ff00
Jan 13 20:16:48.904722 kernel: pci 0000:04:00.0: reg 0x20: [mem 0x8000200000-0x8000203fff 64bit pref]
Jan 13 20:16:48.904822 kernel: pci 0000:05:00.0: [1af4:1044] type 00 class 0x00ff00
Jan 13 20:16:48.904890 kernel: pci 0000:05:00.0: reg 0x20: [mem 0x8000300000-0x8000303fff 64bit pref]
Jan 13 20:16:48.904966 kernel: pci 0000:06:00.0: [1af4:1048] type 00 class 0x010000
Jan 13 20:16:48.905032 kernel: pci 0000:06:00.0: reg 0x14: [mem 0x10600000-0x10600fff]
Jan 13 20:16:48.905097 kernel: pci 0000:06:00.0: reg 0x20: [mem 0x8000400000-0x8000403fff 64bit pref]
Jan 13 20:16:48.905185 kernel: pci 0000:07:00.0: [1af4:1041] type 00 class 0x020000
Jan 13 20:16:48.905990 kernel: pci 0000:07:00.0: reg 0x14: [mem 0x10400000-0x10400fff]
Jan 13 20:16:48.906071 kernel: pci 0000:07:00.0: reg 0x20: [mem 0x8000500000-0x8000503fff 64bit pref]
Jan 13 20:16:48.906135 kernel: pci 0000:07:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref]
Jan 13 20:16:48.907793 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000
Jan 13 20:16:48.907875 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 01] add_size 100000 add_align 100000
Jan 13 20:16:48.907941 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff] to [bus 01] add_size 100000 add_align 100000
Jan 13 20:16:48.908016 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000
Jan 13 20:16:48.908080 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000
Jan 13 20:16:48.908159 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x001fffff] to [bus 02] add_size 100000 add_align 100000
Jan 13 20:16:48.908265 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000
Jan 13 20:16:48.908331 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 03] add_size 100000 add_align 100000
Jan 13 20:16:48.908393 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff] to [bus 03] add_size 100000 add_align 100000
Jan 13 20:16:48.908460 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000
Jan 13 20:16:48.908529 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 04] add_size 100000 add_align 100000
Jan 13 20:16:48.908602 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000
Jan 13 20:16:48.908680 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000
Jan 13 20:16:48.908754 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 05] add_size 100000 add_align 100000
Jan 13 20:16:48.908828 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x000fffff] to [bus 05] add_size 200000 add_align 100000
Jan 13 20:16:48.908903 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000
Jan 13 20:16:48.908967 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 06] add_size 100000 add_align 100000
Jan 13 20:16:48.909029 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff] to [bus 06] add_size 100000 add_align 100000
Jan 13 20:16:48.909097 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
Jan 13 20:16:48.909263 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 07] add_size 100000 add_align 100000
Jan 13 20:16:48.909341 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff] to [bus 07] add_size 100000 add_align 100000
Jan 13 20:16:48.909409 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
Jan 13 20:16:48.909471 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 08] add_size 200000 add_align 100000
Jan 13 20:16:48.909532 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff] to [bus 08] add_size 200000 add_align 100000
Jan 13 20:16:48.909598 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
Jan 13 20:16:48.909661 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 09] add_size 200000 add_align 100000
Jan 13 20:16:48.909730 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 09] add_size 200000 add_align 100000
Jan 13 20:16:48.909794 kernel: pci 0000:00:02.0: BAR 14: assigned [mem 0x10000000-0x101fffff]
Jan 13 20:16:48.909857 kernel: pci 0000:00:02.0: BAR 15: assigned [mem 0x8000000000-0x80001fffff 64bit pref]
Jan 13 20:16:48.909922 kernel: pci 0000:00:02.1: BAR 14: assigned [mem 0x10200000-0x103fffff]
Jan 13 20:16:48.909985 kernel: pci 0000:00:02.1: BAR 15: assigned [mem 0x8000200000-0x80003fffff 64bit pref]
Jan 13 20:16:48.910049 kernel: pci 0000:00:02.2: BAR 14: assigned [mem 0x10400000-0x105fffff]
Jan 13 20:16:48.910113 kernel: pci 0000:00:02.2: BAR 15: assigned [mem 0x8000400000-0x80005fffff 64bit pref]
Jan 13 20:16:48.910192 kernel: pci 0000:00:02.3: BAR 14: assigned [mem 0x10600000-0x107fffff]
Jan 13 20:16:48.910272 kernel: pci 0000:00:02.3: BAR 15: assigned [mem 0x8000600000-0x80007fffff 64bit pref]
Jan 13 20:16:48.910339 kernel: pci 0000:00:02.4: BAR 14: assigned [mem 0x10800000-0x109fffff]
Jan 13 20:16:48.910402 kernel: pci 0000:00:02.4: BAR 15: assigned [mem 0x8000800000-0x80009fffff 64bit pref]
Jan 13 20:16:48.910465 kernel: pci 0000:00:02.5: BAR 14: assigned [mem 0x10a00000-0x10bfffff]
Jan 13 20:16:48.910529 kernel: pci 0000:00:02.5: BAR 15: assigned [mem 0x8000a00000-0x8000bfffff 64bit pref]
Jan 13 20:16:48.910595 kernel: pci 0000:00:02.6: BAR 14: assigned [mem 0x10c00000-0x10dfffff]
Jan 13 20:16:48.910659 kernel: pci 0000:00:02.6: BAR 15: assigned [mem 0x8000c00000-0x8000dfffff 64bit pref]
Jan 13 20:16:48.910723 kernel: pci 0000:00:02.7: BAR 14: assigned [mem 0x10e00000-0x10ffffff]
Jan 13 20:16:48.910786 kernel: pci 0000:00:02.7: BAR 15: assigned [mem 0x8000e00000-0x8000ffffff 64bit pref]
Jan 13 20:16:48.910850 kernel: pci 0000:00:03.0: BAR 14: assigned [mem 0x11000000-0x111fffff]
Jan 13 20:16:48.910925 kernel: pci 0000:00:03.0: BAR 15: assigned [mem 0x8001000000-0x80011fffff 64bit pref]
Jan 13 20:16:48.910996 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8001200000-0x8001203fff 64bit pref]
Jan 13 20:16:48.911063 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x11200000-0x11200fff]
Jan 13 20:16:48.911129 kernel: pci 0000:00:02.0: BAR 0: assigned [mem 0x11201000-0x11201fff]
Jan 13 20:16:48.911571 kernel: pci 0000:00:02.0: BAR 13: assigned [io 0x1000-0x1fff]
Jan 13 20:16:48.911675 kernel: pci 0000:00:02.1: BAR 0: assigned [mem 0x11202000-0x11202fff]
Jan 13 20:16:48.911741 kernel: pci 0000:00:02.1: BAR 13: assigned [io 0x2000-0x2fff]
Jan 13 20:16:48.911806 kernel: pci 0000:00:02.2: BAR 0: assigned [mem 0x11203000-0x11203fff]
Jan 13 20:16:48.911869 kernel: pci 0000:00:02.2: BAR 13: assigned [io 0x3000-0x3fff]
Jan 13 20:16:48.911934 kernel: pci 0000:00:02.3: BAR 0: assigned [mem 0x11204000-0x11204fff]
Jan 13 20:16:48.912007 kernel: pci 0000:00:02.3: BAR 13: assigned [io 0x4000-0x4fff]
Jan 13 20:16:48.912072 kernel: pci 0000:00:02.4: BAR 0: assigned [mem 0x11205000-0x11205fff]
Jan 13 20:16:48.912136 kernel: pci 0000:00:02.4: BAR 13: assigned [io 0x5000-0x5fff]
Jan 13 20:16:48.912374 kernel: pci 0000:00:02.5: BAR 0: assigned [mem 0x11206000-0x11206fff]
Jan 13 20:16:48.912444 kernel: pci 0000:00:02.5: BAR 13: assigned [io 0x6000-0x6fff]
Jan 13 20:16:48.912507 kernel: pci 0000:00:02.6: BAR 0: assigned [mem 0x11207000-0x11207fff]
Jan 13 20:16:48.912571 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x7000-0x7fff]
Jan 13 20:16:48.912634 kernel: pci 0000:00:02.7: BAR 0: assigned [mem 0x11208000-0x11208fff]
Jan 13 20:16:48.912702 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x8000-0x8fff]
Jan 13 20:16:48.912766 kernel: pci 0000:00:03.0: BAR 0: assigned [mem 0x11209000-0x11209fff]
Jan 13 20:16:48.912828 kernel: pci 0000:00:03.0: BAR 13: assigned [io 0x9000-0x9fff]
Jan 13 20:16:48.912894 kernel: pci 0000:00:04.0: BAR 0: assigned [io 0xa000-0xa007]
Jan 13 20:16:48.912965 kernel: pci 0000:01:00.0: BAR 6: assigned [mem 0x10000000-0x1007ffff pref]
Jan 13 20:16:48.913033 kernel: pci 0000:01:00.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Jan 13 20:16:48.913098 kernel: pci 0000:01:00.0: BAR 1: assigned [mem 0x10080000-0x10080fff]
Jan 13 20:16:48.913180 kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
Jan 13 20:16:48.913360 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff]
Jan 13 20:16:48.913427 kernel: pci 0000:00:02.0: bridge window [mem 0x10000000-0x101fffff]
Jan 13 20:16:48.913488 kernel: pci 0000:00:02.0: bridge window [mem 0x8000000000-0x80001fffff 64bit pref]
Jan 13 20:16:48.913558 kernel: pci 0000:02:00.0: BAR 0: assigned [mem 0x10200000-0x10203fff 64bit]
Jan 13 20:16:48.913622 kernel: pci 0000:00:02.1: PCI bridge to [bus 02]
Jan 13 20:16:48.913690 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff]
Jan 13 20:16:48.913751 kernel: pci 0000:00:02.1: bridge window [mem 0x10200000-0x103fffff]
Jan 13 20:16:48.913812 kernel: pci 0000:00:02.1: bridge window [mem 0x8000200000-0x80003fffff 64bit pref]
Jan 13 20:16:48.913881 kernel: pci 0000:03:00.0: BAR 4: assigned [mem 0x8000400000-0x8000403fff 64bit pref]
Jan 13 20:16:48.913946 kernel: pci 0000:03:00.0: BAR 1: assigned [mem 0x10400000-0x10400fff]
Jan 13 20:16:48.914008 kernel: pci 0000:00:02.2: PCI bridge to [bus 03]
Jan 13 20:16:48.914069 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff]
Jan 13 20:16:48.914133 kernel: pci 0000:00:02.2: bridge window [mem 0x10400000-0x105fffff]
Jan 13 20:16:48.914232 kernel: pci 0000:00:02.2: bridge window [mem 0x8000400000-0x80005fffff 64bit pref]
Jan 13 20:16:48.914322 kernel: pci 0000:04:00.0: BAR 4: assigned [mem 0x8000600000-0x8000603fff 64bit pref]
Jan 13 20:16:48.914386 kernel: pci 0000:00:02.3: PCI bridge to [bus 04]
Jan 13 20:16:48.914449 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff]
Jan 13 20:16:48.914511 kernel: pci 0000:00:02.3: bridge window [mem 0x10600000-0x107fffff]
Jan 13 20:16:48.914572 kernel: pci 0000:00:02.3: bridge window [mem 0x8000600000-0x80007fffff 64bit pref]
Jan 13 20:16:48.914644 kernel: pci 0000:05:00.0: BAR 4: assigned [mem 0x8000800000-0x8000803fff 64bit pref]
Jan 13 20:16:48.914713 kernel: pci 0000:00:02.4: PCI bridge to [bus 05]
Jan 13 20:16:48.914776 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff]
Jan 13 20:16:48.914839 kernel: pci 0000:00:02.4: bridge window [mem 0x10800000-0x109fffff]
Jan 13 20:16:48.916365 kernel: pci 0000:00:02.4: bridge window [mem 0x8000800000-0x80009fffff 64bit pref]
Jan 13 20:16:48.916448 kernel: pci 0000:06:00.0: BAR 4: assigned [mem 0x8000a00000-0x8000a03fff 64bit pref]
Jan 13 20:16:48.916514 kernel: pci 0000:06:00.0: BAR 1: assigned [mem 0x10a00000-0x10a00fff]
Jan 13 20:16:48.916580 kernel: pci 0000:00:02.5: PCI bridge to [bus 06]
Jan 13 20:16:48.916642 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff]
Jan 13 20:16:48.916710 kernel: pci 0000:00:02.5: bridge window [mem 0x10a00000-0x10bfffff]
Jan 13 20:16:48.916773 kernel: pci 0000:00:02.5: bridge window [mem 0x8000a00000-0x8000bfffff 64bit pref]
Jan 13 20:16:48.916842 kernel: pci 0000:07:00.0: BAR 6: assigned [mem 0x10c00000-0x10c7ffff pref]
Jan 13 20:16:48.916906 kernel: pci 0000:07:00.0: BAR 4: assigned [mem 0x8000c00000-0x8000c03fff 64bit pref]
Jan 13 20:16:48.916971 kernel: pci 0000:07:00.0: BAR 1: assigned [mem 0x10c80000-0x10c80fff]
Jan 13 20:16:48.917035 kernel: pci 0000:00:02.6: PCI bridge to [bus 07]
Jan 13 20:16:48.917098 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff]
Jan 13 20:16:48.918249 kernel: pci 0000:00:02.6: bridge window [mem 0x10c00000-0x10dfffff]
Jan 13 20:16:48.918374 kernel: pci 0000:00:02.6: bridge window [mem 0x8000c00000-0x8000dfffff 64bit pref]
Jan 13 20:16:48.918448 kernel: pci 0000:00:02.7: PCI bridge to [bus 08]
Jan 13 20:16:48.918515 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff]
Jan 13 20:16:48.918580 kernel: pci 0000:00:02.7: bridge window [mem 0x10e00000-0x10ffffff]
Jan 13 20:16:48.918647 kernel: pci 0000:00:02.7: bridge window [mem 0x8000e00000-0x8000ffffff 64bit pref]
Jan 13 20:16:48.918724 kernel: pci 0000:00:03.0: PCI bridge to [bus 09]
Jan 13 20:16:48.918795 kernel: pci 0000:00:03.0: bridge window [io 0x9000-0x9fff]
Jan 13 20:16:48.918863 kernel: pci 0000:00:03.0: bridge window [mem 0x11000000-0x111fffff]
Jan 13 20:16:48.918936 kernel: pci 0000:00:03.0: bridge window [mem 0x8001000000-0x80011fffff 64bit pref]
Jan 13 20:16:48.919007 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Jan 13 20:16:48.919066 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Jan 13 20:16:48.919124 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Jan 13 20:16:48.919290 kernel: pci_bus 0000:01: resource 0 [io 0x1000-0x1fff]
Jan 13 20:16:48.919366 kernel: pci_bus 0000:01: resource 1 [mem 0x10000000-0x101fffff]
Jan 13 20:16:48.919427 kernel: pci_bus 0000:01: resource 2 [mem 0x8000000000-0x80001fffff 64bit pref]
Jan 13 20:16:48.919500 kernel: pci_bus 0000:02: resource 0 [io 0x2000-0x2fff]
Jan 13 20:16:48.919561 kernel: pci_bus 0000:02: resource 1 [mem 0x10200000-0x103fffff]
Jan 13 20:16:48.919619 kernel: pci_bus 0000:02: resource 2 [mem 0x8000200000-0x80003fffff 64bit pref]
Jan 13 20:16:48.919685 kernel: pci_bus 0000:03: resource 0 [io 0x3000-0x3fff]
Jan 13 20:16:48.919746 kernel: pci_bus 0000:03: resource 1 [mem 0x10400000-0x105fffff]
Jan 13 20:16:48.919804 kernel: pci_bus 0000:03: resource 2 [mem 0x8000400000-0x80005fffff 64bit pref]
Jan 13 20:16:48.919873 kernel: pci_bus 0000:04: resource 0 [io 0x4000-0x4fff]
Jan 13 20:16:48.919933 kernel: pci_bus 0000:04: resource 1 [mem 0x10600000-0x107fffff]
Jan 13 20:16:48.919992 kernel: pci_bus 0000:04: resource 2 [mem 0x8000600000-0x80007fffff 64bit pref]
Jan 13 20:16:48.920070 kernel: pci_bus 0000:05: resource 0 [io 0x5000-0x5fff]
Jan 13 20:16:48.920132 kernel: pci_bus 0000:05: resource 1 [mem 0x10800000-0x109fffff]
Jan 13 20:16:48.922300 kernel: pci_bus 0000:05: resource 2 [mem 0x8000800000-0x80009fffff 64bit pref]
Jan 13 20:16:48.922411 kernel: pci_bus 0000:06: resource 0 [io 0x6000-0x6fff]
Jan 13 20:16:48.922481 kernel: pci_bus 0000:06: resource 1 [mem 0x10a00000-0x10bfffff]
Jan 13 20:16:48.922539 kernel: pci_bus 0000:06: resource 2 [mem 0x8000a00000-0x8000bfffff 64bit pref]
Jan 13 20:16:48.922607 kernel: pci_bus 0000:07: resource 0 [io 0x7000-0x7fff]
Jan 13 20:16:48.922665 kernel: pci_bus 0000:07: resource 1 [mem 0x10c00000-0x10dfffff]
Jan 13 20:16:48.923407 kernel: pci_bus 0000:07: resource 2 [mem 0x8000c00000-0x8000dfffff 64bit pref]
Jan 13 20:16:48.923484 kernel: pci_bus 0000:08: resource 0 [io 0x8000-0x8fff]
Jan 13 20:16:48.923545 kernel: pci_bus 0000:08: resource 1 [mem 0x10e00000-0x10ffffff]
Jan 13 20:16:48.923602 kernel: pci_bus 0000:08: resource 2 [mem 0x8000e00000-0x8000ffffff 64bit pref]
Jan 13 20:16:48.923674 kernel: pci_bus 0000:09: resource 0 [io 0x9000-0x9fff]
Jan 13 20:16:48.923734 kernel: pci_bus 0000:09: resource 1 [mem 0x11000000-0x111fffff]
Jan 13 20:16:48.923795 kernel: pci_bus 0000:09: resource 2 [mem 0x8001000000-0x80011fffff 64bit pref]
Jan 13 20:16:48.923811 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Jan 13 20:16:48.923819 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Jan 13 20:16:48.923827 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Jan 13 20:16:48.923835 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Jan 13 20:16:48.923842 kernel: iommu: Default domain type: Translated
Jan 13 20:16:48.923850 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Jan 13 20:16:48.923858 kernel: efivars: Registered efivars operations
Jan 13 20:16:48.923865 kernel: vgaarb: loaded
Jan 13 20:16:48.923873 kernel: clocksource: Switched to clocksource arch_sys_counter
Jan 13 20:16:48.923882 kernel: VFS: Disk quotas dquot_6.6.0
Jan 13 20:16:48.923890 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 13 20:16:48.923897 kernel: pnp: PnP ACPI init
Jan 13 20:16:48.923968 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Jan 13 20:16:48.923979 kernel: pnp: PnP ACPI: found 1 devices
Jan 13 20:16:48.923987 kernel: NET: Registered PF_INET protocol family
Jan 13 20:16:48.923995 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 13 20:16:48.924003 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jan 13 20:16:48.924013 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 13 20:16:48.924021 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 13 20:16:48.924029 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jan 13 20:16:48.924037 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jan 13 20:16:48.924045 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 13 20:16:48.924052 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 13 20:16:48.924060 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 13 20:16:48.924137 kernel: pci 0000:02:00.0: enabling device (0000 -> 0002)
Jan 13 20:16:48.924163 kernel: PCI: CLS 0 bytes, default 64
Jan 13 20:16:48.924174 kernel: kvm [1]: HYP mode not available
Jan 13 20:16:48.924182 kernel: Initialise system trusted keyrings
Jan 13 20:16:48.924189 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jan 13 20:16:48.924197 kernel: Key type asymmetric registered
Jan 13 20:16:48.925623 kernel: Asymmetric key parser 'x509' registered
Jan 13 20:16:48.925640 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jan 13 20:16:48.925649 kernel: io scheduler mq-deadline registered
Jan 13 20:16:48.925657 kernel: io scheduler kyber registered
Jan 13 20:16:48.925665 kernel: io scheduler bfq registered
Jan 13 20:16:48.925681 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37
Jan 13 20:16:48.925817 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 50
Jan 13 20:16:48.925888 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 50
Jan 13 20:16:48.925953 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Jan 13 20:16:48.926022 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 51
Jan 13 20:16:48.926086 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 51
Jan 13 20:16:48.926172 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Jan 13 20:16:48.926262 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 52
Jan 13 20:16:48.926329 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 52
Jan 13 20:16:48.926395 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Jan 13 20:16:48.926466 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 53
Jan 13 20:16:48.926529 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 53
Jan 13 20:16:48.926596 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Jan 13 20:16:48.926664 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 54
Jan 13 20:16:48.926728 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 54
Jan 13 20:16:48.926790 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Jan 13 20:16:48.926859 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 55
Jan 13 20:16:48.926924 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 55
Jan 13 20:16:48.926990 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Jan 13 20:16:48.927057 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 56
Jan 13 20:16:48.927122 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 56
Jan 13 20:16:48.927201 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Jan 13 20:16:48.927287 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 57
Jan 13 20:16:48.927352 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 57
Jan 13 20:16:48.927420 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Jan 13 20:16:48.927431 kernel: ACPI: \_SB_.PCI0.GSI3: Enabled at IRQ 38
Jan 13 20:16:48.927497 kernel: pcieport 0000:00:03.0: PME: Signaling with IRQ 58
Jan 13 20:16:48.927562 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 58
Jan 13 20:16:48.927625 kernel: pcieport 0000:00:03.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Jan 13 20:16:48.927635 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Jan 13 20:16:48.927643 kernel: ACPI: button: Power Button [PWRB]
Jan 13 20:16:48.927651 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Jan 13 20:16:48.927725 kernel: virtio-pci 0000:03:00.0: enabling device (0000 -> 0002)
Jan 13 20:16:48.927799 kernel: virtio-pci 0000:04:00.0: enabling device (0000 -> 0002)
Jan 13 20:16:48.927870 kernel: virtio-pci 0000:07:00.0: enabling device (0000 -> 0002)
Jan 13 20:16:48.927881 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 13 20:16:48.927889 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35
Jan 13 20:16:48.927955 kernel: serial 0000:00:04.0: enabling device (0000 -> 0001)
Jan 13 20:16:48.927966 kernel: 0000:00:04.0: ttyS0 at I/O 0xa000 (irq = 45, base_baud = 115200) is a 16550A
Jan 13 20:16:48.927974 kernel: thunder_xcv, ver 1.0
Jan 13 20:16:48.927985 kernel: thunder_bgx, ver 1.0
Jan 13 20:16:48.927992 kernel: nicpf, ver 1.0
Jan 13 20:16:48.928000 kernel: nicvf, ver 1.0
Jan 13 20:16:48.928088 kernel: rtc-efi rtc-efi.0: registered as rtc0
Jan 13 20:16:48.928196 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-01-13T20:16:48 UTC (1736799408)
Jan 13 20:16:48.928234 kernel: hid: raw HID events driver (C) Jiri Kosina
Jan 13 20:16:48.928242 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
Jan 13 20:16:48.928260 kernel: watchdog: Delayed init of the lockup detector failed: -19
Jan 13 20:16:48.928271 kernel: watchdog: Hard watchdog permanently disabled
Jan 13 20:16:48.928279 kernel: NET: Registered PF_INET6 protocol family
Jan 13 20:16:48.928287 kernel: Segment Routing with IPv6
Jan 13 20:16:48.928295 kernel: In-situ OAM (IOAM) with IPv6
Jan 13 20:16:48.928303 kernel: NET: Registered PF_PACKET protocol family
Jan 13 20:16:48.928311 kernel: Key type dns_resolver registered
Jan 13 20:16:48.928319 kernel: registered taskstats version 1
Jan 13 20:16:48.928327 kernel: Loading compiled-in X.509 certificates
Jan 13 20:16:48.928335 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.71-flatcar: a9edf9d44b1b82dedf7830d1843430df7c4d16cb'
Jan 13 20:16:48.928344 kernel: Key type .fscrypt registered
Jan 13 20:16:48.928352 kernel: Key type fscrypt-provisioning registered
Jan 13 20:16:48.928360 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 13 20:16:48.928367 kernel: ima: Allocated hash algorithm: sha1
Jan 13 20:16:48.928375 kernel: ima: No architecture policies found
Jan 13 20:16:48.928383 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Jan 13 20:16:48.928391 kernel: clk: Disabling unused clocks
Jan 13 20:16:48.928398 kernel: Freeing unused kernel memory: 39680K
Jan 13 20:16:48.928406 kernel: Run /init as init process
Jan 13 20:16:48.928415 kernel: with arguments:
Jan 13 20:16:48.928423 kernel: /init
Jan 13 20:16:48.928430 kernel: with environment:
Jan 13 20:16:48.928437 kernel: HOME=/
Jan 13 20:16:48.928445 kernel: TERM=linux
Jan 13 20:16:48.928452 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jan 13 20:16:48.928462 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 13 20:16:48.928472 systemd[1]: Detected virtualization kvm.
Jan 13 20:16:48.928482 systemd[1]: Detected architecture arm64.
Jan 13 20:16:48.928490 systemd[1]: Running in initrd.
Jan 13 20:16:48.928498 systemd[1]: No hostname configured, using default hostname.
Jan 13 20:16:48.928505 systemd[1]: Hostname set to <localhost>.
Jan 13 20:16:48.928514 systemd[1]: Initializing machine ID from VM UUID.
Jan 13 20:16:48.928522 systemd[1]: Queued start job for default target initrd.target.
Jan 13 20:16:48.928530 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 13 20:16:48.928538 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 13 20:16:48.928548 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 13 20:16:48.928557 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 13 20:16:48.928565 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 13 20:16:48.928574 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 13 20:16:48.928583 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 13 20:16:48.928592 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 13 20:16:48.928601 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 13 20:16:48.928610 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 13 20:16:48.928618 systemd[1]: Reached target paths.target - Path Units.
Jan 13 20:16:48.928627 systemd[1]: Reached target slices.target - Slice Units.
Jan 13 20:16:48.928635 systemd[1]: Reached target swap.target - Swaps.
Jan 13 20:16:48.928644 systemd[1]: Reached target timers.target - Timer Units.
Jan 13 20:16:48.928652 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 13 20:16:48.928660 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 13 20:16:48.928668 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 13 20:16:48.928678 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 13 20:16:48.928686 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 13 20:16:48.928695 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 13 20:16:48.928703 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 13 20:16:48.928711 systemd[1]: Reached target sockets.target - Socket Units.
Jan 13 20:16:48.928719 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 13 20:16:48.928727 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 13 20:16:48.928735 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 13 20:16:48.928745 systemd[1]: Starting systemd-fsck-usr.service...
Jan 13 20:16:48.928753 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 13 20:16:48.928761 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 13 20:16:48.928769 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 13 20:16:48.928778 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 13 20:16:48.928786 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 13 20:16:48.928794 systemd[1]: Finished systemd-fsck-usr.service.
Jan 13 20:16:48.928831 systemd-journald[236]: Collecting audit messages is disabled.
Jan 13 20:16:48.928853 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 13 20:16:48.928863 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 13 20:16:48.928871 kernel: Bridge firewalling registered
Jan 13 20:16:48.928879 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 13 20:16:48.928888 systemd-journald[236]: Journal started
Jan 13 20:16:48.928907 systemd-journald[236]: Runtime Journal (/run/log/journal/9a13a05e0be7497dae0ed06a231b861d) is 8.0M, max 76.5M, 68.5M free.
Jan 13 20:16:48.899773 systemd-modules-load[237]: Inserted module 'overlay'
Jan 13 20:16:48.934012 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 13 20:16:48.921198 systemd-modules-load[237]: Inserted module 'br_netfilter'
Jan 13 20:16:48.939263 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 13 20:16:48.938648 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 13 20:16:48.939513 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 13 20:16:48.948356 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 13 20:16:48.950815 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 13 20:16:48.953853 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 13 20:16:48.955596 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 13 20:16:48.976198 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 13 20:16:48.978815 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 13 20:16:48.981262 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 13 20:16:48.990709 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 13 20:16:48.995421 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 13 20:16:49.008277 dracut-cmdline[272]: dracut-dracut-053
Jan 13 20:16:49.012223 dracut-cmdline[272]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=6ba5f90349644346e4f5fa9305ab5a05339928ee9f4f137665e797727c1fc436
Jan 13 20:16:49.034686 systemd-resolved[274]: Positive Trust Anchors:
Jan 13 20:16:49.034761 systemd-resolved[274]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 13 20:16:49.034792 systemd-resolved[274]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 13 20:16:49.044894 systemd-resolved[274]: Defaulting to hostname 'linux'.
Jan 13 20:16:49.046500 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 13 20:16:49.047183 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 13 20:16:49.103234 kernel: SCSI subsystem initialized
Jan 13 20:16:49.108255 kernel: Loading iSCSI transport class v2.0-870.
Jan 13 20:16:49.116270 kernel: iscsi: registered transport (tcp)
Jan 13 20:16:49.129512 kernel: iscsi: registered transport (qla4xxx)
Jan 13 20:16:49.129600 kernel: QLogic iSCSI HBA Driver
Jan 13 20:16:49.177200 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 13 20:16:49.187459 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 13 20:16:49.209273 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 13 20:16:49.209378 kernel: device-mapper: uevent: version 1.0.3
Jan 13 20:16:49.209401 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jan 13 20:16:49.263276 kernel: raid6: neonx8 gen() 15676 MB/s
Jan 13 20:16:49.280274 kernel: raid6: neonx4 gen() 15125 MB/s
Jan 13 20:16:49.297265 kernel: raid6: neonx2 gen() 12930 MB/s
Jan 13 20:16:49.314352 kernel: raid6: neonx1 gen() 10195 MB/s
Jan 13 20:16:49.331275 kernel: raid6: int64x8 gen() 6927 MB/s
Jan 13 20:16:49.348284 kernel: raid6: int64x4 gen() 7198 MB/s
Jan 13 20:16:49.365270 kernel: raid6: int64x2 gen() 5989 MB/s
Jan 13 20:16:49.382278 kernel: raid6: int64x1 gen() 4516 MB/s
Jan 13 20:16:49.382357 kernel: raid6: using algorithm neonx8 gen() 15676 MB/s
Jan 13 20:16:49.399298 kernel: raid6: .... xor() 10127 MB/s, rmw enabled
Jan 13 20:16:49.399393 kernel: raid6: using neon recovery algorithm
Jan 13 20:16:49.405263 kernel: xor: measuring software checksum speed
Jan 13 20:16:49.405345 kernel: 8regs : 19750 MB/sec
Jan 13 20:16:49.405370 kernel: 32regs : 19631 MB/sec
Jan 13 20:16:49.405393 kernel: arm64_neon : 24609 MB/sec
Jan 13 20:16:49.406242 kernel: xor: using function: arm64_neon (24609 MB/sec)
Jan 13 20:16:49.457245 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 13 20:16:49.475255 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 13 20:16:49.481434 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 13 20:16:49.510501 systemd-udevd[456]: Using default interface naming scheme 'v255'.
Jan 13 20:16:49.513984 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 13 20:16:49.524388 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 13 20:16:49.541406 dracut-pre-trigger[465]: rd.md=0: removing MD RAID activation
Jan 13 20:16:49.577498 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 13 20:16:49.582424 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 13 20:16:49.632285 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 13 20:16:49.643436 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 13 20:16:49.662692 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 13 20:16:49.664824 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 13 20:16:49.666938 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 13 20:16:49.669027 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 13 20:16:49.677435 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 13 20:16:49.692810 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 13 20:16:49.749626 kernel: scsi host0: Virtio SCSI HBA
Jan 13 20:16:49.758645 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 13 20:16:49.791508 kernel: scsi 0:0:0:0: CD-ROM QEMU QEMU CD-ROM 2.5+ PQ: 0 ANSI: 5
Jan 13 20:16:49.791573 kernel: scsi 0:0:0:1: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5
Jan 13 20:16:49.783828 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 13 20:16:49.785457 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 13 20:16:49.786755 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 13 20:16:49.787073 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 13 20:16:49.788881 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 13 20:16:49.799598 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 13 20:16:49.806303 kernel: ACPI: bus type USB registered
Jan 13 20:16:49.806361 kernel: usbcore: registered new interface driver usbfs
Jan 13 20:16:49.807257 kernel: usbcore: registered new interface driver hub
Jan 13 20:16:49.808342 kernel: usbcore: registered new device driver usb
Jan 13 20:16:49.822645 kernel: sr 0:0:0:0: Power-on or device reset occurred
Jan 13 20:16:49.826398 kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 16x/50x cd/rw xa/form2 cdda tray
Jan 13 20:16:49.826583 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Jan 13 20:16:49.826595 kernel: sr 0:0:0:0: Attached scsi CD-ROM sr0
Jan 13 20:16:49.835409 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 13 20:16:49.843457 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 13 20:16:49.849451 kernel: sd 0:0:0:1: Power-on or device reset occurred
Jan 13 20:16:49.860275 kernel: sd 0:0:0:1: [sda] 80003072 512-byte logical blocks: (41.0 GB/38.1 GiB)
Jan 13 20:16:49.860418 kernel: sd 0:0:0:1: [sda] Write Protect is off
Jan 13 20:16:49.860501 kernel: sd 0:0:0:1: [sda] Mode Sense: 63 00 00 08
Jan 13 20:16:49.860616 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller
Jan 13 20:16:49.865240 kernel: sd 0:0:0:1: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Jan 13 20:16:49.865379 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 1
Jan 13 20:16:49.865467 kernel: xhci_hcd 0000:02:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010
Jan 13 20:16:49.865545 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller
Jan 13 20:16:49.865621 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 2
Jan 13 20:16:49.865697 kernel: xhci_hcd 0000:02:00.0: Host supports USB 3.0 SuperSpeed
Jan 13 20:16:49.865772 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jan 13 20:16:49.865782 kernel: GPT:17805311 != 80003071
Jan 13 20:16:49.865791 kernel: GPT:Alternate GPT header not at the end of the disk.
Jan 13 20:16:49.865802 kernel: GPT:17805311 != 80003071
Jan 13 20:16:49.865811 kernel: GPT: Use GNU Parted to correct GPT errors.
Jan 13 20:16:49.865820 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 13 20:16:49.865829 kernel: sd 0:0:0:1: [sda] Attached SCSI disk
Jan 13 20:16:49.865911 kernel: hub 1-0:1.0: USB hub found
Jan 13 20:16:49.866013 kernel: hub 1-0:1.0: 4 ports detected
Jan 13 20:16:49.866088 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM.
Jan 13 20:16:49.866193 kernel: hub 2-0:1.0: USB hub found
Jan 13 20:16:49.866294 kernel: hub 2-0:1.0: 4 ports detected
Jan 13 20:16:49.878262 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 13 20:16:49.908107 kernel: BTRFS: device fsid 8e09fced-e016-4c4f-bac5-4013d13dfd78 devid 1 transid 38 /dev/sda3 scanned by (udev-worker) (515) Jan 13 20:16:49.910257 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by (udev-worker) (510) Jan 13 20:16:49.921641 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT. Jan 13 20:16:49.926517 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM. Jan 13 20:16:49.933910 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Jan 13 20:16:49.938826 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A. Jan 13 20:16:49.939541 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A. Jan 13 20:16:49.950620 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 13 20:16:49.960426 disk-uuid[576]: Primary Header is updated. Jan 13 20:16:49.960426 disk-uuid[576]: Secondary Entries is updated. Jan 13 20:16:49.960426 disk-uuid[576]: Secondary Header is updated. Jan 13 20:16:49.965237 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 13 20:16:49.969244 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 13 20:16:50.104274 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd Jan 13 20:16:50.346288 kernel: usb 1-2: new high-speed USB device number 3 using xhci_hcd Jan 13 20:16:50.480393 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input1 Jan 13 20:16:50.480464 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:02:00.0-1/input0 Jan 13 20:16:50.481608 kernel: input: QEMU QEMU USB Keyboard as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-2/1-2:1.0/0003:0627:0001.0002/input/input2 Jan 13 20:16:50.535459 kernel: hid-generic 0003:0627:0001.0002: input,hidraw1: USB HID v1.11 Keyboard [QEMU QEMU USB Keyboard] on usb-0000:02:00.0-2/input0 Jan 13 20:16:50.535700 kernel: usbcore: registered new interface driver usbhid Jan 13 20:16:50.535712 kernel: usbhid: USB HID core driver Jan 13 20:16:50.976291 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 13 20:16:50.977011 disk-uuid[577]: The operation has completed successfully. Jan 13 20:16:51.022425 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 13 20:16:51.023153 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 13 20:16:51.049614 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 13 20:16:51.053503 sh[592]: Success Jan 13 20:16:51.066492 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Jan 13 20:16:51.131857 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 13 20:16:51.133601 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jan 13 20:16:51.141343 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... 
Jan 13 20:16:51.157390 kernel: BTRFS info (device dm-0): first mount of filesystem 8e09fced-e016-4c4f-bac5-4013d13dfd78 Jan 13 20:16:51.157470 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Jan 13 20:16:51.157494 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jan 13 20:16:51.158343 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 13 20:16:51.158408 kernel: BTRFS info (device dm-0): using free space tree Jan 13 20:16:51.167246 kernel: BTRFS info (device dm-0): enabling ssd optimizations Jan 13 20:16:51.169714 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 13 20:16:51.171399 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 13 20:16:51.176522 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 13 20:16:51.181006 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 13 20:16:51.190332 kernel: BTRFS info (device sda6): first mount of filesystem cd0b9c1b-856d-4823-9d4d-1660845d57c6 Jan 13 20:16:51.190392 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Jan 13 20:16:51.190403 kernel: BTRFS info (device sda6): using free space tree Jan 13 20:16:51.194239 kernel: BTRFS info (device sda6): enabling ssd optimizations Jan 13 20:16:51.194292 kernel: BTRFS info (device sda6): auto enabling async discard Jan 13 20:16:51.207414 systemd[1]: mnt-oem.mount: Deactivated successfully. Jan 13 20:16:51.209248 kernel: BTRFS info (device sda6): last unmount of filesystem cd0b9c1b-856d-4823-9d4d-1660845d57c6 Jan 13 20:16:51.216698 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 13 20:16:51.224486 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 13 20:16:51.318684 ignition[673]: Ignition 2.20.0 Jan 13 20:16:51.318699 ignition[673]: Stage: fetch-offline Jan 13 20:16:51.318735 ignition[673]: no configs at "/usr/lib/ignition/base.d" Jan 13 20:16:51.320729 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 13 20:16:51.318743 ignition[673]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jan 13 20:16:51.318895 ignition[673]: parsed url from cmdline: "" Jan 13 20:16:51.318899 ignition[673]: no config URL provided Jan 13 20:16:51.318904 ignition[673]: reading system config file "/usr/lib/ignition/user.ign" Jan 13 20:16:51.318911 ignition[673]: no config at "/usr/lib/ignition/user.ign" Jan 13 20:16:51.318915 ignition[673]: failed to fetch config: resource requires networking Jan 13 20:16:51.319100 ignition[673]: Ignition finished successfully Jan 13 20:16:51.329678 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 13 20:16:51.335413 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 13 20:16:51.368790 systemd-networkd[780]: lo: Link UP Jan 13 20:16:51.369438 systemd-networkd[780]: lo: Gained carrier Jan 13 20:16:51.371772 systemd-networkd[780]: Enumeration completed Jan 13 20:16:51.372398 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 13 20:16:51.372692 systemd-networkd[780]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Jan 13 20:16:51.372695 systemd-networkd[780]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 13 20:16:51.373474 systemd[1]: Reached target network.target - Network. Jan 13 20:16:51.374658 systemd-networkd[780]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 13 20:16:51.374662 systemd-networkd[780]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 13 20:16:51.375540 systemd-networkd[780]: eth0: Link UP Jan 13 20:16:51.375544 systemd-networkd[780]: eth0: Gained carrier Jan 13 20:16:51.375552 systemd-networkd[780]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 13 20:16:51.379391 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Jan 13 20:16:51.379848 systemd-networkd[780]: eth1: Link UP Jan 13 20:16:51.379851 systemd-networkd[780]: eth1: Gained carrier Jan 13 20:16:51.379860 systemd-networkd[780]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 13 20:16:51.395958 ignition[783]: Ignition 2.20.0 Jan 13 20:16:51.395981 ignition[783]: Stage: fetch Jan 13 20:16:51.396195 ignition[783]: no configs at "/usr/lib/ignition/base.d" Jan 13 20:16:51.396244 ignition[783]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jan 13 20:16:51.396355 ignition[783]: parsed url from cmdline: "" Jan 13 20:16:51.396358 ignition[783]: no config URL provided Jan 13 20:16:51.396363 ignition[783]: reading system config file "/usr/lib/ignition/user.ign" Jan 13 20:16:51.396371 ignition[783]: no config at "/usr/lib/ignition/user.ign" Jan 13 20:16:51.396465 ignition[783]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #1 Jan 13 20:16:51.397188 ignition[783]: GET error: Get "http://169.254.169.254/hetzner/v1/userdata": dial tcp 169.254.169.254:80: connect: network is unreachable Jan 13 20:16:51.416308 systemd-networkd[780]: eth1: DHCPv4 address 10.0.0.3/32, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 13 20:16:51.435312 systemd-networkd[780]: eth0: DHCPv4 address 138.199.153.206/32, gateway 172.31.1.1 acquired from 172.31.1.1 Jan 13 20:16:51.598273 ignition[783]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #2 Jan 13 20:16:51.605113 ignition[783]: GET result: OK Jan 13 20:16:51.605401 ignition[783]: parsing config with SHA512: f7a79460e1821b91811cbd09a97625262ec4e1467752712b2427fa92a06dc33f699e1535802309f60621101d366f08983faf387b3d0d2b935becede0cd8be20e Jan 13 20:16:51.612915 unknown[783]: fetched base config from "system" Jan 13 20:16:51.612930 unknown[783]: fetched base config from "system" Jan 13 20:16:51.613336 ignition[783]: fetch: fetch complete Jan 13 20:16:51.612936 unknown[783]: fetched user config from "hetzner" Jan 13 20:16:51.613342 ignition[783]: fetch: fetch passed Jan 13 20:16:51.616290 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jan 13 20:16:51.613388 ignition[783]: Ignition finished successfully Jan 13 20:16:51.621514 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 13 20:16:51.635916 ignition[790]: Ignition 2.20.0 Jan 13 20:16:51.635927 ignition[790]: Stage: kargs Jan 13 20:16:51.636102 ignition[790]: no configs at "/usr/lib/ignition/base.d" Jan 13 20:16:51.636112 ignition[790]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jan 13 20:16:51.638333 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). 
Jan 13 20:16:51.637137 ignition[790]: kargs: kargs passed Jan 13 20:16:51.637190 ignition[790]: Ignition finished successfully Jan 13 20:16:51.645437 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 13 20:16:51.656404 ignition[796]: Ignition 2.20.0 Jan 13 20:16:51.656414 ignition[796]: Stage: disks Jan 13 20:16:51.656606 ignition[796]: no configs at "/usr/lib/ignition/base.d" Jan 13 20:16:51.656621 ignition[796]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jan 13 20:16:51.659565 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 13 20:16:51.657611 ignition[796]: disks: disks passed Jan 13 20:16:51.660690 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 13 20:16:51.657668 ignition[796]: Ignition finished successfully Jan 13 20:16:51.661364 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 13 20:16:51.662481 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 13 20:16:51.663443 systemd[1]: Reached target sysinit.target - System Initialization. Jan 13 20:16:51.664743 systemd[1]: Reached target basic.target - Basic System. Jan 13 20:16:51.670531 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 13 20:16:51.689693 systemd-fsck[805]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks Jan 13 20:16:51.695784 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 13 20:16:51.703595 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 13 20:16:51.753245 kernel: EXT4-fs (sda9): mounted filesystem 8fd847fb-a6be-44f6-9adf-0a0a79b9fa94 r/w with ordered data mode. Quota mode: none. Jan 13 20:16:51.753814 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 13 20:16:51.755521 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 13 20:16:51.764399 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 13 20:16:51.767420 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 13 20:16:51.773467 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Jan 13 20:16:51.774308 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 13 20:16:51.774346 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 13 20:16:51.780619 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 13 20:16:51.785333 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (813) Jan 13 20:16:51.785612 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 13 20:16:51.790654 kernel: BTRFS info (device sda6): first mount of filesystem cd0b9c1b-856d-4823-9d4d-1660845d57c6 Jan 13 20:16:51.790681 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Jan 13 20:16:51.792544 kernel: BTRFS info (device sda6): using free space tree Jan 13 20:16:51.799318 kernel: BTRFS info (device sda6): enabling ssd optimizations Jan 13 20:16:51.799378 kernel: BTRFS info (device sda6): auto enabling async discard Jan 13 20:16:51.806106 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jan 13 20:16:51.850924 coreos-metadata[815]: Jan 13 20:16:51.850 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/hostname: Attempt #1 Jan 13 20:16:51.852911 coreos-metadata[815]: Jan 13 20:16:51.852 INFO Fetch successful Jan 13 20:16:51.856526 coreos-metadata[815]: Jan 13 20:16:51.855 INFO wrote hostname ci-4152-2-0-d-1c931fd560 to /sysroot/etc/hostname Jan 13 20:16:51.857512 initrd-setup-root[841]: cut: /sysroot/etc/passwd: No such file or directory Jan 13 20:16:51.857924 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 13 20:16:51.865007 initrd-setup-root[849]: cut: /sysroot/etc/group: No such file or directory Jan 13 20:16:51.871856 initrd-setup-root[856]: cut: /sysroot/etc/shadow: No such file or directory Jan 13 20:16:51.876309 initrd-setup-root[863]: cut: /sysroot/etc/gshadow: No such file or directory Jan 13 20:16:51.970786 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 13 20:16:51.976417 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 13 20:16:51.978430 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 13 20:16:51.990255 kernel: BTRFS info (device sda6): last unmount of filesystem cd0b9c1b-856d-4823-9d4d-1660845d57c6 Jan 13 20:16:52.011551 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 13 20:16:52.019798 ignition[931]: INFO : Ignition 2.20.0 Jan 13 20:16:52.019798 ignition[931]: INFO : Stage: mount Jan 13 20:16:52.024305 ignition[931]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 13 20:16:52.024305 ignition[931]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jan 13 20:16:52.024305 ignition[931]: INFO : mount: mount passed Jan 13 20:16:52.024305 ignition[931]: INFO : Ignition finished successfully Jan 13 20:16:52.022899 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 13 20:16:52.033472 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 13 20:16:52.158264 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 13 20:16:52.165561 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 13 20:16:52.178383 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sda6 scanned by mount (943) Jan 13 20:16:52.180361 kernel: BTRFS info (device sda6): first mount of filesystem cd0b9c1b-856d-4823-9d4d-1660845d57c6 Jan 13 20:16:52.180420 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Jan 13 20:16:52.180431 kernel: BTRFS info (device sda6): using free space tree Jan 13 20:16:52.185264 kernel: BTRFS info (device sda6): enabling ssd optimizations Jan 13 20:16:52.185354 kernel: BTRFS info (device sda6): auto enabling async discard Jan 13 20:16:52.189279 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jan 13 20:16:52.212443 ignition[960]: INFO : Ignition 2.20.0 Jan 13 20:16:52.214299 ignition[960]: INFO : Stage: files Jan 13 20:16:52.214299 ignition[960]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 13 20:16:52.214299 ignition[960]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jan 13 20:16:52.216592 ignition[960]: DEBUG : files: compiled without relabeling support, skipping Jan 13 20:16:52.217739 ignition[960]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 13 20:16:52.218496 ignition[960]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 13 20:16:52.221955 ignition[960]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 13 20:16:52.222961 ignition[960]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 13 20:16:52.224249 unknown[960]: wrote ssh authorized keys file for user: core Jan 13 20:16:52.225363 ignition[960]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 13 20:16:52.227583 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Jan 13 20:16:52.228684 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1 Jan 13 20:16:52.284723 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 13 20:16:52.571168 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Jan 13 20:16:52.572375 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Jan 13 20:16:52.572375 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1 Jan 13 20:16:52.972427 systemd-networkd[780]: eth0: Gained IPv6LL Jan 13 20:16:53.265658 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Jan 13 20:16:53.356478 systemd-networkd[780]: eth1: Gained IPv6LL Jan 13 20:16:53.602961 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Jan 13 20:16:53.602961 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Jan 13 20:16:53.605353 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Jan 13 20:16:53.605353 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 13 20:16:53.605353 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 13 20:16:53.605353 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 13 20:16:53.605353 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 13 20:16:53.605353 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 13 20:16:53.605353 ignition[960]: INFO : files: 
createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 13 20:16:53.605353 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 13 20:16:53.605353 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 13 20:16:53.605353 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Jan 13 20:16:53.605353 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Jan 13 20:16:53.605353 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Jan 13 20:16:53.605353 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-arm64.raw: attempt #1 Jan 13 20:16:54.191057 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Jan 13 20:16:55.187925 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Jan 13 20:16:55.187925 ignition[960]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Jan 13 20:16:55.193013 ignition[960]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 13 20:16:55.193013 ignition[960]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 13 20:16:55.193013 ignition[960]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Jan 13 20:16:55.193013 ignition[960]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" Jan 13 20:16:55.193013 ignition[960]: INFO : files: op(e): op(f): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Jan 13 20:16:55.193013 ignition[960]: INFO : files: op(e): op(f): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Jan 13 20:16:55.193013 ignition[960]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" Jan 13 20:16:55.193013 ignition[960]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service" Jan 13 20:16:55.193013 ignition[960]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service" Jan 13 20:16:55.193013 ignition[960]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 13 20:16:55.193013 ignition[960]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 13 20:16:55.193013 ignition[960]: INFO : files: files passed Jan 13 20:16:55.193013 ignition[960]: INFO : Ignition finished successfully Jan 13 20:16:55.194430 systemd[1]: Finished ignition-files.service - Ignition (files). 
Jan 13 20:16:55.203947 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 13 20:16:55.207565 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 13 20:16:55.208965 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 13 20:16:55.209163 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 13 20:16:55.226387 initrd-setup-root-after-ignition[989]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 13 20:16:55.227908 initrd-setup-root-after-ignition[993]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 13 20:16:55.229665 initrd-setup-root-after-ignition[989]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 13 20:16:55.230505 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 13 20:16:55.231613 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 13 20:16:55.235404 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 13 20:16:55.278602 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 13 20:16:55.278750 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 13 20:16:55.280986 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 13 20:16:55.282888 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 13 20:16:55.284800 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 13 20:16:55.291478 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 13 20:16:55.308004 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 13 20:16:55.315484 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 13 20:16:55.324817 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 13 20:16:55.326189 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 13 20:16:55.326867 systemd[1]: Stopped target timers.target - Timer Units. Jan 13 20:16:55.327864 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 13 20:16:55.327990 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 13 20:16:55.329432 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 13 20:16:55.330669 systemd[1]: Stopped target basic.target - Basic System. Jan 13 20:16:55.331711 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 13 20:16:55.332835 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 13 20:16:55.333913 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 13 20:16:55.334964 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 13 20:16:55.336009 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 13 20:16:55.337062 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 13 20:16:55.338166 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 13 20:16:55.339080 systemd[1]: Stopped target swap.target - Swaps. Jan 13 20:16:55.339907 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 13 20:16:55.340029 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. 
Jan 13 20:16:55.341276 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 13 20:16:55.341890 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 13 20:16:55.342922 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 13 20:16:55.342996 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 13 20:16:55.344027 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 13 20:16:55.344159 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 13 20:16:55.345589 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 13 20:16:55.345705 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 13 20:16:55.346969 systemd[1]: ignition-files.service: Deactivated successfully. Jan 13 20:16:55.347058 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 13 20:16:55.347903 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Jan 13 20:16:55.347993 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 13 20:16:55.357559 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 13 20:16:55.362554 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 13 20:16:55.363073 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 13 20:16:55.363227 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 13 20:16:55.364130 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 13 20:16:55.368173 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 13 20:16:55.377483 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 13 20:16:55.377611 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 13 20:16:55.383238 ignition[1013]: INFO : Ignition 2.20.0 Jan 13 20:16:55.383238 ignition[1013]: INFO : Stage: umount Jan 13 20:16:55.383238 ignition[1013]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 13 20:16:55.383238 ignition[1013]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jan 13 20:16:55.383238 ignition[1013]: INFO : umount: umount passed Jan 13 20:16:55.383238 ignition[1013]: INFO : Ignition finished successfully Jan 13 20:16:55.387777 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 13 20:16:55.388548 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 13 20:16:55.390308 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 13 20:16:55.391763 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 13 20:16:55.391861 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 13 20:16:55.393116 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 13 20:16:55.393235 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 13 20:16:55.394527 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 13 20:16:55.394575 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 13 20:16:55.395086 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 13 20:16:55.395158 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 13 20:16:55.395957 systemd[1]: Stopped target network.target - Network. Jan 13 20:16:55.396827 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. 
Jan 13 20:16:55.396881 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 13 20:16:55.397841 systemd[1]: Stopped target paths.target - Path Units. Jan 13 20:16:55.398633 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 13 20:16:55.399594 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 13 20:16:55.400334 systemd[1]: Stopped target slices.target - Slice Units. Jan 13 20:16:55.401160 systemd[1]: Stopped target sockets.target - Socket Units. Jan 13 20:16:55.402062 systemd[1]: iscsid.socket: Deactivated successfully. Jan 13 20:16:55.402128 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 13 20:16:55.403004 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 13 20:16:55.403040 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 13 20:16:55.403971 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 13 20:16:55.404024 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 13 20:16:55.404785 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 13 20:16:55.404823 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 13 20:16:55.405792 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 13 20:16:55.405835 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 13 20:16:55.407306 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 13 20:16:55.408592 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 13 20:16:55.413276 systemd-networkd[780]: eth0: DHCPv6 lease lost Jan 13 20:16:55.417193 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 13 20:16:55.417376 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 13 20:16:55.417430 systemd-networkd[780]: eth1: DHCPv6 lease lost Jan 13 20:16:55.420277 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 13 20:16:55.420437 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 13 20:16:55.422578 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 13 20:16:55.422636 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 13 20:16:55.428398 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 13 20:16:55.429201 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 13 20:16:55.429306 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 13 20:16:55.430422 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 13 20:16:55.430487 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 13 20:16:55.431457 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 13 20:16:55.431516 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 13 20:16:55.434178 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 13 20:16:55.434297 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 13 20:16:55.435308 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 13 20:16:55.448435 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 13 20:16:55.449175 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 13 20:16:55.458581 systemd[1]: systemd-udevd.service: Deactivated successfully. 
Jan 13 20:16:55.458931 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 13 20:16:55.462293 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 13 20:16:55.462391 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 13 20:16:55.464437 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 13 20:16:55.464512 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 13 20:16:55.465780 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 13 20:16:55.465825 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 13 20:16:55.467378 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 13 20:16:55.467424 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 13 20:16:55.468813 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 13 20:16:55.468857 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 13 20:16:55.477709 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 13 20:16:55.478421 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 13 20:16:55.478494 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 13 20:16:55.479282 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jan 13 20:16:55.479329 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 13 20:16:55.480712 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 13 20:16:55.480760 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 13 20:16:55.482142 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 13 20:16:55.482202 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 13 20:16:55.488425 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 13 20:16:55.488574 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 13 20:16:55.490039 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 13 20:16:55.495428 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 13 20:16:55.506370 systemd[1]: Switching root. Jan 13 20:16:55.546972 systemd-journald[236]: Journal stopped Jan 13 20:16:56.472295 systemd-journald[236]: Received SIGTERM from PID 1 (systemd). Jan 13 20:16:56.472356 kernel: SELinux: policy capability network_peer_controls=1 Jan 13 20:16:56.472368 kernel: SELinux: policy capability open_perms=1 Jan 13 20:16:56.472382 kernel: SELinux: policy capability extended_socket_class=1 Jan 13 20:16:56.472392 kernel: SELinux: policy capability always_check_network=0 Jan 13 20:16:56.472401 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 13 20:16:56.472411 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 13 20:16:56.472420 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 13 20:16:56.472433 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 13 20:16:56.472443 kernel: audit: type=1403 audit(1736799415.755:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 13 20:16:56.472457 systemd[1]: Successfully loaded SELinux policy in 34.955ms. Jan 13 20:16:56.472476 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 10.988ms. 
Jan 13 20:16:56.472487 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 13 20:16:56.472497 systemd[1]: Detected virtualization kvm. Jan 13 20:16:56.472508 systemd[1]: Detected architecture arm64. Jan 13 20:16:56.472518 systemd[1]: Detected first boot. Jan 13 20:16:56.472529 systemd[1]: Hostname set to <ci-4152-2-0-d-1c931fd560>. Jan 13 20:16:56.472540 systemd[1]: Initializing machine ID from VM UUID. Jan 13 20:16:56.472550 zram_generator::config[1055]: No configuration found. Jan 13 20:16:56.472564 systemd[1]: Populated /etc with preset unit settings. Jan 13 20:16:56.472578 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 13 20:16:56.472588 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 13 20:16:56.472598 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 13 20:16:56.472609 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 13 20:16:56.472621 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 13 20:16:56.472632 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 13 20:16:56.472642 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 13 20:16:56.472652 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 13 20:16:56.472666 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 13 20:16:56.472676 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 13 20:16:56.472686 systemd[1]: Created slice user.slice - User and Session Slice. Jan 13 20:16:56.472697 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 13 20:16:56.472708 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 13 20:16:56.472720 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 13 20:16:56.472730 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 13 20:16:56.472740 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 13 20:16:56.472751 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 13 20:16:56.472762 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Jan 13 20:16:56.472773 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 13 20:16:56.472783 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 13 20:16:56.472798 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 13 20:16:56.472808 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 13 20:16:56.472822 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 13 20:16:56.472834 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 13 20:16:56.472851 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 13 20:16:56.472861 systemd[1]: Reached target slices.target - Slice Units.
Jan 13 20:16:56.472872 systemd[1]: Reached target swap.target - Swaps. Jan 13 20:16:56.472882 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 13 20:16:56.472896 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 13 20:16:56.472906 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 13 20:16:56.472917 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 13 20:16:56.472928 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 13 20:16:56.472938 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 13 20:16:56.472948 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 13 20:16:56.472958 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 13 20:16:56.472969 systemd[1]: Mounting media.mount - External Media Directory... Jan 13 20:16:56.472979 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 13 20:16:56.472991 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 13 20:16:56.473002 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 13 20:16:56.473017 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 13 20:16:56.473030 systemd[1]: Reached target machines.target - Containers. Jan 13 20:16:56.473046 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 13 20:16:56.473056 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 13 20:16:56.473068 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 13 20:16:56.473079 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 13 20:16:56.473089 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 13 20:16:56.473140 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 13 20:16:56.473152 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 13 20:16:56.473162 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 13 20:16:56.473172 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 13 20:16:56.473183 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 13 20:16:56.473195 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 13 20:16:56.473226 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 13 20:16:56.473238 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 13 20:16:56.473248 systemd[1]: Stopped systemd-fsck-usr.service. Jan 13 20:16:56.473258 kernel: loop: module loaded Jan 13 20:16:56.473268 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 13 20:16:56.473278 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 13 20:16:56.473288 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 13 20:16:56.473314 kernel: fuse: init (API version 7.39) Jan 13 20:16:56.473330 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... 
Jan 13 20:16:56.473365 systemd-journald[1118]: Collecting audit messages is disabled. Jan 13 20:16:56.473389 systemd-journald[1118]: Journal started Jan 13 20:16:56.473410 systemd-journald[1118]: Runtime Journal (/run/log/journal/9a13a05e0be7497dae0ed06a231b861d) is 8.0M, max 76.5M, 68.5M free. Jan 13 20:16:56.253039 systemd[1]: Queued start job for default target multi-user.target. Jan 13 20:16:56.271945 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Jan 13 20:16:56.272709 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 13 20:16:56.480293 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 13 20:16:56.487179 systemd[1]: verity-setup.service: Deactivated successfully. Jan 13 20:16:56.487334 systemd[1]: Stopped verity-setup.service. Jan 13 20:16:56.487352 systemd[1]: Started systemd-journald.service - Journal Service. Jan 13 20:16:56.489036 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 13 20:16:56.491525 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 13 20:16:56.493301 systemd[1]: Mounted media.mount - External Media Directory. Jan 13 20:16:56.495423 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 13 20:16:56.496133 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 13 20:16:56.496877 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 13 20:16:56.498370 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 13 20:16:56.501583 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 13 20:16:56.501715 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 13 20:16:56.503724 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 13 20:16:56.503885 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 13 20:16:56.507289 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 13 20:16:56.508383 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 13 20:16:56.509637 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 13 20:16:56.509773 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 13 20:16:56.511723 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 13 20:16:56.511845 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 13 20:16:56.512880 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 13 20:16:56.514559 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 13 20:16:56.517759 kernel: ACPI: bus type drm_connector registered Jan 13 20:16:56.516586 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 13 20:16:56.519760 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 13 20:16:56.519945 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 13 20:16:56.534068 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 13 20:16:56.540472 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 13 20:16:56.547269 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 13 20:16:56.547884 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). 
Jan 13 20:16:56.547927 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 13 20:16:56.551426 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jan 13 20:16:56.559726 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 13 20:16:56.567454 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 13 20:16:56.570567 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 13 20:16:56.572955 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 13 20:16:56.577445 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 13 20:16:56.578821 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 13 20:16:56.579949 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 13 20:16:56.581437 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 13 20:16:56.585422 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 13 20:16:56.588697 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 13 20:16:56.594431 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 13 20:16:56.597828 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 13 20:16:56.599145 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 13 20:16:56.601478 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 13 20:16:56.602435 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 13 20:16:56.626405 systemd-journald[1118]: Time spent on flushing to /var/log/journal/9a13a05e0be7497dae0ed06a231b861d is 62.624ms for 1130 entries. Jan 13 20:16:56.626405 systemd-journald[1118]: System Journal (/var/log/journal/9a13a05e0be7497dae0ed06a231b861d) is 8.0M, max 584.8M, 576.8M free. Jan 13 20:16:56.708749 systemd-journald[1118]: Received client request to flush runtime journal. Jan 13 20:16:56.708812 kernel: loop0: detected capacity change from 0 to 194096 Jan 13 20:16:56.708835 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 13 20:16:56.647811 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 13 20:16:56.648973 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 13 20:16:56.662523 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jan 13 20:16:56.690121 systemd-tmpfiles[1169]: ACLs are not supported, ignoring. Jan 13 20:16:56.690134 systemd-tmpfiles[1169]: ACLs are not supported, ignoring. Jan 13 20:16:56.703145 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 13 20:16:56.712581 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 13 20:16:56.714281 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 13 20:16:56.717702 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 13 20:16:56.725648 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. 
Jan 13 20:16:56.728880 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jan 13 20:16:56.733311 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 13 20:16:56.739386 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jan 13 20:16:56.740303 kernel: loop1: detected capacity change from 0 to 116808 Jan 13 20:16:56.761555 udevadm[1188]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Jan 13 20:16:56.781492 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 13 20:16:56.787269 kernel: loop2: detected capacity change from 0 to 113536 Jan 13 20:16:56.789382 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 13 20:16:56.804158 systemd-tmpfiles[1193]: ACLs are not supported, ignoring. Jan 13 20:16:56.804177 systemd-tmpfiles[1193]: ACLs are not supported, ignoring. Jan 13 20:16:56.811267 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 13 20:16:56.828822 kernel: loop3: detected capacity change from 0 to 8 Jan 13 20:16:56.848273 kernel: loop4: detected capacity change from 0 to 194096 Jan 13 20:16:56.870640 kernel: loop5: detected capacity change from 0 to 116808 Jan 13 20:16:56.893743 kernel: loop6: detected capacity change from 0 to 113536 Jan 13 20:16:56.908459 kernel: loop7: detected capacity change from 0 to 8 Jan 13 20:16:56.909398 (sd-merge)[1198]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-hetzner'. Jan 13 20:16:56.909855 (sd-merge)[1198]: Merged extensions into '/usr'. Jan 13 20:16:56.916679 systemd[1]: Reloading requested from client PID 1168 ('systemd-sysext') (unit systemd-sysext.service)... Jan 13 20:16:56.916707 systemd[1]: Reloading... Jan 13 20:16:57.046702 zram_generator::config[1223]: No configuration found. Jan 13 20:16:57.223430 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 13 20:16:57.235928 ldconfig[1163]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 13 20:16:57.278275 systemd[1]: Reloading finished in 361 ms. Jan 13 20:16:57.306989 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 13 20:16:57.310845 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 13 20:16:57.325516 systemd[1]: Starting ensure-sysext.service... Jan 13 20:16:57.329152 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 13 20:16:57.345357 systemd[1]: Reloading requested from client PID 1261 ('systemctl') (unit ensure-sysext.service)... Jan 13 20:16:57.345376 systemd[1]: Reloading... Jan 13 20:16:57.360123 systemd-tmpfiles[1262]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 13 20:16:57.360754 systemd-tmpfiles[1262]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 13 20:16:57.361582 systemd-tmpfiles[1262]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 13 20:16:57.361896 systemd-tmpfiles[1262]: ACLs are not supported, ignoring. 
Jan 13 20:16:57.362009 systemd-tmpfiles[1262]: ACLs are not supported, ignoring. Jan 13 20:16:57.366487 systemd-tmpfiles[1262]: Detected autofs mount point /boot during canonicalization of boot. Jan 13 20:16:57.366629 systemd-tmpfiles[1262]: Skipping /boot Jan 13 20:16:57.374341 systemd-tmpfiles[1262]: Detected autofs mount point /boot during canonicalization of boot. Jan 13 20:16:57.374491 systemd-tmpfiles[1262]: Skipping /boot Jan 13 20:16:57.431259 zram_generator::config[1285]: No configuration found. Jan 13 20:16:57.546894 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 13 20:16:57.594295 systemd[1]: Reloading finished in 248 ms. Jan 13 20:16:57.617410 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 13 20:16:57.624062 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 13 20:16:57.636450 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 13 20:16:57.646683 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 13 20:16:57.650613 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 13 20:16:57.656305 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 13 20:16:57.663487 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 13 20:16:57.666567 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 13 20:16:57.675657 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 13 20:16:57.685566 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 13 20:16:57.692483 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 13 20:16:57.696541 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 13 20:16:57.697426 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 13 20:16:57.701900 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 13 20:16:57.702072 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 13 20:16:57.704483 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 13 20:16:57.706939 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 13 20:16:57.707135 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 13 20:16:57.710071 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 13 20:16:57.713826 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 13 20:16:57.718550 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 13 20:16:57.720472 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 13 20:16:57.723380 systemd[1]: Finished ensure-sysext.service. Jan 13 20:16:57.725674 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. 
Jan 13 20:16:57.747712 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Jan 13 20:16:57.754419 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jan 13 20:16:57.756391 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jan 13 20:16:57.758052 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 13 20:16:57.759025 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 13 20:16:57.760783 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 13 20:16:57.761521 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 13 20:16:57.762878 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 13 20:16:57.764310 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 13 20:16:57.765411 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 13 20:16:57.765562 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 13 20:16:57.776189 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 13 20:16:57.776302 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 13 20:16:57.777313 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jan 13 20:16:57.783578 systemd-udevd[1333]: Using default interface naming scheme 'v255'.
Jan 13 20:16:57.811964 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jan 13 20:16:57.814040 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jan 13 20:16:57.819229 augenrules[1372]: No rules
Jan 13 20:16:57.818507 systemd[1]: audit-rules.service: Deactivated successfully.
Jan 13 20:16:57.819440 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jan 13 20:16:57.830010 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 13 20:16:57.842422 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 13 20:16:57.844398 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jan 13 20:16:57.942584 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Jan 13 20:16:57.943433 systemd[1]: Reached target time-set.target - System Time Set.
Jan 13 20:16:57.955067 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped.
Jan 13 20:16:57.994167 systemd-networkd[1386]: lo: Link UP
Jan 13 20:16:57.994199 systemd-networkd[1386]: lo: Gained carrier
Jan 13 20:16:57.999032 systemd-networkd[1386]: Enumeration completed
Jan 13 20:16:57.999248 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 13 20:16:58.000312 systemd-networkd[1386]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 13 20:16:58.000316 systemd-networkd[1386]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 13 20:16:58.001966 systemd-networkd[1386]: eth0: Link UP
Jan 13 20:16:58.001980 systemd-networkd[1386]: eth0: Gained carrier
Jan 13 20:16:58.001997 systemd-networkd[1386]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 13 20:16:58.006942 systemd-resolved[1331]: Positive Trust Anchors:
Jan 13 20:16:58.007435 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jan 13 20:16:58.008378 systemd-resolved[1331]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 13 20:16:58.008414 systemd-resolved[1331]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 13 20:16:58.014791 systemd-resolved[1331]: Using system hostname 'ci-4152-2-0-d-1c931fd560'.
Jan 13 20:16:58.016755 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 13 20:16:58.017721 systemd[1]: Reached target network.target - Network.
Jan 13 20:16:58.018820 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 13 20:16:58.036257 systemd-networkd[1386]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 13 20:16:58.047297 systemd-networkd[1386]: eth0: DHCPv4 address 138.199.153.206/32, gateway 172.31.1.1 acquired from 172.31.1.1
Jan 13 20:16:58.048242 systemd-timesyncd[1351]: Network configuration changed, trying to establish connection.
Jan 13 20:16:58.085364 systemd-networkd[1386]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 13 20:16:58.085375 systemd-networkd[1386]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 13 20:16:58.087365 systemd-networkd[1386]: eth1: Link UP
Jan 13 20:16:58.087376 systemd-networkd[1386]: eth1: Gained carrier
Jan 13 20:16:58.087396 systemd-networkd[1386]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 13 20:16:58.087639 systemd-timesyncd[1351]: Network configuration changed, trying to establish connection.
Jan 13 20:16:58.092368 systemd-timesyncd[1351]: Network configuration changed, trying to establish connection.
Jan 13 20:16:58.097647 kernel: mousedev: PS/2 mouse device common for all mice
Jan 13 20:16:58.120492 systemd-networkd[1386]: eth1: DHCPv4 address 10.0.0.3/32, gateway 10.0.0.1 acquired from 10.0.0.1
Jan 13 20:16:58.121868 systemd-timesyncd[1351]: Network configuration changed, trying to establish connection.
Jan 13 20:16:58.127659 systemd[1]: Condition check resulted in dev-virtio\x2dports-org.qemu.guest_agent.0.device - /dev/virtio-ports/org.qemu.guest_agent.0 being skipped.
Jan 13 20:16:58.127769 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 13 20:16:58.136615 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 13 20:16:58.142622 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 13 20:16:58.149512 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 13 20:16:58.150265 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 13 20:16:58.150299 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jan 13 20:16:58.150646 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 13 20:16:58.150779 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 13 20:16:58.161233 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (1399)
Jan 13 20:16:58.164715 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 13 20:16:58.164862 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 13 20:16:58.166322 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 13 20:16:58.185036 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 13 20:16:58.185361 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 13 20:16:58.199896 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 13 20:16:58.224279 kernel: [drm] pci: virtio-gpu-pci detected at 0000:00:01.0
Jan 13 20:16:58.225500 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Jan 13 20:16:58.225569 kernel: [drm] features: -context_init
Jan 13 20:16:58.235234 kernel: [drm] number of scanouts: 1
Jan 13 20:16:58.235296 kernel: [drm] number of cap sets: 0
Jan 13 20:16:58.238294 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:01.0 on minor 0
Jan 13 20:16:58.248811 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
Jan 13 20:16:58.253860 kernel: Console: switching to colour frame buffer device 160x50
Jan 13 20:16:58.261330 kernel: virtio-pci 0000:00:01.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Jan 13 20:16:58.267592 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jan 13 20:16:58.270816 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 13 20:16:58.277232 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 13 20:16:58.277413 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 13 20:16:58.286777 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 13 20:16:58.288864 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jan 13 20:16:58.351595 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 13 20:16:58.414331 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Jan 13 20:16:58.421683 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Jan 13 20:16:58.441060 lvm[1448]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 13 20:16:58.471610 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Jan 13 20:16:58.473933 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 13 20:16:58.474957 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 13 20:16:58.475662 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jan 13 20:16:58.476384 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jan 13 20:16:58.477263 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jan 13 20:16:58.477935 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jan 13 20:16:58.478840 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jan 13 20:16:58.479531 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jan 13 20:16:58.479572 systemd[1]: Reached target paths.target - Path Units.
Jan 13 20:16:58.480029 systemd[1]: Reached target timers.target - Timer Units.
Jan 13 20:16:58.481867 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jan 13 20:16:58.483992 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jan 13 20:16:58.489550 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jan 13 20:16:58.492130 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Jan 13 20:16:58.493507 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jan 13 20:16:58.494168 systemd[1]: Reached target sockets.target - Socket Units.
Jan 13 20:16:58.494735 systemd[1]: Reached target basic.target - Basic System.
Jan 13 20:16:58.495287 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jan 13 20:16:58.495325 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jan 13 20:16:58.498478 systemd[1]: Starting containerd.service - containerd container runtime...
Jan 13 20:16:58.503394 lvm[1452]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 13 20:16:58.506585 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Jan 13 20:16:58.509508 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jan 13 20:16:58.528457 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jan 13 20:16:58.536364 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jan 13 20:16:58.538036 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jan 13 20:16:58.545501 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jan 13 20:16:58.549705 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Jan 13 20:16:58.551857 coreos-metadata[1454]: Jan 13 20:16:58.551 INFO Fetching http://169.254.169.254/hetzner/v1/metadata: Attempt #1
Jan 13 20:16:58.552184 jq[1456]: false
Jan 13 20:16:58.558385 coreos-metadata[1454]: Jan 13 20:16:58.553 INFO Fetch successful
Jan 13 20:16:58.558385 coreos-metadata[1454]: Jan 13 20:16:58.553 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/private-networks: Attempt #1
Jan 13 20:16:58.558385 coreos-metadata[1454]: Jan 13 20:16:58.554 INFO Fetch successful
Jan 13 20:16:58.555803 systemd[1]: Started qemu-guest-agent.service - QEMU Guest Agent.
Jan 13 20:16:58.563455 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jan 13 20:16:58.569567 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jan 13 20:16:58.575042 extend-filesystems[1459]: Found loop4
Jan 13 20:16:58.575042 extend-filesystems[1459]: Found loop5
Jan 13 20:16:58.575042 extend-filesystems[1459]: Found loop6
Jan 13 20:16:58.575042 extend-filesystems[1459]: Found loop7
Jan 13 20:16:58.575042 extend-filesystems[1459]: Found sda
Jan 13 20:16:58.575042 extend-filesystems[1459]: Found sda1
Jan 13 20:16:58.575042 extend-filesystems[1459]: Found sda2
Jan 13 20:16:58.575042 extend-filesystems[1459]: Found sda3
Jan 13 20:16:58.575042 extend-filesystems[1459]: Found usr
Jan 13 20:16:58.575042 extend-filesystems[1459]: Found sda4
Jan 13 20:16:58.596024 extend-filesystems[1459]: Found sda6
Jan 13 20:16:58.596024 extend-filesystems[1459]: Found sda7
Jan 13 20:16:58.596024 extend-filesystems[1459]: Found sda9
Jan 13 20:16:58.596024 extend-filesystems[1459]: Checking size of /dev/sda9
Jan 13 20:16:58.583476 systemd[1]: Starting systemd-logind.service - User Login Management...
Jan 13 20:16:58.598166 dbus-daemon[1455]: [system] SELinux support is enabled
Jan 13 20:16:58.586950 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jan 13 20:16:58.588333 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jan 13 20:16:58.589978 systemd[1]: Starting update-engine.service - Update Engine...
Jan 13 20:16:58.597702 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Jan 13 20:16:58.603030 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Jan 13 20:16:58.610068 jq[1471]: true
Jan 13 20:16:58.610260 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Jan 13 20:16:58.623405 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jan 13 20:16:58.623610 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Jan 13 20:16:58.629808 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jan 13 20:16:58.631257 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Jan 13 20:16:58.663352 extend-filesystems[1459]: Resized partition /dev/sda9
Jan 13 20:16:58.670473 extend-filesystems[1496]: resize2fs 1.47.1 (20-May-2024)
Jan 13 20:16:58.673548 systemd[1]: motdgen.service: Deactivated successfully.
Jan 13 20:16:58.684343 jq[1480]: true
Jan 13 20:16:58.675039 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Jan 13 20:16:58.678578 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jan 13 20:16:58.678615 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Jan 13 20:16:58.680449 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Jan 13 20:16:58.680475 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Jan 13 20:16:58.692237 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 9393147 blocks
Jan 13 20:16:58.704691 update_engine[1469]: I20250113 20:16:58.704052 1469 main.cc:92] Flatcar Update Engine starting
Jan 13 20:16:58.711495 update_engine[1469]: I20250113 20:16:58.709218 1469 update_check_scheduler.cc:74] Next update check in 6m10s
Jan 13 20:16:58.709337 systemd[1]: Started update-engine.service - Update Engine.
Jan 13 20:16:58.717443 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Jan 13 20:16:58.720055 tar[1479]: linux-arm64/helm
Jan 13 20:16:58.720883 systemd-logind[1466]: New seat seat0.
Jan 13 20:16:58.721013 (ntainerd)[1499]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Jan 13 20:16:58.728243 systemd-logind[1466]: Watching system buttons on /dev/input/event0 (Power Button)
Jan 13 20:16:58.728283 systemd-logind[1466]: Watching system buttons on /dev/input/event2 (QEMU QEMU USB Keyboard)
Jan 13 20:16:58.730255 systemd[1]: Started systemd-logind.service - User Login Management.
Jan 13 20:16:58.756799 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Jan 13 20:16:58.759811 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Jan 13 20:16:58.816303 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (1395)
Jan 13 20:16:58.826236 kernel: EXT4-fs (sda9): resized filesystem to 9393147
Jan 13 20:16:58.855853 extend-filesystems[1496]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required
Jan 13 20:16:58.855853 extend-filesystems[1496]: old_desc_blocks = 1, new_desc_blocks = 5
Jan 13 20:16:58.855853 extend-filesystems[1496]: The filesystem on /dev/sda9 is now 9393147 (4k) blocks long.
Jan 13 20:16:58.859979 extend-filesystems[1459]: Resized filesystem in /dev/sda9
Jan 13 20:16:58.859979 extend-filesystems[1459]: Found sr0
Jan 13 20:16:58.858947 systemd[1]: extend-filesystems.service: Deactivated successfully.
Jan 13 20:16:58.859439 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Jan 13 20:16:58.871376 bash[1528]: Updated "/home/core/.ssh/authorized_keys"
Jan 13 20:16:58.873658 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Jan 13 20:16:58.889541 systemd[1]: Starting sshkeys.service...
Jan 13 20:16:58.929835 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Jan 13 20:16:58.940624 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Jan 13 20:16:58.998007 coreos-metadata[1536]: Jan 13 20:16:58.997 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/public-keys: Attempt #1
Jan 13 20:16:58.998007 coreos-metadata[1536]: Jan 13 20:16:58.997 INFO Fetch successful
Jan 13 20:16:59.005836 unknown[1536]: wrote ssh authorized keys file for user: core
Jan 13 20:16:59.032834 update-ssh-keys[1542]: Updated "/home/core/.ssh/authorized_keys"
Jan 13 20:16:59.038265 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Jan 13 20:16:59.042965 systemd[1]: Finished sshkeys.service.
Jan 13 20:16:59.051227 locksmithd[1507]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Jan 13 20:16:59.093435 containerd[1499]: time="2025-01-13T20:16:59.093333400Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23
Jan 13 20:16:59.166178 containerd[1499]: time="2025-01-13T20:16:59.166048520Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Jan 13 20:16:59.170286 containerd[1499]: time="2025-01-13T20:16:59.169949040Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.71-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Jan 13 20:16:59.170286 containerd[1499]: time="2025-01-13T20:16:59.170007920Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Jan 13 20:16:59.170286 containerd[1499]: time="2025-01-13T20:16:59.170029200Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Jan 13 20:16:59.172687 containerd[1499]: time="2025-01-13T20:16:59.172328880Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Jan 13 20:16:59.172687 containerd[1499]: time="2025-01-13T20:16:59.172373360Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Jan 13 20:16:59.172687 containerd[1499]: time="2025-01-13T20:16:59.172456240Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Jan 13 20:16:59.172687 containerd[1499]: time="2025-01-13T20:16:59.172470000Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Jan 13 20:16:59.172687 containerd[1499]: time="2025-01-13T20:16:59.172667480Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jan 13 20:16:59.172687 containerd[1499]: time="2025-01-13T20:16:59.172691400Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Jan 13 20:16:59.172858 containerd[1499]: time="2025-01-13T20:16:59.172705200Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Jan 13 20:16:59.172858 containerd[1499]: time="2025-01-13T20:16:59.172714320Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Jan 13 20:16:59.172858 containerd[1499]: time="2025-01-13T20:16:59.172798920Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Jan 13 20:16:59.173057 containerd[1499]: time="2025-01-13T20:16:59.173030600Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Jan 13 20:16:59.173315 containerd[1499]: time="2025-01-13T20:16:59.173192040Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jan 13 20:16:59.173315 containerd[1499]: time="2025-01-13T20:16:59.173247320Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Jan 13 20:16:59.173378 containerd[1499]: time="2025-01-13T20:16:59.173350360Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Jan 13 20:16:59.173614 containerd[1499]: time="2025-01-13T20:16:59.173397280Z" level=info msg="metadata content store policy set" policy=shared
Jan 13 20:16:59.177768 containerd[1499]: time="2025-01-13T20:16:59.177731680Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Jan 13 20:16:59.177850 containerd[1499]: time="2025-01-13T20:16:59.177790640Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Jan 13 20:16:59.177850 containerd[1499]: time="2025-01-13T20:16:59.177815920Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Jan 13 20:16:59.177850 containerd[1499]: time="2025-01-13T20:16:59.177832440Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Jan 13 20:16:59.177850 containerd[1499]: time="2025-01-13T20:16:59.177847400Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Jan 13 20:16:59.178532 containerd[1499]: time="2025-01-13T20:16:59.178286440Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Jan 13 20:16:59.178623 containerd[1499]: time="2025-01-13T20:16:59.178602480Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Jan 13 20:16:59.178730 containerd[1499]: time="2025-01-13T20:16:59.178709120Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Jan 13 20:16:59.178757 containerd[1499]: time="2025-01-13T20:16:59.178730280Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Jan 13 20:16:59.178757 containerd[1499]: time="2025-01-13T20:16:59.178747640Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Jan 13 20:16:59.178792 containerd[1499]: time="2025-01-13T20:16:59.178761480Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Jan 13 20:16:59.178792 containerd[1499]: time="2025-01-13T20:16:59.178780920Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Jan 13 20:16:59.178832 containerd[1499]: time="2025-01-13T20:16:59.178793440Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Jan 13 20:16:59.178832 containerd[1499]: time="2025-01-13T20:16:59.178807560Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Jan 13 20:16:59.178832 containerd[1499]: time="2025-01-13T20:16:59.178821320Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Jan 13 20:16:59.178886 containerd[1499]: time="2025-01-13T20:16:59.178840960Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Jan 13 20:16:59.178886 containerd[1499]: time="2025-01-13T20:16:59.178853600Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Jan 13 20:16:59.178886 containerd[1499]: time="2025-01-13T20:16:59.178864800Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Jan 13 20:16:59.178886 containerd[1499]: time="2025-01-13T20:16:59.178884600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Jan 13 20:16:59.178951 containerd[1499]: time="2025-01-13T20:16:59.178898760Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Jan 13 20:16:59.178951 containerd[1499]: time="2025-01-13T20:16:59.178911120Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Jan 13 20:16:59.178951 containerd[1499]: time="2025-01-13T20:16:59.178923080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Jan 13 20:16:59.178951 containerd[1499]: time="2025-01-13T20:16:59.178935480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Jan 13 20:16:59.178951 containerd[1499]: time="2025-01-13T20:16:59.178948080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Jan 13 20:16:59.179137 containerd[1499]: time="2025-01-13T20:16:59.178959400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Jan 13 20:16:59.179137 containerd[1499]: time="2025-01-13T20:16:59.178972000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Jan 13 20:16:59.179137 containerd[1499]: time="2025-01-13T20:16:59.178983560Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Jan 13 20:16:59.179137 containerd[1499]: time="2025-01-13T20:16:59.178996720Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Jan 13 20:16:59.179137 containerd[1499]: time="2025-01-13T20:16:59.179007920Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Jan 13 20:16:59.179137 containerd[1499]: time="2025-01-13T20:16:59.179019640Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Jan 13 20:16:59.179137 containerd[1499]: time="2025-01-13T20:16:59.179031480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Jan 13 20:16:59.179137 containerd[1499]: time="2025-01-13T20:16:59.179051880Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Jan 13 20:16:59.179137 containerd[1499]: time="2025-01-13T20:16:59.179072640Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Jan 13 20:16:59.179137 containerd[1499]: time="2025-01-13T20:16:59.179100400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Jan 13 20:16:59.179137 containerd[1499]: time="2025-01-13T20:16:59.179111960Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Jan 13 20:16:59.179360 containerd[1499]: time="2025-01-13T20:16:59.179317560Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Jan 13 20:16:59.179360 containerd[1499]: time="2025-01-13T20:16:59.179337320Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Jan 13 20:16:59.179360 containerd[1499]: time="2025-01-13T20:16:59.179348080Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Jan 13 20:16:59.179420 containerd[1499]: time="2025-01-13T20:16:59.179359680Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Jan 13 20:16:59.179420 containerd[1499]: time="2025-01-13T20:16:59.179370000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Jan 13 20:16:59.179420 containerd[1499]: time="2025-01-13T20:16:59.179382200Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Jan 13 20:16:59.179420 containerd[1499]: time="2025-01-13T20:16:59.179392000Z" level=info msg="NRI interface is disabled by configuration."
Jan 13 20:16:59.179420 containerd[1499]: time="2025-01-13T20:16:59.179402920Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Jan 13 20:16:59.183369 containerd[1499]: time="2025-01-13T20:16:59.182400960Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Jan 13 20:16:59.183369 containerd[1499]: time="2025-01-13T20:16:59.182462440Z" level=info msg="Connect containerd service"
Jan 13 20:16:59.183369 containerd[1499]: time="2025-01-13T20:16:59.182503920Z" level=info msg="using legacy CRI server"
Jan 13 20:16:59.183369 containerd[1499]: time="2025-01-13T20:16:59.182510880Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Jan 13 20:16:59.183369 containerd[1499]: time="2025-01-13T20:16:59.182747800Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Jan 13 20:16:59.185382 containerd[1499]: time="2025-01-13T20:16:59.184525440Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jan 13 20:16:59.185382 containerd[1499]: time="2025-01-13T20:16:59.184891040Z" level=info msg="Start subscribing containerd event"
Jan 13 20:16:59.185382 containerd[1499]: time="2025-01-13T20:16:59.184948160Z" level=info msg="Start recovering state"
Jan 13 20:16:59.185382 containerd[1499]: time="2025-01-13T20:16:59.185019320Z" level=info msg="Start event monitor"
Jan 13 20:16:59.185382 containerd[1499]: time="2025-01-13T20:16:59.185032000Z" level=info msg="Start snapshots syncer"
Jan 13 20:16:59.185382 containerd[1499]: time="2025-01-13T20:16:59.185041680Z" level=info msg="Start cni network conf syncer for default"
Jan 13 20:16:59.185382 containerd[1499]: time="2025-01-13T20:16:59.185050280Z" level=info msg="Start streaming server"
Jan 13 20:16:59.189296 containerd[1499]: time="2025-01-13T20:16:59.188484520Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Jan 13 20:16:59.189296 containerd[1499]: time="2025-01-13T20:16:59.188581520Z" level=info msg=serving... address=/run/containerd/containerd.sock
Jan 13 20:16:59.188748 systemd[1]: Started containerd.service - containerd container runtime.
Jan 13 20:16:59.189767 containerd[1499]: time="2025-01-13T20:16:59.189647440Z" level=info msg="containerd successfully booted in 0.099932s"
Jan 13 20:16:59.348037 tar[1479]: linux-arm64/LICENSE
Jan 13 20:16:59.348442 tar[1479]: linux-arm64/README.md
Jan 13 20:16:59.360257 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Jan 13 20:16:59.820414 systemd-networkd[1386]: eth0: Gained IPv6LL
Jan 13 20:16:59.822107 systemd-timesyncd[1351]: Network configuration changed, trying to establish connection.
Jan 13 20:16:59.825685 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Jan 13 20:16:59.828307 systemd[1]: Reached target network-online.target - Network is Online.
Jan 13 20:16:59.837506 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 13 20:16:59.842690 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Jan 13 20:16:59.881927 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Jan 13 20:16:59.967361 sshd_keygen[1503]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Jan 13 20:16:59.996345 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Jan 13 20:17:00.003757 systemd[1]: Starting issuegen.service - Generate /run/issue...
Jan 13 20:17:00.023890 systemd[1]: issuegen.service: Deactivated successfully.
Jan 13 20:17:00.024299 systemd[1]: Finished issuegen.service - Generate /run/issue.
Jan 13 20:17:00.033724 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Jan 13 20:17:00.044145 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Jan 13 20:17:00.051615 systemd[1]: Started getty@tty1.service - Getty on tty1.
Jan 13 20:17:00.055437 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0.
Jan 13 20:17:00.056495 systemd[1]: Reached target getty.target - Login Prompts.
Jan 13 20:17:00.140483 systemd-networkd[1386]: eth1: Gained IPv6LL
Jan 13 20:17:00.140863 systemd-timesyncd[1351]: Network configuration changed, trying to establish connection.
Jan 13 20:17:00.625114 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 13 20:17:00.627592 systemd[1]: Reached target multi-user.target - Multi-User System.
Jan 13 20:17:00.630358 systemd[1]: Startup finished in 779ms (kernel) + 7.056s (initrd) + 4.910s (userspace) = 12.746s.
Jan 13 20:17:00.630592 (kubelet)[1584]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 13 20:17:01.241492 kubelet[1584]: E0113 20:17:01.241391 1584 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 13 20:17:01.243662 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 13 20:17:01.243975 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 13 20:17:11.257457 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Jan 13 20:17:11.264702 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 13 20:17:11.397557 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 13 20:17:11.397622 (kubelet)[1605]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 13 20:17:11.456136 kubelet[1605]: E0113 20:17:11.456069 1605 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 13 20:17:11.459097 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 13 20:17:11.459322 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 13 20:17:21.507183 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Jan 13 20:17:21.516667 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 13 20:17:21.639614 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 13 20:17:21.639659 (kubelet)[1621]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 13 20:17:21.695117 kubelet[1621]: E0113 20:17:21.695070 1621 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 13 20:17:21.697788 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 13 20:17:21.697924 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 13 20:17:30.478774 systemd-timesyncd[1351]: Contacted time server 144.91.126.59:123 (2.flatcar.pool.ntp.org).
Jan 13 20:17:30.478878 systemd-timesyncd[1351]: Initial clock synchronization to Mon 2025-01-13 20:17:30.378229 UTC.
Jan 13 20:17:31.757171 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Jan 13 20:17:31.766594 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 13 20:17:31.875153 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 13 20:17:31.880661 (kubelet)[1638]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 13 20:17:31.927137 kubelet[1638]: E0113 20:17:31.927003 1638 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 13 20:17:31.930395 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 13 20:17:31.930562 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 13 20:17:42.007766 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
Jan 13 20:17:42.015577 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 13 20:17:42.123595 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 13 20:17:42.135945 (kubelet)[1654]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 13 20:17:42.189614 kubelet[1654]: E0113 20:17:42.189547 1654 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 13 20:17:42.192240 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 13 20:17:42.192408 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 13 20:17:43.885609 update_engine[1469]: I20250113 20:17:43.885448 1469 update_attempter.cc:509] Updating boot flags...
Jan 13 20:17:43.933262 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (1672)
Jan 13 20:17:52.257057 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5.
Jan 13 20:17:52.272514 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 13 20:17:52.408284 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 13 20:17:52.422774 (kubelet)[1686]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 13 20:17:52.477925 kubelet[1686]: E0113 20:17:52.477857 1686 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 13 20:17:52.481494 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 13 20:17:52.481758 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 13 20:18:02.507428 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6.
Jan 13 20:18:02.514785 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 13 20:18:02.644155 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 13 20:18:02.654759 (kubelet)[1702]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 13 20:18:02.708063 kubelet[1702]: E0113 20:18:02.708006 1702 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 13 20:18:02.710811 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 13 20:18:02.710964 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 13 20:18:12.757104 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7.
Jan 13 20:18:12.766071 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 13 20:18:12.878079 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 13 20:18:12.893001 (kubelet)[1718]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 13 20:18:12.940934 kubelet[1718]: E0113 20:18:12.940866 1718 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 13 20:18:12.943527 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 13 20:18:12.943656 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 13 20:18:23.007176 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 8.
Jan 13 20:18:23.024634 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 13 20:18:23.153540 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 13 20:18:23.153635 (kubelet)[1735]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 13 20:18:23.206516 kubelet[1735]: E0113 20:18:23.206445 1735 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 13 20:18:23.211714 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 13 20:18:23.211952 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 13 20:18:33.257293 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 9.
Jan 13 20:18:33.270572 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 13 20:18:33.398543 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 13 20:18:33.400878 (kubelet)[1750]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 13 20:18:33.448402 kubelet[1750]: E0113 20:18:33.448354 1750 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 13 20:18:33.450526 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 13 20:18:33.450662 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 13 20:18:43.507378 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 10.
Jan 13 20:18:43.516635 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 13 20:18:43.640601 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 13 20:18:43.640863 (kubelet)[1765]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 13 20:18:43.684822 kubelet[1765]: E0113 20:18:43.684778 1765 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 13 20:18:43.687761 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 13 20:18:43.687918 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 13 20:18:48.275735 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Jan 13 20:18:48.284635 systemd[1]: Started sshd@0-138.199.153.206:22-147.75.109.163:41056.service - OpenSSH per-connection server daemon (147.75.109.163:41056).
Jan 13 20:18:49.287665 sshd[1775]: Accepted publickey for core from 147.75.109.163 port 41056 ssh2: RSA SHA256:deYc8PRIzQGYRfQ4zGwGGzIklxWaqt3H8qZh9jD9rBo
Jan 13 20:18:49.290421 sshd-session[1775]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:18:49.302074 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Jan 13 20:18:49.307764 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Jan 13 20:18:49.311899 systemd-logind[1466]: New session 1 of user core.
Jan 13 20:18:49.327161 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Jan 13 20:18:49.333784 systemd[1]: Starting user@500.service - User Manager for UID 500...
Jan 13 20:18:49.338768 (systemd)[1779]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Jan 13 20:18:49.449835 systemd[1779]: Queued start job for default target default.target.
Jan 13 20:18:49.459184 systemd[1779]: Created slice app.slice - User Application Slice.
Jan 13 20:18:49.459307 systemd[1779]: Reached target paths.target - Paths.
Jan 13 20:18:49.459338 systemd[1779]: Reached target timers.target - Timers.
Jan 13 20:18:49.461448 systemd[1779]: Starting dbus.socket - D-Bus User Message Bus Socket...
Jan 13 20:18:49.475050 systemd[1779]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Jan 13 20:18:49.475406 systemd[1779]: Reached target sockets.target - Sockets.
Jan 13 20:18:49.475525 systemd[1779]: Reached target basic.target - Basic System.
Jan 13 20:18:49.475663 systemd[1779]: Reached target default.target - Main User Target.
Jan 13 20:18:49.475764 systemd[1779]: Startup finished in 129ms.
Jan 13 20:18:49.476144 systemd[1]: Started user@500.service - User Manager for UID 500.
Jan 13 20:18:49.484648 systemd[1]: Started session-1.scope - Session 1 of User core.
Jan 13 20:18:50.184672 systemd[1]: Started sshd@1-138.199.153.206:22-147.75.109.163:41072.service - OpenSSH per-connection server daemon (147.75.109.163:41072).
Jan 13 20:18:51.163492 sshd[1790]: Accepted publickey for core from 147.75.109.163 port 41072 ssh2: RSA SHA256:deYc8PRIzQGYRfQ4zGwGGzIklxWaqt3H8qZh9jD9rBo
Jan 13 20:18:51.165241 sshd-session[1790]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:18:51.170777 systemd-logind[1466]: New session 2 of user core.
Jan 13 20:18:51.177521 systemd[1]: Started session-2.scope - Session 2 of User core.
Jan 13 20:18:51.842399 sshd[1792]: Connection closed by 147.75.109.163 port 41072
Jan 13 20:18:51.841502 sshd-session[1790]: pam_unix(sshd:session): session closed for user core
Jan 13 20:18:51.847484 systemd-logind[1466]: Session 2 logged out. Waiting for processes to exit.
Jan 13 20:18:51.847752 systemd[1]: sshd@1-138.199.153.206:22-147.75.109.163:41072.service: Deactivated successfully.
Jan 13 20:18:51.849494 systemd[1]: session-2.scope: Deactivated successfully.
Jan 13 20:18:51.852248 systemd-logind[1466]: Removed session 2.
Jan 13 20:18:52.017308 systemd[1]: Started sshd@2-138.199.153.206:22-147.75.109.163:41086.service - OpenSSH per-connection server daemon (147.75.109.163:41086).
Jan 13 20:18:52.998324 sshd[1797]: Accepted publickey for core from 147.75.109.163 port 41086 ssh2: RSA SHA256:deYc8PRIzQGYRfQ4zGwGGzIklxWaqt3H8qZh9jD9rBo
Jan 13 20:18:53.000619 sshd-session[1797]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:18:53.008708 systemd-logind[1466]: New session 3 of user core.
Jan 13 20:18:53.015775 systemd[1]: Started session-3.scope - Session 3 of User core.
Jan 13 20:18:53.674810 sshd[1799]: Connection closed by 147.75.109.163 port 41086
Jan 13 20:18:53.675587 sshd-session[1797]: pam_unix(sshd:session): session closed for user core
Jan 13 20:18:53.679978 systemd[1]: sshd@2-138.199.153.206:22-147.75.109.163:41086.service: Deactivated successfully.
Jan 13 20:18:53.683945 systemd[1]: session-3.scope: Deactivated successfully.
Jan 13 20:18:53.685426 systemd-logind[1466]: Session 3 logged out. Waiting for processes to exit.
Jan 13 20:18:53.686517 systemd-logind[1466]: Removed session 3.
Jan 13 20:18:53.756995 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 11.
Jan 13 20:18:53.763450 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 13 20:18:53.849668 systemd[1]: Started sshd@3-138.199.153.206:22-147.75.109.163:41100.service - OpenSSH per-connection server daemon (147.75.109.163:41100).
Jan 13 20:18:53.907526 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 13 20:18:53.917784 (kubelet)[1814]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 13 20:18:53.964785 kubelet[1814]: E0113 20:18:53.964648 1814 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 13 20:18:53.967107 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 13 20:18:53.967272 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 13 20:18:54.833800 sshd[1807]: Accepted publickey for core from 147.75.109.163 port 41100 ssh2: RSA SHA256:deYc8PRIzQGYRfQ4zGwGGzIklxWaqt3H8qZh9jD9rBo
Jan 13 20:18:54.836018 sshd-session[1807]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:18:54.843023 systemd-logind[1466]: New session 4 of user core.
Jan 13 20:18:54.853555 systemd[1]: Started session-4.scope - Session 4 of User core.
Jan 13 20:18:55.516587 sshd[1822]: Connection closed by 147.75.109.163 port 41100
Jan 13 20:18:55.517450 sshd-session[1807]: pam_unix(sshd:session): session closed for user core
Jan 13 20:18:55.520845 systemd[1]: sshd@3-138.199.153.206:22-147.75.109.163:41100.service: Deactivated successfully.
Jan 13 20:18:55.524110 systemd[1]: session-4.scope: Deactivated successfully.
Jan 13 20:18:55.526112 systemd-logind[1466]: Session 4 logged out. Waiting for processes to exit.
Jan 13 20:18:55.528067 systemd-logind[1466]: Removed session 4.
Jan 13 20:18:55.686391 systemd[1]: Started sshd@4-138.199.153.206:22-147.75.109.163:41112.service - OpenSSH per-connection server daemon (147.75.109.163:41112).
Jan 13 20:18:56.699045 sshd[1827]: Accepted publickey for core from 147.75.109.163 port 41112 ssh2: RSA SHA256:deYc8PRIzQGYRfQ4zGwGGzIklxWaqt3H8qZh9jD9rBo
Jan 13 20:18:56.701729 sshd-session[1827]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:18:56.707640 systemd-logind[1466]: New session 5 of user core.
Jan 13 20:18:56.717501 systemd[1]: Started session-5.scope - Session 5 of User core.
Jan 13 20:18:57.236241 sudo[1830]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Jan 13 20:18:57.236538 sudo[1830]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 13 20:18:57.253957 sudo[1830]: pam_unix(sudo:session): session closed for user root
Jan 13 20:18:57.415014 sshd[1829]: Connection closed by 147.75.109.163 port 41112
Jan 13 20:18:57.415876 sshd-session[1827]: pam_unix(sshd:session): session closed for user core
Jan 13 20:18:57.420475 systemd[1]: sshd@4-138.199.153.206:22-147.75.109.163:41112.service: Deactivated successfully.
Jan 13 20:18:57.421986 systemd[1]: session-5.scope: Deactivated successfully.
Jan 13 20:18:57.423331 systemd-logind[1466]: Session 5 logged out. Waiting for processes to exit.
Jan 13 20:18:57.425063 systemd-logind[1466]: Removed session 5.
Jan 13 20:18:57.591671 systemd[1]: Started sshd@5-138.199.153.206:22-147.75.109.163:45602.service - OpenSSH per-connection server daemon (147.75.109.163:45602).
Jan 13 20:18:58.598990 sshd[1835]: Accepted publickey for core from 147.75.109.163 port 45602 ssh2: RSA SHA256:deYc8PRIzQGYRfQ4zGwGGzIklxWaqt3H8qZh9jD9rBo Jan 13 20:18:58.601107 sshd-session[1835]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:18:58.606316 systemd-logind[1466]: New session 6 of user core. Jan 13 20:18:58.612870 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 13 20:18:59.128971 sudo[1839]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 13 20:18:59.129297 sudo[1839]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 13 20:18:59.132716 sudo[1839]: pam_unix(sudo:session): session closed for user root Jan 13 20:18:59.139611 sudo[1838]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jan 13 20:18:59.139919 sudo[1838]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 13 20:18:59.161865 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 13 20:18:59.194933 augenrules[1861]: No rules Jan 13 20:18:59.195956 systemd[1]: audit-rules.service: Deactivated successfully. Jan 13 20:18:59.196332 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 13 20:18:59.198543 sudo[1838]: pam_unix(sudo:session): session closed for user root Jan 13 20:18:59.359913 sshd[1837]: Connection closed by 147.75.109.163 port 45602 Jan 13 20:18:59.360996 sshd-session[1835]: pam_unix(sshd:session): session closed for user core Jan 13 20:18:59.366663 systemd[1]: sshd@5-138.199.153.206:22-147.75.109.163:45602.service: Deactivated successfully. Jan 13 20:18:59.369983 systemd[1]: session-6.scope: Deactivated successfully. Jan 13 20:18:59.372395 systemd-logind[1466]: Session 6 logged out. Waiting for processes to exit. Jan 13 20:18:59.374162 systemd-logind[1466]: Removed session 6. Jan 13 20:18:59.541702 systemd[1]: Started sshd@6-138.199.153.206:22-147.75.109.163:45610.service - OpenSSH per-connection server daemon (147.75.109.163:45610). Jan 13 20:19:00.532597 sshd[1869]: Accepted publickey for core from 147.75.109.163 port 45610 ssh2: RSA SHA256:deYc8PRIzQGYRfQ4zGwGGzIklxWaqt3H8qZh9jD9rBo Jan 13 20:19:00.534614 sshd-session[1869]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:19:00.540940 systemd-logind[1466]: New session 7 of user core. Jan 13 20:19:00.549540 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 13 20:19:01.060149 sudo[1872]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 13 20:19:01.060564 sudo[1872]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 13 20:19:01.373037 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 13 20:19:01.373961 (dockerd)[1890]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 13 20:19:01.615551 dockerd[1890]: time="2025-01-13T20:19:01.615477916Z" level=info msg="Starting up" Jan 13 20:19:01.724516 dockerd[1890]: time="2025-01-13T20:19:01.724084730Z" level=info msg="Loading containers: start." Jan 13 20:19:01.908531 kernel: Initializing XFRM netlink socket Jan 13 20:19:01.998711 systemd-networkd[1386]: docker0: Link UP Jan 13 20:19:02.033935 dockerd[1890]: time="2025-01-13T20:19:02.033882855Z" level=info msg="Loading containers: done." 
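
Note the audit trail in the session above: the core user removes the two shipped rule files (80-selinux.rules, 99-default.rules) and restarts audit-rules.service, after which augenrules correctly reports "No rules", since it compiles whatever *.rules files remain under /etc/audit/rules.d. A trivial sketch of the same check augenrules is effectively making:

    # audit_rules_left.py -- sketch: list the rule files augenrules would compile.
    from pathlib import Path

    rules = sorted(Path("/etc/audit/rules.d").glob("*.rules"))
    print("\n".join(str(p) for p in rules) or "No rules files present")
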
Jan 13 20:19:02.047578 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3761814794-merged.mount: Deactivated successfully. Jan 13 20:19:02.050281 dockerd[1890]: time="2025-01-13T20:19:02.050186341Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 13 20:19:02.050438 dockerd[1890]: time="2025-01-13T20:19:02.050329815Z" level=info msg="Docker daemon" commit=8b539b8df24032dabeaaa099cf1d0535ef0286a3 containerd-snapshotter=false storage-driver=overlay2 version=27.2.1 Jan 13 20:19:02.050486 dockerd[1890]: time="2025-01-13T20:19:02.050463890Z" level=info msg="Daemon has completed initialization" Jan 13 20:19:02.089688 dockerd[1890]: time="2025-01-13T20:19:02.089270261Z" level=info msg="API listen on /run/docker.sock" Jan 13 20:19:02.089371 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 13 20:19:03.227428 containerd[1499]: time="2025-01-13T20:19:03.227385521Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.8\"" Jan 13 20:19:03.909237 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1066823686.mount: Deactivated successfully. Jan 13 20:19:04.007835 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 12. Jan 13 20:19:04.014697 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 20:19:04.152363 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 20:19:04.160586 (kubelet)[2108]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 13 20:19:04.209316 kubelet[2108]: E0113 20:19:04.209241 2108 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 13 20:19:04.212085 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 13 20:19:04.212241 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
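
dockerd's overlay2 warning above is informational: with CONFIG_OVERLAY_FS_REDIRECT_DIR enabled, the kernel lets overlayfs rename directories via redirect entries that the native diff path cannot follow, so Docker disables native diffing and image builds fall back to a slower comparison. Assuming the kernel exposes its build config at /proc/config.gz (not every build does), the option can be confirmed with a sketch like:

    # check_overlay_redirect.py -- sketch: look up one option in the running
    # kernel's config; assumes /proc/config.gz exists on this kernel.
    import gzip

    def kernel_option(name: str, path: str = "/proc/config.gz") -> str:
        with gzip.open(path, "rt") as cfg:
            for line in cfg:
                line = line.strip()
                if line.startswith(name + "=") or line.endswith(name + " is not set"):
                    return line
        return name + " not found"

    print(kernel_option("CONFIG_OVERLAY_FS_REDIRECT_DIR"))
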
Jan 13 20:19:04.806268 containerd[1499]: time="2025-01-13T20:19:04.806186542Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:19:04.807840 containerd[1499]: time="2025-01-13T20:19:04.807776241Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.8: active requests=0, bytes read=29864102" Jan 13 20:19:04.808597 containerd[1499]: time="2025-01-13T20:19:04.808498134Z" level=info msg="ImageCreate event name:\"sha256:8202e87ffef091fe4f11dd113ff6f2ab16c70279775d224ddd8aa95e2dd0b966\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:19:04.811936 containerd[1499]: time="2025-01-13T20:19:04.811860245Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:f0e1b3de0c2e98e6c6abd73edf9d3b8e4d44460656cde0ebb92e2d9206961fcb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:19:04.816232 containerd[1499]: time="2025-01-13T20:19:04.815190118Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.8\" with image id \"sha256:8202e87ffef091fe4f11dd113ff6f2ab16c70279775d224ddd8aa95e2dd0b966\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.8\", repo digest \"registry.k8s.io/kube-apiserver@sha256:f0e1b3de0c2e98e6c6abd73edf9d3b8e4d44460656cde0ebb92e2d9206961fcb\", size \"29860810\" in 1.58775792s" Jan 13 20:19:04.816232 containerd[1499]: time="2025-01-13T20:19:04.815268995Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.8\" returns image reference \"sha256:8202e87ffef091fe4f11dd113ff6f2ab16c70279775d224ddd8aa95e2dd0b966\"" Jan 13 20:19:04.846518 containerd[1499]: time="2025-01-13T20:19:04.846459845Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.8\"" Jan 13 20:19:06.195906 containerd[1499]: time="2025-01-13T20:19:06.195820455Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:19:06.197334 containerd[1499]: time="2025-01-13T20:19:06.197270281Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.8: active requests=0, bytes read=26900714" Jan 13 20:19:06.199430 containerd[1499]: time="2025-01-13T20:19:06.199337083Z" level=info msg="ImageCreate event name:\"sha256:4b2191aa4d4d6ca9fbd7704b35401bfa6b0b90de75db22c425053e97fd5c8338\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:19:06.203220 containerd[1499]: time="2025-01-13T20:19:06.203141300Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:124f66b7e877eb5a80a40503057299bb60e6a5f2130905f4e3293dabf194c397\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:19:06.204948 containerd[1499]: time="2025-01-13T20:19:06.204546888Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.8\" with image id \"sha256:4b2191aa4d4d6ca9fbd7704b35401bfa6b0b90de75db22c425053e97fd5c8338\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.8\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:124f66b7e877eb5a80a40503057299bb60e6a5f2130905f4e3293dabf194c397\", size \"28303015\" in 1.358023086s" Jan 13 20:19:06.204948 containerd[1499]: time="2025-01-13T20:19:06.204589886Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.8\" returns image reference \"sha256:4b2191aa4d4d6ca9fbd7704b35401bfa6b0b90de75db22c425053e97fd5c8338\"" Jan 13 20:19:06.232420 
containerd[1499]: time="2025-01-13T20:19:06.232369683Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.8\"" Jan 13 20:19:07.160435 containerd[1499]: time="2025-01-13T20:19:07.160342399Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.8: active requests=0, bytes read=16164352" Jan 13 20:19:07.162228 containerd[1499]: time="2025-01-13T20:19:07.161801755Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:19:07.165127 containerd[1499]: time="2025-01-13T20:19:07.165073168Z" level=info msg="ImageCreate event name:\"sha256:d43326c1723208785a33cdc1507082792eb041ca0d789c103c90180e31f65ca8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:19:07.166620 containerd[1499]: time="2025-01-13T20:19:07.166574682Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.8\" with image id \"sha256:d43326c1723208785a33cdc1507082792eb041ca0d789c103c90180e31f65ca8\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.8\", repo digest \"registry.k8s.io/kube-scheduler@sha256:c8bdeac2590c99c1a77e33995423ddb6633ff90a82a2aa455442e0a8079ef8c7\", size \"17566671\" in 934.154002ms" Jan 13 20:19:07.166620 containerd[1499]: time="2025-01-13T20:19:07.166615440Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.8\" returns image reference \"sha256:d43326c1723208785a33cdc1507082792eb041ca0d789c103c90180e31f65ca8\"" Jan 13 20:19:07.167694 containerd[1499]: time="2025-01-13T20:19:07.167649221Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:c8bdeac2590c99c1a77e33995423ddb6633ff90a82a2aa455442e0a8079ef8c7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:19:07.191252 containerd[1499]: time="2025-01-13T20:19:07.190939607Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.8\"" Jan 13 20:19:08.131323 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount567192948.mount: Deactivated successfully. 
Jan 13 20:19:08.491872 containerd[1499]: time="2025-01-13T20:19:08.491709592Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:19:08.493455 containerd[1499]: time="2025-01-13T20:19:08.493177509Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.8: active requests=0, bytes read=25662037" Jan 13 20:19:08.494791 containerd[1499]: time="2025-01-13T20:19:08.494741180Z" level=info msg="ImageCreate event name:\"sha256:4612aebc0675831aedbbde7cd56b85db91f1fdcf05ef923072961538ec497adb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:19:08.498249 containerd[1499]: time="2025-01-13T20:19:08.497412389Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:f6d6be9417e22af78905000ac4fd134896bacd2188ea63c7cac8edd7a5d7e9b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:19:08.498417 containerd[1499]: time="2025-01-13T20:19:08.498197705Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.8\" with image id \"sha256:4612aebc0675831aedbbde7cd56b85db91f1fdcf05ef923072961538ec497adb\", repo tag \"registry.k8s.io/kube-proxy:v1.30.8\", repo digest \"registry.k8s.io/kube-proxy@sha256:f6d6be9417e22af78905000ac4fd134896bacd2188ea63c7cac8edd7a5d7e9b5\", size \"25661030\" in 1.30721558s" Jan 13 20:19:08.498512 containerd[1499]: time="2025-01-13T20:19:08.498493648Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.8\" returns image reference \"sha256:4612aebc0675831aedbbde7cd56b85db91f1fdcf05ef923072961538ec497adb\"" Jan 13 20:19:08.527633 containerd[1499]: time="2025-01-13T20:19:08.527596201Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Jan 13 20:19:09.124394 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1666791390.mount: Deactivated successfully. 
Jan 13 20:19:09.775627 containerd[1499]: time="2025-01-13T20:19:09.775557863Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:19:09.777687 containerd[1499]: time="2025-01-13T20:19:09.777174413Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485461" Jan 13 20:19:09.778938 containerd[1499]: time="2025-01-13T20:19:09.778838400Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:19:09.783686 containerd[1499]: time="2025-01-13T20:19:09.783580975Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:19:09.786436 containerd[1499]: time="2025-01-13T20:19:09.786001759Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.258203649s" Jan 13 20:19:09.786436 containerd[1499]: time="2025-01-13T20:19:09.786051557Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\"" Jan 13 20:19:09.808835 containerd[1499]: time="2025-01-13T20:19:09.808749847Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Jan 13 20:19:10.423643 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1055684239.mount: Deactivated successfully. 
Jan 13 20:19:10.432179 containerd[1499]: time="2025-01-13T20:19:10.432076379Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:19:10.434060 containerd[1499]: time="2025-01-13T20:19:10.433957515Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=268841" Jan 13 20:19:10.435322 containerd[1499]: time="2025-01-13T20:19:10.435262323Z" level=info msg="ImageCreate event name:\"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:19:10.437754 containerd[1499]: time="2025-01-13T20:19:10.437695668Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:19:10.438637 containerd[1499]: time="2025-01-13T20:19:10.438408429Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"268051\" in 629.421915ms" Jan 13 20:19:10.438637 containerd[1499]: time="2025-01-13T20:19:10.438442987Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\"" Jan 13 20:19:10.459894 containerd[1499]: time="2025-01-13T20:19:10.459789087Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" Jan 13 20:19:11.048919 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount247020689.mount: Deactivated successfully. Jan 13 20:19:12.481948 containerd[1499]: time="2025-01-13T20:19:12.481841595Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:19:12.484657 containerd[1499]: time="2025-01-13T20:19:12.484580007Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=66191552" Jan 13 20:19:12.486768 containerd[1499]: time="2025-01-13T20:19:12.486658535Z" level=info msg="ImageCreate event name:\"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:19:12.494788 containerd[1499]: time="2025-01-13T20:19:12.494692060Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:19:12.496720 containerd[1499]: time="2025-01-13T20:19:12.496659794Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"66189079\" in 2.036769393s" Jan 13 20:19:12.497144 containerd[1499]: time="2025-01-13T20:19:12.496929339Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\"" Jan 13 20:19:14.257826 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 13. 
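
By this point containerd has pulled the full v1.30.8 control-plane image set (kube-apiserver, kube-controller-manager, kube-scheduler, kube-proxy, coredns, pause, etcd), and each "Pulled image" entry carries its pull duration. Those durations can be tallied straight from a journal dump; the sketch below assumes the text is piped on stdin in exactly the escaped format shown above.

    # pull_times.py -- sketch: summarize containerd "Pulled image ... in <dur>"
    # entries from a journal dump on stdin.
    import re
    import sys

    PULL = re.compile(r'Pulled image \\"([^\\"]+)\\".*?" in ([0-9.]+)(ms|s)"')

    text = sys.stdin.read()
    total = 0.0
    for image, value, unit in PULL.findall(text):
        seconds = float(value) / 1000.0 if unit == "ms" else float(value)
        total += seconds
        print(f"{seconds:8.3f}s  {image}")
    print(f"{total:8.3f}s  total")
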
Jan 13 20:19:14.266484 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 20:19:14.393140 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 20:19:14.403641 (kubelet)[2353]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 13 20:19:14.453931 kubelet[2353]: E0113 20:19:14.453890 2353 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 13 20:19:14.456071 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 13 20:19:14.456195 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 13 20:19:19.323835 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 20:19:19.329620 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 20:19:19.355177 systemd[1]: Reloading requested from client PID 2367 ('systemctl') (unit session-7.scope)... Jan 13 20:19:19.355194 systemd[1]: Reloading... Jan 13 20:19:19.481236 zram_generator::config[2413]: No configuration found. Jan 13 20:19:19.572574 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 13 20:19:19.639962 systemd[1]: Reloading finished in 284 ms. Jan 13 20:19:19.700800 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 13 20:19:19.701107 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 13 20:19:19.701875 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 20:19:19.707763 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 20:19:19.833476 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 20:19:19.833754 (kubelet)[2455]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 13 20:19:19.882417 kubelet[2455]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 13 20:19:19.882417 kubelet[2455]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 13 20:19:19.882417 kubelet[2455]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
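
Two things stand out in this restart cycle: the three kubelet flag deprecation warnings all point at settings that now belong in the config file rather than on the command line, and systemd again flags docker.socket line 6 for listening below the legacy /var/run directory. The latter is harmless (the path is rewritten to /run/docker.sock on the fly) but can be silenced with a drop-in instead of editing the vendor unit; a sketch, assuming the standard drop-in location:

    # /etc/systemd/system/docker.socket.d/10-run-path.conf -- sketch only
    [Socket]
    # An empty assignment clears the inherited list before re-adding the path.
    ListenStream=
    ListenStream=/run/docker.sock
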
Jan 13 20:19:19.882824 kubelet[2455]: I0113 20:19:19.882518 2455 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 13 20:19:20.868068 kubelet[2455]: I0113 20:19:20.867994 2455 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Jan 13 20:19:20.868068 kubelet[2455]: I0113 20:19:20.868047 2455 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 13 20:19:20.868712 kubelet[2455]: I0113 20:19:20.868652 2455 server.go:927] "Client rotation is on, will bootstrap in background" Jan 13 20:19:20.889919 kubelet[2455]: I0113 20:19:20.889862 2455 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 13 20:19:20.891022 kubelet[2455]: E0113 20:19:20.890743 2455 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://138.199.153.206:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 138.199.153.206:6443: connect: connection refused Jan 13 20:19:20.900342 kubelet[2455]: I0113 20:19:20.900315 2455 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 13 20:19:20.902834 kubelet[2455]: I0113 20:19:20.901944 2455 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 13 20:19:20.902834 kubelet[2455]: I0113 20:19:20.901987 2455 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4152-2-0-d-1c931fd560","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 13 20:19:20.902834 kubelet[2455]: I0113 20:19:20.902261 2455 topology_manager.go:138] "Creating topology manager with none policy" Jan 13 20:19:20.902834 kubelet[2455]: I0113 20:19:20.902272 2455 container_manager_linux.go:301] "Creating device plugin manager" Jan 13 20:19:20.903065 kubelet[2455]: I0113 20:19:20.902536 2455 state_mem.go:36] "Initialized new 
in-memory state store" Jan 13 20:19:20.903748 kubelet[2455]: I0113 20:19:20.903729 2455 kubelet.go:400] "Attempting to sync node with API server" Jan 13 20:19:20.903839 kubelet[2455]: I0113 20:19:20.903829 2455 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 13 20:19:20.904187 kubelet[2455]: I0113 20:19:20.904178 2455 kubelet.go:312] "Adding apiserver pod source" Jan 13 20:19:20.904398 kubelet[2455]: I0113 20:19:20.904386 2455 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 13 20:19:20.905812 kubelet[2455]: I0113 20:19:20.905769 2455 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Jan 13 20:19:20.906135 kubelet[2455]: I0113 20:19:20.906120 2455 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 13 20:19:20.906664 kubelet[2455]: W0113 20:19:20.906235 2455 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 13 20:19:20.908394 kubelet[2455]: I0113 20:19:20.908357 2455 server.go:1264] "Started kubelet" Jan 13 20:19:20.908914 kubelet[2455]: W0113 20:19:20.908846 2455 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://138.199.153.206:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 138.199.153.206:6443: connect: connection refused Jan 13 20:19:20.908960 kubelet[2455]: E0113 20:19:20.908938 2455 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://138.199.153.206:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 138.199.153.206:6443: connect: connection refused Jan 13 20:19:20.909139 kubelet[2455]: W0113 20:19:20.909090 2455 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://138.199.153.206:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4152-2-0-d-1c931fd560&limit=500&resourceVersion=0": dial tcp 138.199.153.206:6443: connect: connection refused Jan 13 20:19:20.909176 kubelet[2455]: E0113 20:19:20.909152 2455 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://138.199.153.206:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4152-2-0-d-1c931fd560&limit=500&resourceVersion=0": dial tcp 138.199.153.206:6443: connect: connection refused Jan 13 20:19:20.917512 kubelet[2455]: I0113 20:19:20.917461 2455 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 13 20:19:20.918231 kubelet[2455]: I0113 20:19:20.917787 2455 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 13 20:19:20.918284 kubelet[2455]: I0113 20:19:20.918237 2455 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 13 20:19:20.919051 kubelet[2455]: I0113 20:19:20.919030 2455 server.go:455] "Adding debug handlers to kubelet server" Jan 13 20:19:20.923114 kubelet[2455]: I0113 20:19:20.923092 2455 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 13 20:19:20.925032 kubelet[2455]: E0113 20:19:20.924822 2455 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://138.199.153.206:6443/api/v1/namespaces/default/events\": dial tcp 138.199.153.206:6443: connect: connection refused" 
event="&Event{ObjectMeta:{ci-4152-2-0-d-1c931fd560.181a59ffa9fea850 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4152-2-0-d-1c931fd560,UID:ci-4152-2-0-d-1c931fd560,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4152-2-0-d-1c931fd560,},FirstTimestamp:2025-01-13 20:19:20.9083188 +0000 UTC m=+1.070512037,LastTimestamp:2025-01-13 20:19:20.9083188 +0000 UTC m=+1.070512037,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4152-2-0-d-1c931fd560,}" Jan 13 20:19:20.929274 kubelet[2455]: E0113 20:19:20.929160 2455 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4152-2-0-d-1c931fd560\" not found" Jan 13 20:19:20.929382 kubelet[2455]: I0113 20:19:20.929303 2455 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 13 20:19:20.930522 kubelet[2455]: I0113 20:19:20.929787 2455 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jan 13 20:19:20.930522 kubelet[2455]: I0113 20:19:20.929860 2455 reconciler.go:26] "Reconciler: start to sync state" Jan 13 20:19:20.930522 kubelet[2455]: W0113 20:19:20.930301 2455 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://138.199.153.206:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 138.199.153.206:6443: connect: connection refused Jan 13 20:19:20.930522 kubelet[2455]: E0113 20:19:20.930349 2455 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://138.199.153.206:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 138.199.153.206:6443: connect: connection refused Jan 13 20:19:20.931542 kubelet[2455]: E0113 20:19:20.931182 2455 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://138.199.153.206:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4152-2-0-d-1c931fd560?timeout=10s\": dial tcp 138.199.153.206:6443: connect: connection refused" interval="200ms" Jan 13 20:19:20.931920 kubelet[2455]: I0113 20:19:20.931896 2455 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 13 20:19:20.932338 kubelet[2455]: E0113 20:19:20.932311 2455 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 13 20:19:20.933460 kubelet[2455]: I0113 20:19:20.933437 2455 factory.go:221] Registration of the containerd container factory successfully Jan 13 20:19:20.933561 kubelet[2455]: I0113 20:19:20.933551 2455 factory.go:221] Registration of the systemd container factory successfully Jan 13 20:19:20.942485 kubelet[2455]: I0113 20:19:20.942390 2455 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 13 20:19:20.944708 kubelet[2455]: I0113 20:19:20.944640 2455 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 13 20:19:20.944819 kubelet[2455]: I0113 20:19:20.944811 2455 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 13 20:19:20.944847 kubelet[2455]: I0113 20:19:20.944833 2455 kubelet.go:2337] "Starting kubelet main sync loop" Jan 13 20:19:20.944940 kubelet[2455]: E0113 20:19:20.944879 2455 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 13 20:19:20.952717 kubelet[2455]: W0113 20:19:20.952661 2455 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://138.199.153.206:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 138.199.153.206:6443: connect: connection refused Jan 13 20:19:20.952898 kubelet[2455]: E0113 20:19:20.952724 2455 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://138.199.153.206:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 138.199.153.206:6443: connect: connection refused Jan 13 20:19:20.960223 kubelet[2455]: I0113 20:19:20.959934 2455 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 13 20:19:20.960223 kubelet[2455]: I0113 20:19:20.960058 2455 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 13 20:19:20.960223 kubelet[2455]: I0113 20:19:20.960103 2455 state_mem.go:36] "Initialized new in-memory state store" Jan 13 20:19:20.962554 kubelet[2455]: I0113 20:19:20.962528 2455 policy_none.go:49] "None policy: Start" Jan 13 20:19:20.963353 kubelet[2455]: I0113 20:19:20.963336 2455 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 13 20:19:20.963453 kubelet[2455]: I0113 20:19:20.963443 2455 state_mem.go:35] "Initializing new in-memory state store" Jan 13 20:19:20.969676 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 13 20:19:20.984179 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 13 20:19:20.989647 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Jan 13 20:19:20.998244 kubelet[2455]: I0113 20:19:20.997977 2455 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 13 20:19:20.998377 kubelet[2455]: I0113 20:19:20.998333 2455 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 13 20:19:20.998632 kubelet[2455]: I0113 20:19:20.998491 2455 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 13 20:19:21.001929 kubelet[2455]: E0113 20:19:21.001400 2455 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4152-2-0-d-1c931fd560\" not found" Jan 13 20:19:21.031821 kubelet[2455]: I0113 20:19:21.031734 2455 kubelet_node_status.go:73] "Attempting to register node" node="ci-4152-2-0-d-1c931fd560" Jan 13 20:19:21.032325 kubelet[2455]: E0113 20:19:21.032278 2455 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://138.199.153.206:6443/api/v1/nodes\": dial tcp 138.199.153.206:6443: connect: connection refused" node="ci-4152-2-0-d-1c931fd560" Jan 13 20:19:21.045768 kubelet[2455]: I0113 20:19:21.045544 2455 topology_manager.go:215] "Topology Admit Handler" podUID="413a03ee97e26adad7b8faf538703b1b" podNamespace="kube-system" podName="kube-scheduler-ci-4152-2-0-d-1c931fd560" Jan 13 20:19:21.049279 kubelet[2455]: I0113 20:19:21.048639 2455 topology_manager.go:215] "Topology Admit Handler" podUID="d02b0561f42e90f889301965039b715b" podNamespace="kube-system" podName="kube-apiserver-ci-4152-2-0-d-1c931fd560" Jan 13 20:19:21.052496 kubelet[2455]: I0113 20:19:21.052301 2455 topology_manager.go:215] "Topology Admit Handler" podUID="90b634a0f801ba0ae39c4b7d620a6a7d" podNamespace="kube-system" podName="kube-controller-manager-ci-4152-2-0-d-1c931fd560" Jan 13 20:19:21.059308 systemd[1]: Created slice kubepods-burstable-pod413a03ee97e26adad7b8faf538703b1b.slice - libcontainer container kubepods-burstable-pod413a03ee97e26adad7b8faf538703b1b.slice. Jan 13 20:19:21.087156 systemd[1]: Created slice kubepods-burstable-podd02b0561f42e90f889301965039b715b.slice - libcontainer container kubepods-burstable-podd02b0561f42e90f889301965039b715b.slice. Jan 13 20:19:21.102190 systemd[1]: Created slice kubepods-burstable-pod90b634a0f801ba0ae39c4b7d620a6a7d.slice - libcontainer container kubepods-burstable-pod90b634a0f801ba0ae39c4b7d620a6a7d.slice. 
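
The three "Topology Admit Handler" entries and the matching kubepods-burstable-pod*.slice cgroups are the kubelet admitting its static control-plane pods, read from /etc/kubernetes/manifests (the "Adding static pod path" source registered earlier). Each manifest is an ordinary Pod object on disk; a heavily trimmed sketch of the general shape, with placeholder values rather than this node's real manifest:

    # /etc/kubernetes/manifests/kube-apiserver.yaml -- trimmed sketch only;
    # kubeadm writes the real manifest with the full flag set.
    apiVersion: v1
    kind: Pod
    metadata:
      name: kube-apiserver
      namespace: kube-system
    spec:
      hostNetwork: true
      containers:
      - name: kube-apiserver
        image: registry.k8s.io/kube-apiserver:v1.30.8
        command:
        - kube-apiserver
        - --advertise-address=138.199.153.206   # address from this log; many more flags in practice
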
Jan 13 20:19:21.134621 kubelet[2455]: E0113 20:19:21.133091 2455 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://138.199.153.206:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4152-2-0-d-1c931fd560?timeout=10s\": dial tcp 138.199.153.206:6443: connect: connection refused" interval="400ms" Jan 13 20:19:21.230682 kubelet[2455]: I0113 20:19:21.230598 2455 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d02b0561f42e90f889301965039b715b-ca-certs\") pod \"kube-apiserver-ci-4152-2-0-d-1c931fd560\" (UID: \"d02b0561f42e90f889301965039b715b\") " pod="kube-system/kube-apiserver-ci-4152-2-0-d-1c931fd560" Jan 13 20:19:21.230682 kubelet[2455]: I0113 20:19:21.230673 2455 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d02b0561f42e90f889301965039b715b-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4152-2-0-d-1c931fd560\" (UID: \"d02b0561f42e90f889301965039b715b\") " pod="kube-system/kube-apiserver-ci-4152-2-0-d-1c931fd560" Jan 13 20:19:21.230918 kubelet[2455]: I0113 20:19:21.230712 2455 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/90b634a0f801ba0ae39c4b7d620a6a7d-ca-certs\") pod \"kube-controller-manager-ci-4152-2-0-d-1c931fd560\" (UID: \"90b634a0f801ba0ae39c4b7d620a6a7d\") " pod="kube-system/kube-controller-manager-ci-4152-2-0-d-1c931fd560" Jan 13 20:19:21.230918 kubelet[2455]: I0113 20:19:21.230747 2455 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/90b634a0f801ba0ae39c4b7d620a6a7d-k8s-certs\") pod \"kube-controller-manager-ci-4152-2-0-d-1c931fd560\" (UID: \"90b634a0f801ba0ae39c4b7d620a6a7d\") " pod="kube-system/kube-controller-manager-ci-4152-2-0-d-1c931fd560" Jan 13 20:19:21.230918 kubelet[2455]: I0113 20:19:21.230780 2455 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/90b634a0f801ba0ae39c4b7d620a6a7d-kubeconfig\") pod \"kube-controller-manager-ci-4152-2-0-d-1c931fd560\" (UID: \"90b634a0f801ba0ae39c4b7d620a6a7d\") " pod="kube-system/kube-controller-manager-ci-4152-2-0-d-1c931fd560" Jan 13 20:19:21.230918 kubelet[2455]: I0113 20:19:21.230811 2455 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/90b634a0f801ba0ae39c4b7d620a6a7d-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4152-2-0-d-1c931fd560\" (UID: \"90b634a0f801ba0ae39c4b7d620a6a7d\") " pod="kube-system/kube-controller-manager-ci-4152-2-0-d-1c931fd560" Jan 13 20:19:21.230918 kubelet[2455]: I0113 20:19:21.230842 2455 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/413a03ee97e26adad7b8faf538703b1b-kubeconfig\") pod \"kube-scheduler-ci-4152-2-0-d-1c931fd560\" (UID: \"413a03ee97e26adad7b8faf538703b1b\") " pod="kube-system/kube-scheduler-ci-4152-2-0-d-1c931fd560" Jan 13 20:19:21.231266 kubelet[2455]: I0113 20:19:21.230914 2455 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/d02b0561f42e90f889301965039b715b-k8s-certs\") pod \"kube-apiserver-ci-4152-2-0-d-1c931fd560\" (UID: \"d02b0561f42e90f889301965039b715b\") " pod="kube-system/kube-apiserver-ci-4152-2-0-d-1c931fd560" Jan 13 20:19:21.231266 kubelet[2455]: I0113 20:19:21.230945 2455 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/90b634a0f801ba0ae39c4b7d620a6a7d-flexvolume-dir\") pod \"kube-controller-manager-ci-4152-2-0-d-1c931fd560\" (UID: \"90b634a0f801ba0ae39c4b7d620a6a7d\") " pod="kube-system/kube-controller-manager-ci-4152-2-0-d-1c931fd560" Jan 13 20:19:21.235195 kubelet[2455]: I0113 20:19:21.235160 2455 kubelet_node_status.go:73] "Attempting to register node" node="ci-4152-2-0-d-1c931fd560" Jan 13 20:19:21.235919 kubelet[2455]: E0113 20:19:21.235852 2455 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://138.199.153.206:6443/api/v1/nodes\": dial tcp 138.199.153.206:6443: connect: connection refused" node="ci-4152-2-0-d-1c931fd560" Jan 13 20:19:21.383366 containerd[1499]: time="2025-01-13T20:19:21.383302678Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4152-2-0-d-1c931fd560,Uid:413a03ee97e26adad7b8faf538703b1b,Namespace:kube-system,Attempt:0,}" Jan 13 20:19:21.399759 containerd[1499]: time="2025-01-13T20:19:21.399068217Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4152-2-0-d-1c931fd560,Uid:d02b0561f42e90f889301965039b715b,Namespace:kube-system,Attempt:0,}" Jan 13 20:19:21.405918 containerd[1499]: time="2025-01-13T20:19:21.405846161Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4152-2-0-d-1c931fd560,Uid:90b634a0f801ba0ae39c4b7d620a6a7d,Namespace:kube-system,Attempt:0,}" Jan 13 20:19:21.534334 kubelet[2455]: E0113 20:19:21.534257 2455 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://138.199.153.206:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4152-2-0-d-1c931fd560?timeout=10s\": dial tcp 138.199.153.206:6443: connect: connection refused" interval="800ms" Jan 13 20:19:21.639170 kubelet[2455]: I0113 20:19:21.638783 2455 kubelet_node_status.go:73] "Attempting to register node" node="ci-4152-2-0-d-1c931fd560" Jan 13 20:19:21.639170 kubelet[2455]: E0113 20:19:21.639133 2455 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://138.199.153.206:6443/api/v1/nodes\": dial tcp 138.199.153.206:6443: connect: connection refused" node="ci-4152-2-0-d-1c931fd560" Jan 13 20:19:21.739644 kubelet[2455]: W0113 20:19:21.739429 2455 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://138.199.153.206:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4152-2-0-d-1c931fd560&limit=500&resourceVersion=0": dial tcp 138.199.153.206:6443: connect: connection refused Jan 13 20:19:21.739644 kubelet[2455]: E0113 20:19:21.739525 2455 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://138.199.153.206:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4152-2-0-d-1c931fd560&limit=500&resourceVersion=0": dial tcp 138.199.153.206:6443: connect: connection refused Jan 13 20:19:21.919684 kubelet[2455]: W0113 20:19:21.919475 2455 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get 
"https://138.199.153.206:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 138.199.153.206:6443: connect: connection refused Jan 13 20:19:21.919684 kubelet[2455]: E0113 20:19:21.919570 2455 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://138.199.153.206:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 138.199.153.206:6443: connect: connection refused Jan 13 20:19:21.924300 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3514872919.mount: Deactivated successfully. Jan 13 20:19:21.929567 containerd[1499]: time="2025-01-13T20:19:21.929189725Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 20:19:21.931138 containerd[1499]: time="2025-01-13T20:19:21.931089791Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269193" Jan 13 20:19:21.934424 containerd[1499]: time="2025-01-13T20:19:21.934310591Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 20:19:21.936735 containerd[1499]: time="2025-01-13T20:19:21.936687754Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 20:19:21.937684 containerd[1499]: time="2025-01-13T20:19:21.937618788Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 13 20:19:21.940439 containerd[1499]: time="2025-01-13T20:19:21.940384091Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 20:19:21.943117 containerd[1499]: time="2025-01-13T20:19:21.942319955Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 558.916563ms" Jan 13 20:19:21.943117 containerd[1499]: time="2025-01-13T20:19:21.942583382Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 13 20:19:21.943117 containerd[1499]: time="2025-01-13T20:19:21.943092557Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 20:19:21.952255 containerd[1499]: time="2025-01-13T20:19:21.951430304Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 552.232013ms" Jan 13 20:19:21.955471 containerd[1499]: time="2025-01-13T20:19:21.955179318Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id 
\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 549.194803ms" Jan 13 20:19:21.969376 kubelet[2455]: W0113 20:19:21.969308 2455 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://138.199.153.206:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 138.199.153.206:6443: connect: connection refused Jan 13 20:19:21.969376 kubelet[2455]: E0113 20:19:21.969355 2455 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://138.199.153.206:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 138.199.153.206:6443: connect: connection refused Jan 13 20:19:22.053394 containerd[1499]: time="2025-01-13T20:19:22.053144809Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:19:22.053394 containerd[1499]: time="2025-01-13T20:19:22.053318081Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:19:22.053569 containerd[1499]: time="2025-01-13T20:19:22.053331960Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:19:22.053798 containerd[1499]: time="2025-01-13T20:19:22.053729420Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:19:22.053860 containerd[1499]: time="2025-01-13T20:19:22.053836815Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:19:22.053913 containerd[1499]: time="2025-01-13T20:19:22.053867694Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:19:22.054139 containerd[1499]: time="2025-01-13T20:19:22.054092963Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:19:22.054297 containerd[1499]: time="2025-01-13T20:19:22.054219676Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:19:22.057772 containerd[1499]: time="2025-01-13T20:19:22.057664427Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:19:22.059976 containerd[1499]: time="2025-01-13T20:19:22.059882398Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:19:22.060169 containerd[1499]: time="2025-01-13T20:19:22.060045590Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:19:22.061238 containerd[1499]: time="2025-01-13T20:19:22.060326977Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:19:22.083392 systemd[1]: Started cri-containerd-00e9b91a20f29abd7ccd012870814e7402962728838783539497a4db4077d21f.scope - libcontainer container 00e9b91a20f29abd7ccd012870814e7402962728838783539497a4db4077d21f. Jan 13 20:19:22.089282 systemd[1]: Started cri-containerd-8c5bdca1c9a0bb3c58018073cbae6785c17a44ae15e612895dd4aca7e46f5cbe.scope - libcontainer container 8c5bdca1c9a0bb3c58018073cbae6785c17a44ae15e612895dd4aca7e46f5cbe. Jan 13 20:19:22.091037 systemd[1]: Started cri-containerd-8f86180f1d0a2fcbd0e6752a013d440b135094c7e335d058425f05732e198cdc.scope - libcontainer container 8f86180f1d0a2fcbd0e6752a013d440b135094c7e335d058425f05732e198cdc. Jan 13 20:19:22.148454 containerd[1499]: time="2025-01-13T20:19:22.148399693Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4152-2-0-d-1c931fd560,Uid:d02b0561f42e90f889301965039b715b,Namespace:kube-system,Attempt:0,} returns sandbox id \"8f86180f1d0a2fcbd0e6752a013d440b135094c7e335d058425f05732e198cdc\"" Jan 13 20:19:22.155926 containerd[1499]: time="2025-01-13T20:19:22.155851767Z" level=info msg="CreateContainer within sandbox \"8f86180f1d0a2fcbd0e6752a013d440b135094c7e335d058425f05732e198cdc\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 13 20:19:22.156167 containerd[1499]: time="2025-01-13T20:19:22.156137473Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4152-2-0-d-1c931fd560,Uid:90b634a0f801ba0ae39c4b7d620a6a7d,Namespace:kube-system,Attempt:0,} returns sandbox id \"00e9b91a20f29abd7ccd012870814e7402962728838783539497a4db4077d21f\"" Jan 13 20:19:22.160571 containerd[1499]: time="2025-01-13T20:19:22.160534018Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4152-2-0-d-1c931fd560,Uid:413a03ee97e26adad7b8faf538703b1b,Namespace:kube-system,Attempt:0,} returns sandbox id \"8c5bdca1c9a0bb3c58018073cbae6785c17a44ae15e612895dd4aca7e46f5cbe\"" Jan 13 20:19:22.161099 containerd[1499]: time="2025-01-13T20:19:22.160915879Z" level=info msg="CreateContainer within sandbox \"00e9b91a20f29abd7ccd012870814e7402962728838783539497a4db4077d21f\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 13 20:19:22.165631 containerd[1499]: time="2025-01-13T20:19:22.165425777Z" level=info msg="CreateContainer within sandbox \"8c5bdca1c9a0bb3c58018073cbae6785c17a44ae15e612895dd4aca7e46f5cbe\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 13 20:19:22.181901 containerd[1499]: time="2025-01-13T20:19:22.181583224Z" level=info msg="CreateContainer within sandbox \"8f86180f1d0a2fcbd0e6752a013d440b135094c7e335d058425f05732e198cdc\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"3e3a0dd36b8c8e1c1c8ff69b2baa6fd9295eea447fdea07afa1bac5b118c5141\"" Jan 13 20:19:22.182672 containerd[1499]: time="2025-01-13T20:19:22.182643452Z" level=info msg="StartContainer for \"3e3a0dd36b8c8e1c1c8ff69b2baa6fd9295eea447fdea07afa1bac5b118c5141\"" Jan 13 20:19:22.188687 containerd[1499]: time="2025-01-13T20:19:22.188642918Z" level=info msg="CreateContainer within sandbox \"00e9b91a20f29abd7ccd012870814e7402962728838783539497a4db4077d21f\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"7cb2ee9c664e66346b5d64a3af38b40af21af57f6a93ace34d3060e99eca9d7c\"" Jan 13 20:19:22.189471 containerd[1499]: time="2025-01-13T20:19:22.189364602Z" level=info msg="StartContainer for 
\"7cb2ee9c664e66346b5d64a3af38b40af21af57f6a93ace34d3060e99eca9d7c\"" Jan 13 20:19:22.195012 containerd[1499]: time="2025-01-13T20:19:22.194903250Z" level=info msg="CreateContainer within sandbox \"8c5bdca1c9a0bb3c58018073cbae6785c17a44ae15e612895dd4aca7e46f5cbe\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"6f613f78c9be6c3eb956ef16b63b522a236651cfd88bc4ab84203432ac9ddd0e\"" Jan 13 20:19:22.195854 containerd[1499]: time="2025-01-13T20:19:22.195827685Z" level=info msg="StartContainer for \"6f613f78c9be6c3eb956ef16b63b522a236651cfd88bc4ab84203432ac9ddd0e\"" Jan 13 20:19:22.222856 systemd[1]: Started cri-containerd-3e3a0dd36b8c8e1c1c8ff69b2baa6fd9295eea447fdea07afa1bac5b118c5141.scope - libcontainer container 3e3a0dd36b8c8e1c1c8ff69b2baa6fd9295eea447fdea07afa1bac5b118c5141. Jan 13 20:19:22.234429 systemd[1]: Started cri-containerd-6f613f78c9be6c3eb956ef16b63b522a236651cfd88bc4ab84203432ac9ddd0e.scope - libcontainer container 6f613f78c9be6c3eb956ef16b63b522a236651cfd88bc4ab84203432ac9ddd0e. Jan 13 20:19:22.239489 systemd[1]: Started cri-containerd-7cb2ee9c664e66346b5d64a3af38b40af21af57f6a93ace34d3060e99eca9d7c.scope - libcontainer container 7cb2ee9c664e66346b5d64a3af38b40af21af57f6a93ace34d3060e99eca9d7c. Jan 13 20:19:22.292738 containerd[1499]: time="2025-01-13T20:19:22.292661892Z" level=info msg="StartContainer for \"7cb2ee9c664e66346b5d64a3af38b40af21af57f6a93ace34d3060e99eca9d7c\" returns successfully" Jan 13 20:19:22.292858 containerd[1499]: time="2025-01-13T20:19:22.292804285Z" level=info msg="StartContainer for \"3e3a0dd36b8c8e1c1c8ff69b2baa6fd9295eea447fdea07afa1bac5b118c5141\" returns successfully" Jan 13 20:19:22.318606 containerd[1499]: time="2025-01-13T20:19:22.318403868Z" level=info msg="StartContainer for \"6f613f78c9be6c3eb956ef16b63b522a236651cfd88bc4ab84203432ac9ddd0e\" returns successfully" Jan 13 20:19:22.334267 kubelet[2455]: W0113 20:19:22.334180 2455 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://138.199.153.206:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 138.199.153.206:6443: connect: connection refused Jan 13 20:19:22.334629 kubelet[2455]: E0113 20:19:22.334437 2455 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://138.199.153.206:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 138.199.153.206:6443: connect: connection refused Jan 13 20:19:22.334753 kubelet[2455]: E0113 20:19:22.334706 2455 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://138.199.153.206:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4152-2-0-d-1c931fd560?timeout=10s\": dial tcp 138.199.153.206:6443: connect: connection refused" interval="1.6s" Jan 13 20:19:22.441837 kubelet[2455]: I0113 20:19:22.441804 2455 kubelet_node_status.go:73] "Attempting to register node" node="ci-4152-2-0-d-1c931fd560" Jan 13 20:19:22.442137 kubelet[2455]: E0113 20:19:22.442103 2455 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://138.199.153.206:6443/api/v1/nodes\": dial tcp 138.199.153.206:6443: connect: connection refused" node="ci-4152-2-0-d-1c931fd560" Jan 13 20:19:24.044542 kubelet[2455]: I0113 20:19:24.044506 2455 kubelet_node_status.go:73] "Attempting to register node" node="ci-4152-2-0-d-1c931fd560" Jan 13 20:19:24.765447 kubelet[2455]: E0113 20:19:24.765403 2455 
nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4152-2-0-d-1c931fd560\" not found" node="ci-4152-2-0-d-1c931fd560" Jan 13 20:19:24.882702 kubelet[2455]: I0113 20:19:24.882647 2455 kubelet_node_status.go:76] "Successfully registered node" node="ci-4152-2-0-d-1c931fd560" Jan 13 20:19:24.907977 kubelet[2455]: I0113 20:19:24.906942 2455 apiserver.go:52] "Watching apiserver" Jan 13 20:19:24.930318 kubelet[2455]: I0113 20:19:24.930233 2455 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jan 13 20:19:26.791035 systemd[1]: Reloading requested from client PID 2727 ('systemctl') (unit session-7.scope)... Jan 13 20:19:26.791053 systemd[1]: Reloading... Jan 13 20:19:26.885264 zram_generator::config[2764]: No configuration found. Jan 13 20:19:27.003531 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 13 20:19:27.084582 systemd[1]: Reloading finished in 293 ms. Jan 13 20:19:27.124983 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 20:19:27.142713 systemd[1]: kubelet.service: Deactivated successfully. Jan 13 20:19:27.143767 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 20:19:27.143834 systemd[1]: kubelet.service: Consumed 1.502s CPU time, 113.5M memory peak, 0B memory swap peak. Jan 13 20:19:27.150688 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 20:19:27.273558 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 20:19:27.277600 (kubelet)[2812]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 13 20:19:27.332580 kubelet[2812]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 13 20:19:27.332971 kubelet[2812]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 13 20:19:27.333027 kubelet[2812]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 13 20:19:27.333185 kubelet[2812]: I0113 20:19:27.333153 2812 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 13 20:19:27.338675 kubelet[2812]: I0113 20:19:27.337828 2812 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Jan 13 20:19:27.338675 kubelet[2812]: I0113 20:19:27.338390 2812 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 13 20:19:27.338913 kubelet[2812]: I0113 20:19:27.338891 2812 server.go:927] "Client rotation is on, will bootstrap in background" Jan 13 20:19:27.340615 kubelet[2812]: I0113 20:19:27.340591 2812 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
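The three kubelet deprecation warnings above all point at the same migration: --container-runtime-endpoint, --pod-infra-container-image and --volume-plugin-dir are command-line flags that current kubelets expect in the KubeletConfiguration file instead. On a systemd-managed node like this one, the wiring is quick to audit from a shell (unit name as it appears in this log; the grep filter is only an illustration):

    # Show the kubelet unit plus any drop-ins that still pass the deprecated flags
    systemctl cat kubelet.service

    # Replay only the kubelet's deprecation warnings from the current boot
    journalctl -b -u kubelet.service --no-pager | grep -i deprecated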
Jan 13 20:19:27.342830 kubelet[2812]: I0113 20:19:27.342718 2812 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 13 20:19:27.352607 kubelet[2812]: I0113 20:19:27.352347 2812 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 13 20:19:27.352758 kubelet[2812]: I0113 20:19:27.352557 2812 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 13 20:19:27.352982 kubelet[2812]: I0113 20:19:27.352804 2812 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4152-2-0-d-1c931fd560","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 13 20:19:27.353108 kubelet[2812]: I0113 20:19:27.353094 2812 topology_manager.go:138] "Creating topology manager with none policy" Jan 13 20:19:27.353155 kubelet[2812]: I0113 20:19:27.353148 2812 container_manager_linux.go:301] "Creating device plugin manager" Jan 13 20:19:27.353308 kubelet[2812]: I0113 20:19:27.353292 2812 state_mem.go:36] "Initialized new in-memory state store" Jan 13 20:19:27.354025 kubelet[2812]: I0113 20:19:27.353496 2812 kubelet.go:400] "Attempting to sync node with API server" Jan 13 20:19:27.354025 kubelet[2812]: I0113 20:19:27.353518 2812 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 13 20:19:27.354025 kubelet[2812]: I0113 20:19:27.353550 2812 kubelet.go:312] "Adding apiserver pod source" Jan 13 20:19:27.354025 kubelet[2812]: I0113 20:19:27.353567 2812 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 13 20:19:27.354562 kubelet[2812]: I0113 20:19:27.354538 2812 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Jan 13 20:19:27.354798 kubelet[2812]: I0113 20:19:27.354772 2812 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 13 20:19:27.355355 kubelet[2812]: I0113 20:19:27.355327 2812 server.go:1264] "Started kubelet" Jan 13 20:19:27.358288 
kubelet[2812]: I0113 20:19:27.358151 2812 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 13 20:19:27.367552 kubelet[2812]: I0113 20:19:27.367487 2812 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 13 20:19:27.368195 kubelet[2812]: I0113 20:19:27.368170 2812 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 13 20:19:27.373235 kubelet[2812]: I0113 20:19:27.371665 2812 server.go:455] "Adding debug handlers to kubelet server" Jan 13 20:19:27.373662 kubelet[2812]: I0113 20:19:27.373597 2812 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 13 20:19:27.373845 kubelet[2812]: I0113 20:19:27.373825 2812 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jan 13 20:19:27.373963 kubelet[2812]: I0113 20:19:27.373949 2812 reconciler.go:26] "Reconciler: start to sync state" Jan 13 20:19:27.376568 kubelet[2812]: I0113 20:19:27.374175 2812 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 13 20:19:27.378595 kubelet[2812]: I0113 20:19:27.378559 2812 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 13 20:19:27.382256 kubelet[2812]: I0113 20:19:27.381003 2812 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 13 20:19:27.382256 kubelet[2812]: I0113 20:19:27.381040 2812 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 13 20:19:27.382256 kubelet[2812]: I0113 20:19:27.381054 2812 kubelet.go:2337] "Starting kubelet main sync loop" Jan 13 20:19:27.382256 kubelet[2812]: E0113 20:19:27.381096 2812 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 13 20:19:27.409442 kubelet[2812]: I0113 20:19:27.408820 2812 factory.go:221] Registration of the containerd container factory successfully Jan 13 20:19:27.409442 kubelet[2812]: I0113 20:19:27.408849 2812 factory.go:221] Registration of the systemd container factory successfully Jan 13 20:19:27.409442 kubelet[2812]: I0113 20:19:27.408936 2812 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 13 20:19:27.415247 kubelet[2812]: E0113 20:19:27.413186 2812 kubelet.go:1467] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 13 20:19:27.462064 kubelet[2812]: I0113 20:19:27.462032 2812 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 13 20:19:27.462064 kubelet[2812]: I0113 20:19:27.462058 2812 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 13 20:19:27.462286 kubelet[2812]: I0113 20:19:27.462088 2812 state_mem.go:36] "Initialized new in-memory state store" Jan 13 20:19:27.462384 kubelet[2812]: I0113 20:19:27.462359 2812 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 13 20:19:27.462421 kubelet[2812]: I0113 20:19:27.462385 2812 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 13 20:19:27.462421 kubelet[2812]: I0113 20:19:27.462415 2812 policy_none.go:49] "None policy: Start" Jan 13 20:19:27.463629 kubelet[2812]: I0113 20:19:27.463590 2812 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 13 20:19:27.464289 kubelet[2812]: I0113 20:19:27.463756 2812 state_mem.go:35] "Initializing new in-memory state store" Jan 13 20:19:27.464289 kubelet[2812]: I0113 20:19:27.463914 2812 state_mem.go:75] "Updated machine memory state" Jan 13 20:19:27.469147 kubelet[2812]: I0113 20:19:27.468637 2812 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 13 20:19:27.469147 kubelet[2812]: I0113 20:19:27.468823 2812 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 13 20:19:27.469147 kubelet[2812]: I0113 20:19:27.468953 2812 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 13 20:19:27.476931 kubelet[2812]: I0113 20:19:27.476905 2812 kubelet_node_status.go:73] "Attempting to register node" node="ci-4152-2-0-d-1c931fd560" Jan 13 20:19:27.482959 kubelet[2812]: I0113 20:19:27.482326 2812 topology_manager.go:215] "Topology Admit Handler" podUID="d02b0561f42e90f889301965039b715b" podNamespace="kube-system" podName="kube-apiserver-ci-4152-2-0-d-1c931fd560" Jan 13 20:19:27.482959 kubelet[2812]: I0113 20:19:27.482439 2812 topology_manager.go:215] "Topology Admit Handler" podUID="90b634a0f801ba0ae39c4b7d620a6a7d" podNamespace="kube-system" podName="kube-controller-manager-ci-4152-2-0-d-1c931fd560" Jan 13 20:19:27.482959 kubelet[2812]: I0113 20:19:27.482474 2812 topology_manager.go:215] "Topology Admit Handler" podUID="413a03ee97e26adad7b8faf538703b1b" podNamespace="kube-system" podName="kube-scheduler-ci-4152-2-0-d-1c931fd560" Jan 13 20:19:27.506510 kubelet[2812]: I0113 20:19:27.506465 2812 kubelet_node_status.go:112] "Node was previously registered" node="ci-4152-2-0-d-1c931fd560" Jan 13 20:19:27.506664 kubelet[2812]: I0113 20:19:27.506567 2812 kubelet_node_status.go:76] "Successfully registered node" node="ci-4152-2-0-d-1c931fd560" Jan 13 20:19:27.576997 kubelet[2812]: I0113 20:19:27.576906 2812 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/90b634a0f801ba0ae39c4b7d620a6a7d-kubeconfig\") pod \"kube-controller-manager-ci-4152-2-0-d-1c931fd560\" (UID: \"90b634a0f801ba0ae39c4b7d620a6a7d\") " pod="kube-system/kube-controller-manager-ci-4152-2-0-d-1c931fd560" Jan 13 20:19:27.577323 kubelet[2812]: I0113 20:19:27.577303 2812 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/90b634a0f801ba0ae39c4b7d620a6a7d-usr-share-ca-certificates\") pod 
\"kube-controller-manager-ci-4152-2-0-d-1c931fd560\" (UID: \"90b634a0f801ba0ae39c4b7d620a6a7d\") " pod="kube-system/kube-controller-manager-ci-4152-2-0-d-1c931fd560" Jan 13 20:19:27.577612 kubelet[2812]: I0113 20:19:27.577580 2812 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/413a03ee97e26adad7b8faf538703b1b-kubeconfig\") pod \"kube-scheduler-ci-4152-2-0-d-1c931fd560\" (UID: \"413a03ee97e26adad7b8faf538703b1b\") " pod="kube-system/kube-scheduler-ci-4152-2-0-d-1c931fd560" Jan 13 20:19:27.577748 kubelet[2812]: I0113 20:19:27.577694 2812 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d02b0561f42e90f889301965039b715b-ca-certs\") pod \"kube-apiserver-ci-4152-2-0-d-1c931fd560\" (UID: \"d02b0561f42e90f889301965039b715b\") " pod="kube-system/kube-apiserver-ci-4152-2-0-d-1c931fd560" Jan 13 20:19:27.577748 kubelet[2812]: I0113 20:19:27.577716 2812 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d02b0561f42e90f889301965039b715b-k8s-certs\") pod \"kube-apiserver-ci-4152-2-0-d-1c931fd560\" (UID: \"d02b0561f42e90f889301965039b715b\") " pod="kube-system/kube-apiserver-ci-4152-2-0-d-1c931fd560" Jan 13 20:19:27.577893 kubelet[2812]: I0113 20:19:27.577852 2812 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/90b634a0f801ba0ae39c4b7d620a6a7d-k8s-certs\") pod \"kube-controller-manager-ci-4152-2-0-d-1c931fd560\" (UID: \"90b634a0f801ba0ae39c4b7d620a6a7d\") " pod="kube-system/kube-controller-manager-ci-4152-2-0-d-1c931fd560" Jan 13 20:19:27.578016 kubelet[2812]: I0113 20:19:27.577876 2812 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d02b0561f42e90f889301965039b715b-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4152-2-0-d-1c931fd560\" (UID: \"d02b0561f42e90f889301965039b715b\") " pod="kube-system/kube-apiserver-ci-4152-2-0-d-1c931fd560" Jan 13 20:19:27.578016 kubelet[2812]: I0113 20:19:27.577985 2812 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/90b634a0f801ba0ae39c4b7d620a6a7d-ca-certs\") pod \"kube-controller-manager-ci-4152-2-0-d-1c931fd560\" (UID: \"90b634a0f801ba0ae39c4b7d620a6a7d\") " pod="kube-system/kube-controller-manager-ci-4152-2-0-d-1c931fd560" Jan 13 20:19:27.578119 kubelet[2812]: I0113 20:19:27.578004 2812 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/90b634a0f801ba0ae39c4b7d620a6a7d-flexvolume-dir\") pod \"kube-controller-manager-ci-4152-2-0-d-1c931fd560\" (UID: \"90b634a0f801ba0ae39c4b7d620a6a7d\") " pod="kube-system/kube-controller-manager-ci-4152-2-0-d-1c931fd560" Jan 13 20:19:27.788677 sudo[2845]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jan 13 20:19:27.789925 sudo[2845]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Jan 13 20:19:28.276677 sudo[2845]: pam_unix(sudo:session): session closed for user root Jan 13 20:19:28.363202 kubelet[2812]: I0113 20:19:28.362805 2812 apiserver.go:52] 
"Watching apiserver" Jan 13 20:19:28.375087 kubelet[2812]: I0113 20:19:28.374818 2812 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jan 13 20:19:28.457914 kubelet[2812]: E0113 20:19:28.457868 2812 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4152-2-0-d-1c931fd560\" already exists" pod="kube-system/kube-apiserver-ci-4152-2-0-d-1c931fd560" Jan 13 20:19:28.490313 kubelet[2812]: I0113 20:19:28.490228 2812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4152-2-0-d-1c931fd560" podStartSLOduration=1.490210251 podStartE2EDuration="1.490210251s" podCreationTimestamp="2025-01-13 20:19:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 20:19:28.472657712 +0000 UTC m=+1.190088086" watchObservedRunningTime="2025-01-13 20:19:28.490210251 +0000 UTC m=+1.207640625" Jan 13 20:19:28.506572 kubelet[2812]: I0113 20:19:28.506099 2812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4152-2-0-d-1c931fd560" podStartSLOduration=1.5060727489999999 podStartE2EDuration="1.506072749s" podCreationTimestamp="2025-01-13 20:19:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 20:19:28.49151231 +0000 UTC m=+1.208942684" watchObservedRunningTime="2025-01-13 20:19:28.506072749 +0000 UTC m=+1.223503123" Jan 13 20:19:28.523923 kubelet[2812]: I0113 20:19:28.523703 2812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4152-2-0-d-1c931fd560" podStartSLOduration=1.5236812450000001 podStartE2EDuration="1.523681245s" podCreationTimestamp="2025-01-13 20:19:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 20:19:28.50732809 +0000 UTC m=+1.224758504" watchObservedRunningTime="2025-01-13 20:19:28.523681245 +0000 UTC m=+1.241111659" Jan 13 20:19:30.283744 sudo[1872]: pam_unix(sudo:session): session closed for user root Jan 13 20:19:30.444641 sshd[1871]: Connection closed by 147.75.109.163 port 45610 Jan 13 20:19:30.446315 sshd-session[1869]: pam_unix(sshd:session): session closed for user core Jan 13 20:19:30.453333 systemd-logind[1466]: Session 7 logged out. Waiting for processes to exit. Jan 13 20:19:30.453931 systemd[1]: sshd@6-138.199.153.206:22-147.75.109.163:45610.service: Deactivated successfully. Jan 13 20:19:30.456038 systemd[1]: session-7.scope: Deactivated successfully. Jan 13 20:19:30.456507 systemd[1]: session-7.scope: Consumed 9.076s CPU time, 187.7M memory peak, 0B memory swap peak. Jan 13 20:19:30.457693 systemd-logind[1466]: Removed session 7. Jan 13 20:19:41.164545 kubelet[2812]: I0113 20:19:41.164424 2812 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 13 20:19:41.168006 containerd[1499]: time="2025-01-13T20:19:41.165495102Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Jan 13 20:19:41.168677 kubelet[2812]: I0113 20:19:41.168627 2812 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 13 20:19:41.786497 kubelet[2812]: I0113 20:19:41.786442 2812 topology_manager.go:215] "Topology Admit Handler" podUID="a041f360-da81-4c76-b8fa-11ad7c8fb094" podNamespace="kube-system" podName="kube-proxy-c6xxh" Jan 13 20:19:41.800219 systemd[1]: Created slice kubepods-besteffort-poda041f360_da81_4c76_b8fa_11ad7c8fb094.slice - libcontainer container kubepods-besteffort-poda041f360_da81_4c76_b8fa_11ad7c8fb094.slice. Jan 13 20:19:41.804551 kubelet[2812]: I0113 20:19:41.804506 2812 topology_manager.go:215] "Topology Admit Handler" podUID="6092ab0d-be62-40fb-9b18-c219712a481a" podNamespace="kube-system" podName="cilium-p666v" Jan 13 20:19:41.805591 kubelet[2812]: W0113 20:19:41.805547 2812 reflector.go:547] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:ci-4152-2-0-d-1c931fd560" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4152-2-0-d-1c931fd560' and this object Jan 13 20:19:41.805738 kubelet[2812]: E0113 20:19:41.805600 2812 reflector.go:150] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:ci-4152-2-0-d-1c931fd560" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4152-2-0-d-1c931fd560' and this object Jan 13 20:19:41.805738 kubelet[2812]: W0113 20:19:41.805640 2812 reflector.go:547] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ci-4152-2-0-d-1c931fd560" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4152-2-0-d-1c931fd560' and this object Jan 13 20:19:41.805738 kubelet[2812]: E0113 20:19:41.805649 2812 reflector.go:150] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ci-4152-2-0-d-1c931fd560" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4152-2-0-d-1c931fd560' and this object Jan 13 20:19:41.818189 systemd[1]: Created slice kubepods-burstable-pod6092ab0d_be62_40fb_9b18_c219712a481a.slice - libcontainer container kubepods-burstable-pod6092ab0d_be62_40fb_9b18_c219712a481a.slice. 
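The two "Created slice" entries show the kubelet's systemd cgroup driver at work: every admitted pod gets a slice named for its QoS class and UID, kubepods-besteffort-... for kube-proxy-c6xxh and kubepods-burstable-... for cilium-p666v. Those names are ordinary systemd units, so a pod's cgroup subtree can be inspected with stock tooling (slice name copied from the log above; the cgroup path assumes the unified v2 hierarchy, a sketch given shell access to the node):

    # Resource accounting for the cilium pod's slice
    systemctl status kubepods-burstable-pod6092ab0d_be62_40fb_9b18_c219712a481a.slice

    # Walk the whole pods cgroup tree from the root slice down
    systemd-cgls /sys/fs/cgroup/kubepods.slice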
Jan 13 20:19:41.867024 kubelet[2812]: I0113 20:19:41.866923 2812 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/6092ab0d-be62-40fb-9b18-c219712a481a-host-proc-sys-net\") pod \"cilium-p666v\" (UID: \"6092ab0d-be62-40fb-9b18-c219712a481a\") " pod="kube-system/cilium-p666v" Jan 13 20:19:41.867024 kubelet[2812]: I0113 20:19:41.866973 2812 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a041f360-da81-4c76-b8fa-11ad7c8fb094-lib-modules\") pod \"kube-proxy-c6xxh\" (UID: \"a041f360-da81-4c76-b8fa-11ad7c8fb094\") " pod="kube-system/kube-proxy-c6xxh" Jan 13 20:19:41.867024 kubelet[2812]: I0113 20:19:41.866993 2812 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6092ab0d-be62-40fb-9b18-c219712a481a-xtables-lock\") pod \"cilium-p666v\" (UID: \"6092ab0d-be62-40fb-9b18-c219712a481a\") " pod="kube-system/cilium-p666v" Jan 13 20:19:41.867024 kubelet[2812]: I0113 20:19:41.867010 2812 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/6092ab0d-be62-40fb-9b18-c219712a481a-hubble-tls\") pod \"cilium-p666v\" (UID: \"6092ab0d-be62-40fb-9b18-c219712a481a\") " pod="kube-system/cilium-p666v" Jan 13 20:19:41.867024 kubelet[2812]: I0113 20:19:41.867028 2812 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6092ab0d-be62-40fb-9b18-c219712a481a-etc-cni-netd\") pod \"cilium-p666v\" (UID: \"6092ab0d-be62-40fb-9b18-c219712a481a\") " pod="kube-system/cilium-p666v" Jan 13 20:19:41.867406 kubelet[2812]: I0113 20:19:41.867043 2812 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6092ab0d-be62-40fb-9b18-c219712a481a-lib-modules\") pod \"cilium-p666v\" (UID: \"6092ab0d-be62-40fb-9b18-c219712a481a\") " pod="kube-system/cilium-p666v" Jan 13 20:19:41.867406 kubelet[2812]: I0113 20:19:41.867059 2812 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/a041f360-da81-4c76-b8fa-11ad7c8fb094-kube-proxy\") pod \"kube-proxy-c6xxh\" (UID: \"a041f360-da81-4c76-b8fa-11ad7c8fb094\") " pod="kube-system/kube-proxy-c6xxh" Jan 13 20:19:41.867406 kubelet[2812]: I0113 20:19:41.867074 2812 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/6092ab0d-be62-40fb-9b18-c219712a481a-cilium-run\") pod \"cilium-p666v\" (UID: \"6092ab0d-be62-40fb-9b18-c219712a481a\") " pod="kube-system/cilium-p666v" Jan 13 20:19:41.867406 kubelet[2812]: I0113 20:19:41.867088 2812 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/6092ab0d-be62-40fb-9b18-c219712a481a-cilium-cgroup\") pod \"cilium-p666v\" (UID: \"6092ab0d-be62-40fb-9b18-c219712a481a\") " pod="kube-system/cilium-p666v" Jan 13 20:19:41.867406 kubelet[2812]: I0113 20:19:41.867131 2812 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: 
\"kubernetes.io/host-path/6092ab0d-be62-40fb-9b18-c219712a481a-cni-path\") pod \"cilium-p666v\" (UID: \"6092ab0d-be62-40fb-9b18-c219712a481a\") " pod="kube-system/cilium-p666v" Jan 13 20:19:41.868368 kubelet[2812]: I0113 20:19:41.867638 2812 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/6092ab0d-be62-40fb-9b18-c219712a481a-bpf-maps\") pod \"cilium-p666v\" (UID: \"6092ab0d-be62-40fb-9b18-c219712a481a\") " pod="kube-system/cilium-p666v" Jan 13 20:19:41.868368 kubelet[2812]: I0113 20:19:41.868284 2812 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6092ab0d-be62-40fb-9b18-c219712a481a-cilium-config-path\") pod \"cilium-p666v\" (UID: \"6092ab0d-be62-40fb-9b18-c219712a481a\") " pod="kube-system/cilium-p666v" Jan 13 20:19:41.868368 kubelet[2812]: I0113 20:19:41.868325 2812 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/6092ab0d-be62-40fb-9b18-c219712a481a-host-proc-sys-kernel\") pod \"cilium-p666v\" (UID: \"6092ab0d-be62-40fb-9b18-c219712a481a\") " pod="kube-system/cilium-p666v" Jan 13 20:19:41.868368 kubelet[2812]: I0113 20:19:41.868345 2812 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/6092ab0d-be62-40fb-9b18-c219712a481a-hostproc\") pod \"cilium-p666v\" (UID: \"6092ab0d-be62-40fb-9b18-c219712a481a\") " pod="kube-system/cilium-p666v" Jan 13 20:19:41.868368 kubelet[2812]: I0113 20:19:41.868379 2812 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/6092ab0d-be62-40fb-9b18-c219712a481a-clustermesh-secrets\") pod \"cilium-p666v\" (UID: \"6092ab0d-be62-40fb-9b18-c219712a481a\") " pod="kube-system/cilium-p666v" Jan 13 20:19:41.868740 kubelet[2812]: I0113 20:19:41.868417 2812 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7nmp6\" (UniqueName: \"kubernetes.io/projected/6092ab0d-be62-40fb-9b18-c219712a481a-kube-api-access-7nmp6\") pod \"cilium-p666v\" (UID: \"6092ab0d-be62-40fb-9b18-c219712a481a\") " pod="kube-system/cilium-p666v" Jan 13 20:19:41.868740 kubelet[2812]: I0113 20:19:41.868473 2812 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a041f360-da81-4c76-b8fa-11ad7c8fb094-xtables-lock\") pod \"kube-proxy-c6xxh\" (UID: \"a041f360-da81-4c76-b8fa-11ad7c8fb094\") " pod="kube-system/kube-proxy-c6xxh" Jan 13 20:19:41.868740 kubelet[2812]: I0113 20:19:41.868507 2812 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t2lqx\" (UniqueName: \"kubernetes.io/projected/a041f360-da81-4c76-b8fa-11ad7c8fb094-kube-api-access-t2lqx\") pod \"kube-proxy-c6xxh\" (UID: \"a041f360-da81-4c76-b8fa-11ad7c8fb094\") " pod="kube-system/kube-proxy-c6xxh" Jan 13 20:19:42.239377 kubelet[2812]: I0113 20:19:42.238823 2812 topology_manager.go:215] "Topology Admit Handler" podUID="c5d96d57-bd3c-4987-a6e1-dc2d1179d4fa" podNamespace="kube-system" podName="cilium-operator-599987898-9x54r" Jan 13 20:19:42.253676 systemd[1]: Created slice 
kubepods-besteffort-podc5d96d57_bd3c_4987_a6e1_dc2d1179d4fa.slice - libcontainer container kubepods-besteffort-podc5d96d57_bd3c_4987_a6e1_dc2d1179d4fa.slice. Jan 13 20:19:42.272728 kubelet[2812]: I0113 20:19:42.272677 2812 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c5d96d57-bd3c-4987-a6e1-dc2d1179d4fa-cilium-config-path\") pod \"cilium-operator-599987898-9x54r\" (UID: \"c5d96d57-bd3c-4987-a6e1-dc2d1179d4fa\") " pod="kube-system/cilium-operator-599987898-9x54r" Jan 13 20:19:42.273256 kubelet[2812]: I0113 20:19:42.273010 2812 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w8kz7\" (UniqueName: \"kubernetes.io/projected/c5d96d57-bd3c-4987-a6e1-dc2d1179d4fa-kube-api-access-w8kz7\") pod \"cilium-operator-599987898-9x54r\" (UID: \"c5d96d57-bd3c-4987-a6e1-dc2d1179d4fa\") " pod="kube-system/cilium-operator-599987898-9x54r" Jan 13 20:19:42.995328 kubelet[2812]: E0113 20:19:42.994331 2812 projected.go:294] Couldn't get configMap kube-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Jan 13 20:19:42.995328 kubelet[2812]: E0113 20:19:42.994391 2812 projected.go:200] Error preparing data for projected volume kube-api-access-t2lqx for pod kube-system/kube-proxy-c6xxh: failed to sync configmap cache: timed out waiting for the condition Jan 13 20:19:42.995328 kubelet[2812]: E0113 20:19:42.994517 2812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/a041f360-da81-4c76-b8fa-11ad7c8fb094-kube-api-access-t2lqx podName:a041f360-da81-4c76-b8fa-11ad7c8fb094 nodeName:}" failed. No retries permitted until 2025-01-13 20:19:43.494483707 +0000 UTC m=+16.211914121 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-t2lqx" (UniqueName: "kubernetes.io/projected/a041f360-da81-4c76-b8fa-11ad7c8fb094-kube-api-access-t2lqx") pod "kube-proxy-c6xxh" (UID: "a041f360-da81-4c76-b8fa-11ad7c8fb094") : failed to sync configmap cache: timed out waiting for the condition Jan 13 20:19:42.998183 kubelet[2812]: E0113 20:19:42.998018 2812 projected.go:294] Couldn't get configMap kube-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Jan 13 20:19:42.998183 kubelet[2812]: E0113 20:19:42.998059 2812 projected.go:200] Error preparing data for projected volume kube-api-access-7nmp6 for pod kube-system/cilium-p666v: failed to sync configmap cache: timed out waiting for the condition Jan 13 20:19:42.998183 kubelet[2812]: E0113 20:19:42.998123 2812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6092ab0d-be62-40fb-9b18-c219712a481a-kube-api-access-7nmp6 podName:6092ab0d-be62-40fb-9b18-c219712a481a nodeName:}" failed. No retries permitted until 2025-01-13 20:19:43.498100712 +0000 UTC m=+16.215531086 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-7nmp6" (UniqueName: "kubernetes.io/projected/6092ab0d-be62-40fb-9b18-c219712a481a-kube-api-access-7nmp6") pod "cilium-p666v" (UID: "6092ab0d-be62-40fb-9b18-c219712a481a") : failed to sync configmap cache: timed out waiting for the condition Jan 13 20:19:43.462716 containerd[1499]: time="2025-01-13T20:19:43.462585173Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-9x54r,Uid:c5d96d57-bd3c-4987-a6e1-dc2d1179d4fa,Namespace:kube-system,Attempt:0,}" Jan 13 20:19:43.492295 containerd[1499]: time="2025-01-13T20:19:43.491956563Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:19:43.492295 containerd[1499]: time="2025-01-13T20:19:43.492021640Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:19:43.492295 containerd[1499]: time="2025-01-13T20:19:43.492071358Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:19:43.492739 containerd[1499]: time="2025-01-13T20:19:43.492302588Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:19:43.512270 systemd[1]: run-containerd-runc-k8s.io-339c871fa399ea9a51bfedf5b55cff45850fda5f209e9b7d71458ddae7c74c58-runc.i1h7yt.mount: Deactivated successfully. Jan 13 20:19:43.518412 systemd[1]: Started cri-containerd-339c871fa399ea9a51bfedf5b55cff45850fda5f209e9b7d71458ddae7c74c58.scope - libcontainer container 339c871fa399ea9a51bfedf5b55cff45850fda5f209e9b7d71458ddae7c74c58. Jan 13 20:19:43.554042 containerd[1499]: time="2025-01-13T20:19:43.553929524Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-9x54r,Uid:c5d96d57-bd3c-4987-a6e1-dc2d1179d4fa,Namespace:kube-system,Attempt:0,} returns sandbox id \"339c871fa399ea9a51bfedf5b55cff45850fda5f209e9b7d71458ddae7c74c58\"" Jan 13 20:19:43.556677 containerd[1499]: time="2025-01-13T20:19:43.556434177Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jan 13 20:19:43.610980 containerd[1499]: time="2025-01-13T20:19:43.610709626Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-c6xxh,Uid:a041f360-da81-4c76-b8fa-11ad7c8fb094,Namespace:kube-system,Attempt:0,}" Jan 13 20:19:43.623252 containerd[1499]: time="2025-01-13T20:19:43.622899707Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-p666v,Uid:6092ab0d-be62-40fb-9b18-c219712a481a,Namespace:kube-system,Attempt:0,}" Jan 13 20:19:43.635547 containerd[1499]: time="2025-01-13T20:19:43.635312538Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:19:43.635547 containerd[1499]: time="2025-01-13T20:19:43.635373816Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:19:43.635547 containerd[1499]: time="2025-01-13T20:19:43.635392015Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:19:43.635747 containerd[1499]: time="2025-01-13T20:19:43.635503130Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:19:43.656578 containerd[1499]: time="2025-01-13T20:19:43.656335043Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:19:43.656578 containerd[1499]: time="2025-01-13T20:19:43.656406640Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:19:43.656578 containerd[1499]: time="2025-01-13T20:19:43.656423839Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:19:43.656578 containerd[1499]: time="2025-01-13T20:19:43.656518315Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:19:43.658412 systemd[1]: Started cri-containerd-95cd54cab17140811324f8dbc05e8bed771444a1b4906136367ecc70984e0d67.scope - libcontainer container 95cd54cab17140811324f8dbc05e8bed771444a1b4906136367ecc70984e0d67. Jan 13 20:19:43.679478 systemd[1]: Started cri-containerd-6d8c7ddde98713539d3ad8f1da826cc1307b35c498c1de6cbd3d851ed960639d.scope - libcontainer container 6d8c7ddde98713539d3ad8f1da826cc1307b35c498c1de6cbd3d851ed960639d. Jan 13 20:19:43.696081 containerd[1499]: time="2025-01-13T20:19:43.696019433Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-c6xxh,Uid:a041f360-da81-4c76-b8fa-11ad7c8fb094,Namespace:kube-system,Attempt:0,} returns sandbox id \"95cd54cab17140811324f8dbc05e8bed771444a1b4906136367ecc70984e0d67\"" Jan 13 20:19:43.701657 containerd[1499]: time="2025-01-13T20:19:43.701599716Z" level=info msg="CreateContainer within sandbox \"95cd54cab17140811324f8dbc05e8bed771444a1b4906136367ecc70984e0d67\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 13 20:19:43.722979 containerd[1499]: time="2025-01-13T20:19:43.722642180Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-p666v,Uid:6092ab0d-be62-40fb-9b18-c219712a481a,Namespace:kube-system,Attempt:0,} returns sandbox id \"6d8c7ddde98713539d3ad8f1da826cc1307b35c498c1de6cbd3d851ed960639d\"" Jan 13 20:19:43.726916 containerd[1499]: time="2025-01-13T20:19:43.726872200Z" level=info msg="CreateContainer within sandbox \"95cd54cab17140811324f8dbc05e8bed771444a1b4906136367ecc70984e0d67\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"a813ddf9165d3f33a1da1cc0b65244ba57a2dc4d85f2735cac64a3bb0787bfe3\"" Jan 13 20:19:43.729494 containerd[1499]: time="2025-01-13T20:19:43.727635967Z" level=info msg="StartContainer for \"a813ddf9165d3f33a1da1cc0b65244ba57a2dc4d85f2735cac64a3bb0787bfe3\"" Jan 13 20:19:43.757542 systemd[1]: Started cri-containerd-a813ddf9165d3f33a1da1cc0b65244ba57a2dc4d85f2735cac64a3bb0787bfe3.scope - libcontainer container a813ddf9165d3f33a1da1cc0b65244ba57a2dc4d85f2735cac64a3bb0787bfe3. 
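Each kube-proxy message above is one leg of the CRI lifecycle the kubelet drives: RunPodSandbox returns a sandbox ID (95cd54cab171...), CreateContainer returns a container ID (a813ddf9165d...), StartContainer launches it, and systemd tracks every piece as a cri-containerd-*.scope unit. The same IDs are visible from the node with crictl (containerd's default socket path is assumed here; the ID prefix is copied from the log):

    # Sandboxes and containers as the kubelet sees them over CRI
    crictl --runtime-endpoint unix:///run/containerd/containerd.sock pods
    crictl --runtime-endpoint unix:///run/containerd/containerd.sock ps -a

    # Inspect the kube-proxy container by its ID prefix
    crictl --runtime-endpoint unix:///run/containerd/containerd.sock inspect a813ddf9165d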
Jan 13 20:19:43.789329 containerd[1499]: time="2025-01-13T20:19:43.789285342Z" level=info msg="StartContainer for \"a813ddf9165d3f33a1da1cc0b65244ba57a2dc4d85f2735cac64a3bb0787bfe3\" returns successfully" Jan 13 20:19:44.499898 kubelet[2812]: I0113 20:19:44.499073 2812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-c6xxh" podStartSLOduration=3.499055587 podStartE2EDuration="3.499055587s" podCreationTimestamp="2025-01-13 20:19:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 20:19:44.49827538 +0000 UTC m=+17.215705754" watchObservedRunningTime="2025-01-13 20:19:44.499055587 +0000 UTC m=+17.216485921" Jan 13 20:19:45.137448 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3768892394.mount: Deactivated successfully. Jan 13 20:19:50.203257 containerd[1499]: time="2025-01-13T20:19:50.203044117Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:19:50.204818 containerd[1499]: time="2025-01-13T20:19:50.204739287Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17137694" Jan 13 20:19:50.206241 containerd[1499]: time="2025-01-13T20:19:50.205876640Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:19:50.207776 containerd[1499]: time="2025-01-13T20:19:50.207649407Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 6.651174632s" Jan 13 20:19:50.207776 containerd[1499]: time="2025-01-13T20:19:50.207687806Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Jan 13 20:19:50.211347 containerd[1499]: time="2025-01-13T20:19:50.211036468Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jan 13 20:19:50.214379 containerd[1499]: time="2025-01-13T20:19:50.214179338Z" level=info msg="CreateContainer within sandbox \"339c871fa399ea9a51bfedf5b55cff45850fda5f209e9b7d71458ddae7c74c58\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jan 13 20:19:50.232973 containerd[1499]: time="2025-01-13T20:19:50.232922726Z" level=info msg="CreateContainer within sandbox \"339c871fa399ea9a51bfedf5b55cff45850fda5f209e9b7d71458ddae7c74c58\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"1270686c592396518867c1503a4e349a183c9f79ce7a2d7abe110e492e72071c\"" Jan 13 20:19:50.233584 containerd[1499]: time="2025-01-13T20:19:50.233559859Z" level=info msg="StartContainer for \"1270686c592396518867c1503a4e349a183c9f79ce7a2d7abe110e492e72071c\"" Jan 13 20:19:50.274628 systemd[1]: Started 
cri-containerd-1270686c592396518867c1503a4e349a183c9f79ce7a2d7abe110e492e72071c.scope - libcontainer container 1270686c592396518867c1503a4e349a183c9f79ce7a2d7abe110e492e72071c. Jan 13 20:19:50.304987 containerd[1499]: time="2025-01-13T20:19:50.304932238Z" level=info msg="StartContainer for \"1270686c592396518867c1503a4e349a183c9f79ce7a2d7abe110e492e72071c\" returns successfully" Jan 13 20:19:50.540610 kubelet[2812]: I0113 20:19:50.540533 2812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-599987898-9x54r" podStartSLOduration=1.8868332410000002 podStartE2EDuration="8.54048993s" podCreationTimestamp="2025-01-13 20:19:42 +0000 UTC" firstStartedPulling="2025-01-13 20:19:43.555818723 +0000 UTC m=+16.273249097" lastFinishedPulling="2025-01-13 20:19:50.209475332 +0000 UTC m=+22.926905786" observedRunningTime="2025-01-13 20:19:50.540270939 +0000 UTC m=+23.257701313" watchObservedRunningTime="2025-01-13 20:19:50.54048993 +0000 UTC m=+23.257920264" Jan 13 20:19:54.546352 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2503191915.mount: Deactivated successfully. Jan 13 20:19:56.150027 containerd[1499]: time="2025-01-13T20:19:56.149058083Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:19:56.150556 containerd[1499]: time="2025-01-13T20:19:56.150513065Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157651510" Jan 13 20:19:56.151264 containerd[1499]: time="2025-01-13T20:19:56.151238276Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:19:56.153178 containerd[1499]: time="2025-01-13T20:19:56.153127240Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 5.941978297s" Jan 13 20:19:56.153178 containerd[1499]: time="2025-01-13T20:19:56.153174158Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Jan 13 20:19:56.158063 containerd[1499]: time="2025-01-13T20:19:56.158012963Z" level=info msg="CreateContainer within sandbox \"6d8c7ddde98713539d3ad8f1da826cc1307b35c498c1de6cbd3d851ed960639d\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 13 20:19:56.174761 containerd[1499]: time="2025-01-13T20:19:56.174703531Z" level=info msg="CreateContainer within sandbox \"6d8c7ddde98713539d3ad8f1da826cc1307b35c498c1de6cbd3d851ed960639d\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"ce903243f9db7fe9096c28282ac48d6f8731415fc3f5f6b0d3d96dcfdd711a2f\"" Jan 13 20:19:56.175614 containerd[1499]: time="2025-01-13T20:19:56.175575056Z" level=info msg="StartContainer for \"ce903243f9db7fe9096c28282ac48d6f8731415fc3f5f6b0d3d96dcfdd711a2f\"" Jan 13 20:19:56.212467 systemd[1]: Started 
cri-containerd-ce903243f9db7fe9096c28282ac48d6f8731415fc3f5f6b0d3d96dcfdd711a2f.scope - libcontainer container ce903243f9db7fe9096c28282ac48d6f8731415fc3f5f6b0d3d96dcfdd711a2f. Jan 13 20:19:56.245432 containerd[1499]: time="2025-01-13T20:19:56.245373206Z" level=info msg="StartContainer for \"ce903243f9db7fe9096c28282ac48d6f8731415fc3f5f6b0d3d96dcfdd711a2f\" returns successfully" Jan 13 20:19:56.262174 systemd[1]: cri-containerd-ce903243f9db7fe9096c28282ac48d6f8731415fc3f5f6b0d3d96dcfdd711a2f.scope: Deactivated successfully. Jan 13 20:19:56.285312 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ce903243f9db7fe9096c28282ac48d6f8731415fc3f5f6b0d3d96dcfdd711a2f-rootfs.mount: Deactivated successfully. Jan 13 20:19:56.425657 containerd[1499]: time="2025-01-13T20:19:56.425067252Z" level=info msg="shim disconnected" id=ce903243f9db7fe9096c28282ac48d6f8731415fc3f5f6b0d3d96dcfdd711a2f namespace=k8s.io Jan 13 20:19:56.425657 containerd[1499]: time="2025-01-13T20:19:56.425152688Z" level=warning msg="cleaning up after shim disconnected" id=ce903243f9db7fe9096c28282ac48d6f8731415fc3f5f6b0d3d96dcfdd711a2f namespace=k8s.io Jan 13 20:19:56.425657 containerd[1499]: time="2025-01-13T20:19:56.425166328Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 20:19:56.524885 containerd[1499]: time="2025-01-13T20:19:56.524527488Z" level=info msg="CreateContainer within sandbox \"6d8c7ddde98713539d3ad8f1da826cc1307b35c498c1de6cbd3d851ed960639d\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 13 20:19:56.545295 containerd[1499]: time="2025-01-13T20:19:56.545060021Z" level=info msg="CreateContainer within sandbox \"6d8c7ddde98713539d3ad8f1da826cc1307b35c498c1de6cbd3d851ed960639d\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"98ede64caff9cd400c4e20aeba20fe811c54a94d20689f515c259c51431e76e9\"" Jan 13 20:19:56.545891 containerd[1499]: time="2025-01-13T20:19:56.545844029Z" level=info msg="StartContainer for \"98ede64caff9cd400c4e20aeba20fe811c54a94d20689f515c259c51431e76e9\"" Jan 13 20:19:56.575003 systemd[1]: Started cri-containerd-98ede64caff9cd400c4e20aeba20fe811c54a94d20689f515c259c51431e76e9.scope - libcontainer container 98ede64caff9cd400c4e20aeba20fe811c54a94d20689f515c259c51431e76e9. Jan 13 20:19:56.603069 containerd[1499]: time="2025-01-13T20:19:56.602516588Z" level=info msg="StartContainer for \"98ede64caff9cd400c4e20aeba20fe811c54a94d20689f515c259c51431e76e9\" returns successfully" Jan 13 20:19:56.617158 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 13 20:19:56.617616 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 13 20:19:56.617696 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jan 13 20:19:56.625152 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 13 20:19:56.625643 systemd[1]: cri-containerd-98ede64caff9cd400c4e20aeba20fe811c54a94d20689f515c259c51431e76e9.scope: Deactivated successfully. Jan 13 20:19:56.646540 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
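What looks like churn here, scopes starting, being deactivated, shims disconnecting, is the normal serial run of Cilium's init containers: mount-cgroup exits, then apply-sysctl-overwrites runs and exits in turn, each container living only long enough to do its one setup step. The progression is easier to follow from the API side than from the shim messages (pod name from this log; a sketch, assuming kubectl access to the cluster):

    # Init containers run one at a time; each should end up Terminated/Completed
    kubectl -n kube-system get pod cilium-p666v \
      -o jsonpath='{range .status.initContainerStatuses[*]}{.name}{"\t"}{.state}{"\n"}{end}'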
Jan 13 20:19:56.657077 containerd[1499]: time="2025-01-13T20:19:56.656989155Z" level=info msg="shim disconnected" id=98ede64caff9cd400c4e20aeba20fe811c54a94d20689f515c259c51431e76e9 namespace=k8s.io Jan 13 20:19:56.657077 containerd[1499]: time="2025-01-13T20:19:56.657051912Z" level=warning msg="cleaning up after shim disconnected" id=98ede64caff9cd400c4e20aeba20fe811c54a94d20689f515c259c51431e76e9 namespace=k8s.io Jan 13 20:19:56.657077 containerd[1499]: time="2025-01-13T20:19:56.657075591Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 20:19:57.531673 containerd[1499]: time="2025-01-13T20:19:57.531491184Z" level=info msg="CreateContainer within sandbox \"6d8c7ddde98713539d3ad8f1da826cc1307b35c498c1de6cbd3d851ed960639d\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jan 13 20:19:57.569686 containerd[1499]: time="2025-01-13T20:19:57.569549497Z" level=info msg="CreateContainer within sandbox \"6d8c7ddde98713539d3ad8f1da826cc1307b35c498c1de6cbd3d851ed960639d\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"4f838d08f0d0887d0398edd41674d9ea34b3e777b22be5e89fbe0e12664b9a1c\"" Jan 13 20:19:57.570105 containerd[1499]: time="2025-01-13T20:19:57.570083036Z" level=info msg="StartContainer for \"4f838d08f0d0887d0398edd41674d9ea34b3e777b22be5e89fbe0e12664b9a1c\"" Jan 13 20:19:57.607605 systemd[1]: Started cri-containerd-4f838d08f0d0887d0398edd41674d9ea34b3e777b22be5e89fbe0e12664b9a1c.scope - libcontainer container 4f838d08f0d0887d0398edd41674d9ea34b3e777b22be5e89fbe0e12664b9a1c. Jan 13 20:19:57.647295 containerd[1499]: time="2025-01-13T20:19:57.645585287Z" level=info msg="StartContainer for \"4f838d08f0d0887d0398edd41674d9ea34b3e777b22be5e89fbe0e12664b9a1c\" returns successfully" Jan 13 20:19:57.652165 systemd[1]: cri-containerd-4f838d08f0d0887d0398edd41674d9ea34b3e777b22be5e89fbe0e12664b9a1c.scope: Deactivated successfully. Jan 13 20:19:57.687114 containerd[1499]: time="2025-01-13T20:19:57.686910709Z" level=info msg="shim disconnected" id=4f838d08f0d0887d0398edd41674d9ea34b3e777b22be5e89fbe0e12664b9a1c namespace=k8s.io Jan 13 20:19:57.687114 containerd[1499]: time="2025-01-13T20:19:57.686989826Z" level=warning msg="cleaning up after shim disconnected" id=4f838d08f0d0887d0398edd41674d9ea34b3e777b22be5e89fbe0e12664b9a1c namespace=k8s.io Jan 13 20:19:57.687114 containerd[1499]: time="2025-01-13T20:19:57.687005506Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 20:19:58.170686 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4f838d08f0d0887d0398edd41674d9ea34b3e777b22be5e89fbe0e12664b9a1c-rootfs.mount: Deactivated successfully. 
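mount-bpf-fs, the init container that just ran (4f838d08f0d0...), exists to mount the BPF filesystem the agent needs, and unlike the shim it leaves a durable trace on the host. A quick check from a node shell (the mount point is the conventional one Cilium uses, an assumption, as this log never names it):

    # The BPF filesystem should now be mounted
    mountpoint /sys/fs/bpf
    mount | grep /sys/fs/bpf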
Jan 13 20:19:58.537305 containerd[1499]: time="2025-01-13T20:19:58.537248551Z" level=info msg="CreateContainer within sandbox \"6d8c7ddde98713539d3ad8f1da826cc1307b35c498c1de6cbd3d851ed960639d\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jan 13 20:19:58.558331 containerd[1499]: time="2025-01-13T20:19:58.558185634Z" level=info msg="CreateContainer within sandbox \"6d8c7ddde98713539d3ad8f1da826cc1307b35c498c1de6cbd3d851ed960639d\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"7abd449e1ed5e5f8bd3659a837780e519287ae92aac359c19e74c24dd118fbe1\"" Jan 13 20:19:58.558991 containerd[1499]: time="2025-01-13T20:19:58.558961563Z" level=info msg="StartContainer for \"7abd449e1ed5e5f8bd3659a837780e519287ae92aac359c19e74c24dd118fbe1\"" Jan 13 20:19:58.615349 systemd[1]: Started cri-containerd-7abd449e1ed5e5f8bd3659a837780e519287ae92aac359c19e74c24dd118fbe1.scope - libcontainer container 7abd449e1ed5e5f8bd3659a837780e519287ae92aac359c19e74c24dd118fbe1. Jan 13 20:19:58.654585 systemd[1]: cri-containerd-7abd449e1ed5e5f8bd3659a837780e519287ae92aac359c19e74c24dd118fbe1.scope: Deactivated successfully. Jan 13 20:19:58.658490 containerd[1499]: time="2025-01-13T20:19:58.658357069Z" level=info msg="StartContainer for \"7abd449e1ed5e5f8bd3659a837780e519287ae92aac359c19e74c24dd118fbe1\" returns successfully" Jan 13 20:19:58.682506 containerd[1499]: time="2025-01-13T20:19:58.682304831Z" level=info msg="shim disconnected" id=7abd449e1ed5e5f8bd3659a837780e519287ae92aac359c19e74c24dd118fbe1 namespace=k8s.io Jan 13 20:19:58.682506 containerd[1499]: time="2025-01-13T20:19:58.682358189Z" level=warning msg="cleaning up after shim disconnected" id=7abd449e1ed5e5f8bd3659a837780e519287ae92aac359c19e74c24dd118fbe1 namespace=k8s.io Jan 13 20:19:58.682506 containerd[1499]: time="2025-01-13T20:19:58.682365749Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 20:19:59.170828 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7abd449e1ed5e5f8bd3659a837780e519287ae92aac359c19e74c24dd118fbe1-rootfs.mount: Deactivated successfully. Jan 13 20:19:59.543494 containerd[1499]: time="2025-01-13T20:19:59.543196726Z" level=info msg="CreateContainer within sandbox \"6d8c7ddde98713539d3ad8f1da826cc1307b35c498c1de6cbd3d851ed960639d\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jan 13 20:19:59.569897 containerd[1499]: time="2025-01-13T20:19:59.569839905Z" level=info msg="CreateContainer within sandbox \"6d8c7ddde98713539d3ad8f1da826cc1307b35c498c1de6cbd3d851ed960639d\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"adf52a9d81d1bb4508c229cefd3cf81c90435c73d66e2c3aa4e8c92ee230395b\"" Jan 13 20:19:59.570588 containerd[1499]: time="2025-01-13T20:19:59.570541797Z" level=info msg="StartContainer for \"adf52a9d81d1bb4508c229cefd3cf81c90435c73d66e2c3aa4e8c92ee230395b\"" Jan 13 20:19:59.602446 systemd[1]: Started cri-containerd-adf52a9d81d1bb4508c229cefd3cf81c90435c73d66e2c3aa4e8c92ee230395b.scope - libcontainer container adf52a9d81d1bb4508c229cefd3cf81c90435c73d66e2c3aa4e8c92ee230395b. 
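clean-cilium-state is the last of the init containers; the scope systemd just started (adf52a9d81d1...) is the long-running cilium-agent itself. Once it is up, the agent exposes its own health summary, usually the fastest way to confirm the datapath is working (pod name from this log; the cilium CLI invoked here is the one shipped inside the agent image, so no host install is assumed):

    # One-line health verdict from the running agent
    kubectl -n kube-system exec cilium-p666v -- cilium status --brief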
Jan 13 20:19:59.634692 containerd[1499]: time="2025-01-13T20:19:59.634610764Z" level=info msg="StartContainer for \"adf52a9d81d1bb4508c229cefd3cf81c90435c73d66e2c3aa4e8c92ee230395b\" returns successfully" Jan 13 20:19:59.762167 kubelet[2812]: I0113 20:19:59.762118 2812 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Jan 13 20:19:59.794176 kubelet[2812]: I0113 20:19:59.794030 2812 topology_manager.go:215] "Topology Admit Handler" podUID="cbf40c9b-a63e-49bd-8c71-695dcb1da865" podNamespace="kube-system" podName="coredns-7db6d8ff4d-5dfcf" Jan 13 20:19:59.802031 kubelet[2812]: I0113 20:19:59.801976 2812 topology_manager.go:215] "Topology Admit Handler" podUID="6408dc2d-b38f-48b6-addd-58ca9a9d38f1" podNamespace="kube-system" podName="coredns-7db6d8ff4d-km7vh" Jan 13 20:19:59.805924 systemd[1]: Created slice kubepods-burstable-podcbf40c9b_a63e_49bd_8c71_695dcb1da865.slice - libcontainer container kubepods-burstable-podcbf40c9b_a63e_49bd_8c71_695dcb1da865.slice. Jan 13 20:19:59.816034 systemd[1]: Created slice kubepods-burstable-pod6408dc2d_b38f_48b6_addd_58ca9a9d38f1.slice - libcontainer container kubepods-burstable-pod6408dc2d_b38f_48b6_addd_58ca9a9d38f1.slice. Jan 13 20:19:59.898140 kubelet[2812]: I0113 20:19:59.898080 2812 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kg625\" (UniqueName: \"kubernetes.io/projected/cbf40c9b-a63e-49bd-8c71-695dcb1da865-kube-api-access-kg625\") pod \"coredns-7db6d8ff4d-5dfcf\" (UID: \"cbf40c9b-a63e-49bd-8c71-695dcb1da865\") " pod="kube-system/coredns-7db6d8ff4d-5dfcf" Jan 13 20:19:59.898140 kubelet[2812]: I0113 20:19:59.898143 2812 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5fth6\" (UniqueName: \"kubernetes.io/projected/6408dc2d-b38f-48b6-addd-58ca9a9d38f1-kube-api-access-5fth6\") pod \"coredns-7db6d8ff4d-km7vh\" (UID: \"6408dc2d-b38f-48b6-addd-58ca9a9d38f1\") " pod="kube-system/coredns-7db6d8ff4d-km7vh" Jan 13 20:19:59.898381 kubelet[2812]: I0113 20:19:59.898166 2812 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6408dc2d-b38f-48b6-addd-58ca9a9d38f1-config-volume\") pod \"coredns-7db6d8ff4d-km7vh\" (UID: \"6408dc2d-b38f-48b6-addd-58ca9a9d38f1\") " pod="kube-system/coredns-7db6d8ff4d-km7vh" Jan 13 20:19:59.898381 kubelet[2812]: I0113 20:19:59.898187 2812 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cbf40c9b-a63e-49bd-8c71-695dcb1da865-config-volume\") pod \"coredns-7db6d8ff4d-5dfcf\" (UID: \"cbf40c9b-a63e-49bd-8c71-695dcb1da865\") " pod="kube-system/coredns-7db6d8ff4d-5dfcf" Jan 13 20:20:00.112743 containerd[1499]: time="2025-01-13T20:20:00.111999036Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-5dfcf,Uid:cbf40c9b-a63e-49bd-8c71-695dcb1da865,Namespace:kube-system,Attempt:0,}" Jan 13 20:20:00.123695 containerd[1499]: time="2025-01-13T20:20:00.123417463Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-km7vh,Uid:6408dc2d-b38f-48b6-addd-58ca9a9d38f1,Namespace:kube-system,Attempt:0,}" Jan 13 20:20:00.569188 kubelet[2812]: I0113 20:20:00.569029 2812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-p666v" podStartSLOduration=7.138977763 podStartE2EDuration="19.568720697s" 
podCreationTimestamp="2025-01-13 20:19:41 +0000 UTC" firstStartedPulling="2025-01-13 20:19:43.724371146 +0000 UTC m=+16.441801520" lastFinishedPulling="2025-01-13 20:19:56.15411412 +0000 UTC m=+28.871544454" observedRunningTime="2025-01-13 20:20:00.566311073 +0000 UTC m=+33.283741447" watchObservedRunningTime="2025-01-13 20:20:00.568720697 +0000 UTC m=+33.286151071" Jan 13 20:20:01.803616 systemd-networkd[1386]: cilium_host: Link UP Jan 13 20:20:01.812001 systemd-networkd[1386]: cilium_net: Link UP Jan 13 20:20:01.812458 systemd-networkd[1386]: cilium_net: Gained carrier Jan 13 20:20:01.812618 systemd-networkd[1386]: cilium_host: Gained carrier Jan 13 20:20:01.923928 systemd-networkd[1386]: cilium_vxlan: Link UP Jan 13 20:20:01.923936 systemd-networkd[1386]: cilium_vxlan: Gained carrier Jan 13 20:20:01.948602 systemd-networkd[1386]: cilium_net: Gained IPv6LL Jan 13 20:20:02.215245 kernel: NET: Registered PF_ALG protocol family Jan 13 20:20:02.668464 systemd-networkd[1386]: cilium_host: Gained IPv6LL Jan 13 20:20:02.963411 systemd-networkd[1386]: lxc_health: Link UP Jan 13 20:20:02.969271 systemd-networkd[1386]: lxc_health: Gained carrier Jan 13 20:20:03.168573 systemd-networkd[1386]: lxc76894316c4bc: Link UP Jan 13 20:20:03.179415 kernel: eth0: renamed from tmpe834a Jan 13 20:20:03.185704 systemd-networkd[1386]: lxc76894316c4bc: Gained carrier Jan 13 20:20:03.204822 systemd-networkd[1386]: lxc0c52dc162974: Link UP Jan 13 20:20:03.209365 kernel: eth0: renamed from tmpcf41d Jan 13 20:20:03.212798 systemd-networkd[1386]: lxc0c52dc162974: Gained carrier Jan 13 20:20:03.820395 systemd-networkd[1386]: cilium_vxlan: Gained IPv6LL Jan 13 20:20:04.396488 systemd-networkd[1386]: lxc_health: Gained IPv6LL Jan 13 20:20:04.781387 systemd-networkd[1386]: lxc76894316c4bc: Gained IPv6LL Jan 13 20:20:05.037105 systemd-networkd[1386]: lxc0c52dc162974: Gained IPv6LL Jan 13 20:20:07.270040 containerd[1499]: time="2025-01-13T20:20:07.269437157Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:20:07.270040 containerd[1499]: time="2025-01-13T20:20:07.269506634Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:20:07.270040 containerd[1499]: time="2025-01-13T20:20:07.269518634Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:20:07.271392 containerd[1499]: time="2025-01-13T20:20:07.271076253Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:20:07.311044 containerd[1499]: time="2025-01-13T20:20:07.310121374Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:20:07.311044 containerd[1499]: time="2025-01-13T20:20:07.310182811Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:20:07.311044 containerd[1499]: time="2025-01-13T20:20:07.310195291Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:20:07.311044 containerd[1499]: time="2025-01-13T20:20:07.310323326Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:20:07.314434 systemd[1]: Started cri-containerd-e834a9f9b5f6638e1da051ee03ffaf27db782a9a7ef8557bc17ccdc8fcc2035c.scope - libcontainer container e834a9f9b5f6638e1da051ee03ffaf27db782a9a7ef8557bc17ccdc8fcc2035c. Jan 13 20:20:07.346478 systemd[1]: Started cri-containerd-cf41dd9c46f62cfc8f0b96c1895ffc6d10604bfc3ddd195d04b0ae0e63822bb5.scope - libcontainer container cf41dd9c46f62cfc8f0b96c1895ffc6d10604bfc3ddd195d04b0ae0e63822bb5. Jan 13 20:20:07.415427 containerd[1499]: time="2025-01-13T20:20:07.415361638Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-5dfcf,Uid:cbf40c9b-a63e-49bd-8c71-695dcb1da865,Namespace:kube-system,Attempt:0,} returns sandbox id \"e834a9f9b5f6638e1da051ee03ffaf27db782a9a7ef8557bc17ccdc8fcc2035c\"" Jan 13 20:20:07.432800 containerd[1499]: time="2025-01-13T20:20:07.431795718Z" level=info msg="CreateContainer within sandbox \"e834a9f9b5f6638e1da051ee03ffaf27db782a9a7ef8557bc17ccdc8fcc2035c\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 13 20:20:07.436556 containerd[1499]: time="2025-01-13T20:20:07.436516815Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-km7vh,Uid:6408dc2d-b38f-48b6-addd-58ca9a9d38f1,Namespace:kube-system,Attempt:0,} returns sandbox id \"cf41dd9c46f62cfc8f0b96c1895ffc6d10604bfc3ddd195d04b0ae0e63822bb5\"" Jan 13 20:20:07.443689 containerd[1499]: time="2025-01-13T20:20:07.443650097Z" level=info msg="CreateContainer within sandbox \"cf41dd9c46f62cfc8f0b96c1895ffc6d10604bfc3ddd195d04b0ae0e63822bb5\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 13 20:20:07.465086 containerd[1499]: time="2025-01-13T20:20:07.465021265Z" level=info msg="CreateContainer within sandbox \"e834a9f9b5f6638e1da051ee03ffaf27db782a9a7ef8557bc17ccdc8fcc2035c\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"4a3fd312a02dc321338e715ca48e0bc48aa412750b48598c437365216813e965\"" Jan 13 20:20:07.467362 containerd[1499]: time="2025-01-13T20:20:07.466516967Z" level=info msg="StartContainer for \"4a3fd312a02dc321338e715ca48e0bc48aa412750b48598c437365216813e965\"" Jan 13 20:20:07.467362 containerd[1499]: time="2025-01-13T20:20:07.466628003Z" level=info msg="CreateContainer within sandbox \"cf41dd9c46f62cfc8f0b96c1895ffc6d10604bfc3ddd195d04b0ae0e63822bb5\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"0e59f3efaa9b21295a1e70f222f3fd29f6b5ed104d895bbec84498cedf352d57\"" Jan 13 20:20:07.468588 containerd[1499]: time="2025-01-13T20:20:07.468557528Z" level=info msg="StartContainer for \"0e59f3efaa9b21295a1e70f222f3fd29f6b5ed104d895bbec84498cedf352d57\"" Jan 13 20:20:07.500766 systemd[1]: Started cri-containerd-4a3fd312a02dc321338e715ca48e0bc48aa412750b48598c437365216813e965.scope - libcontainer container 4a3fd312a02dc321338e715ca48e0bc48aa412750b48598c437365216813e965. Jan 13 20:20:07.513510 systemd[1]: Started cri-containerd-0e59f3efaa9b21295a1e70f222f3fd29f6b5ed104d895bbec84498cedf352d57.scope - libcontainer container 0e59f3efaa9b21295a1e70f222f3fd29f6b5ed104d895bbec84498cedf352d57. 
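
Behind the RunPodSandbox and CreateContainer-within-sandbox entries, kubelet drives containerd over the CRI gRPC API on the containerd socket. A sketch of that three-step flow (sandbox, container, start) with the cri-api client; the pod metadata and image tag are illustrative placeholders, and the image is assumed to be pulled already:

    package main

    import (
        "context"
        "fmt"
        "log"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
        runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
        conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
            grpc.WithTransportCredentials(insecure.NewCredentials()))
        if err != nil {
            log.Fatal(err)
        }
        defer conn.Close()
        rt := runtimeapi.NewRuntimeServiceClient(conn)
        ctx := context.Background()

        // 1. RunPodSandbox: the sandbox the coredns containers are created "within".
        podCfg := &runtimeapi.PodSandboxConfig{
            Metadata: &runtimeapi.PodSandboxMetadata{
                Name: "coredns-demo", Namespace: "kube-system", Uid: "demo-uid",
            },
        }
        sb, err := rt.RunPodSandbox(ctx, &runtimeapi.RunPodSandboxRequest{Config: podCfg})
        if err != nil {
            log.Fatal(err)
        }

        // 2. CreateContainer within that sandbox.
        created, err := rt.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
            PodSandboxId: sb.PodSandboxId,
            Config: &runtimeapi.ContainerConfig{
                Metadata: &runtimeapi.ContainerMetadata{Name: "coredns"},
                Image:    &runtimeapi.ImageSpec{Image: "registry.k8s.io/coredns/coredns:v1.11.1"},
            },
            SandboxConfig: podCfg,
        })
        if err != nil {
            log.Fatal(err)
        }

        // 3. StartContainer; containerd then logs "StartContainer ... returns successfully".
        if _, err = rt.StartContainer(ctx,
            &runtimeapi.StartContainerRequest{ContainerId: created.ContainerId}); err != nil {
            log.Fatal(err)
        }
        fmt.Println("sandbox:", sb.PodSandboxId, "container:", created.ContainerId)
    }
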
Jan 13 20:20:07.543304 containerd[1499]: time="2025-01-13T20:20:07.542959352Z" level=info msg="StartContainer for \"4a3fd312a02dc321338e715ca48e0bc48aa412750b48598c437365216813e965\" returns successfully" Jan 13 20:20:07.558349 containerd[1499]: time="2025-01-13T20:20:07.557893011Z" level=info msg="StartContainer for \"0e59f3efaa9b21295a1e70f222f3fd29f6b5ed104d895bbec84498cedf352d57\" returns successfully" Jan 13 20:20:07.595910 kubelet[2812]: I0113 20:20:07.595828 2812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-km7vh" podStartSLOduration=25.595812815 podStartE2EDuration="25.595812815s" podCreationTimestamp="2025-01-13 20:19:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 20:20:07.593517664 +0000 UTC m=+40.310948038" watchObservedRunningTime="2025-01-13 20:20:07.595812815 +0000 UTC m=+40.313243189" Jan 13 20:20:07.624253 kubelet[2812]: I0113 20:20:07.623304 2812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-5dfcf" podStartSLOduration=25.623275626 podStartE2EDuration="25.623275626s" podCreationTimestamp="2025-01-13 20:19:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 20:20:07.622111071 +0000 UTC m=+40.339541445" watchObservedRunningTime="2025-01-13 20:20:07.623275626 +0000 UTC m=+40.340706040" Jan 13 20:20:08.279329 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount712019882.mount: Deactivated successfully. Jan 13 20:23:08.983882 update_engine[1469]: I20250113 20:23:08.983758 1469 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Jan 13 20:23:08.983882 update_engine[1469]: I20250113 20:23:08.983847 1469 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Jan 13 20:23:08.988717 update_engine[1469]: I20250113 20:23:08.984175 1469 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Jan 13 20:23:08.988717 update_engine[1469]: I20250113 20:23:08.985119 1469 omaha_request_params.cc:62] Current group set to stable Jan 13 20:23:08.988717 update_engine[1469]: I20250113 20:23:08.985302 1469 update_attempter.cc:499] Already updated boot flags. Skipping. Jan 13 20:23:08.988717 update_engine[1469]: I20250113 20:23:08.985322 1469 update_attempter.cc:643] Scheduling an action processor start. 
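
The pod_startup_latency_tracker entries above encode simple arithmetic: podStartE2EDuration is observedRunningTime minus podCreationTimestamp, and podStartSLOduration is the same span minus the image-pull window (firstStartedPulling to lastFinishedPulling). That is why the coredns pods, whose pull timestamps are the zero time, report identical values for both, while cilium-p666v reports 7.14s SLO against 19.57s E2E. A small check using the cilium-p666v timestamps:

    package main

    import (
        "fmt"
        "log"
        "time"
    )

    func main() {
        const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
        parse := func(s string) time.Time {
            t, err := time.Parse(layout, s)
            if err != nil {
                log.Fatal(err)
            }
            return t
        }
        created := parse("2025-01-13 20:19:41 +0000 UTC")
        firstPull := parse("2025-01-13 20:19:43.724371146 +0000 UTC")
        lastPull := parse("2025-01-13 20:19:56.15411412 +0000 UTC")
        running := parse("2025-01-13 20:20:00.568720697 +0000 UTC")

        e2e := running.Sub(created)
        slo := e2e - lastPull.Sub(firstPull)
        fmt.Println(e2e) // 19.568720697s, matching podStartE2EDuration
        // ~7.138977s, matching podStartSLOduration; the last digits differ slightly
        // because kubelet subtracts the monotonic m=+ offsets, not the wall-clock strings.
        fmt.Println(slo)
    }
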
Jan 13 20:23:08.988717 update_engine[1469]: I20250113 20:23:08.985569 1469 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jan 13 20:23:08.988717 update_engine[1469]: I20250113 20:23:08.985658 1469 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Jan 13 20:23:08.988717 update_engine[1469]: I20250113 20:23:08.985852 1469 omaha_request_action.cc:271] Posting an Omaha request to disabled Jan 13 20:23:08.988717 update_engine[1469]: I20250113 20:23:08.985869 1469 omaha_request_action.cc:272] Request: Jan 13 20:23:08.988717 update_engine[1469]: Jan 13 20:23:08.988717 update_engine[1469]: Jan 13 20:23:08.988717 update_engine[1469]: Jan 13 20:23:08.988717 update_engine[1469]: Jan 13 20:23:08.988717 update_engine[1469]: Jan 13 20:23:08.988717 update_engine[1469]: Jan 13 20:23:08.988717 update_engine[1469]: Jan 13 20:23:08.988717 update_engine[1469]: Jan 13 20:23:08.988717 update_engine[1469]: I20250113 20:23:08.985878 1469 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 13 20:23:08.988717 update_engine[1469]: I20250113 20:23:08.988269 1469 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 13 20:23:08.989535 locksmithd[1507]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Jan 13 20:23:08.989946 update_engine[1469]: I20250113 20:23:08.988705 1469 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jan 13 20:23:08.989946 update_engine[1469]: E20250113 20:23:08.989708 1469 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 13 20:23:08.989946 update_engine[1469]: I20250113 20:23:08.989763 1469 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Jan 13 20:23:18.895032 update_engine[1469]: I20250113 20:23:18.894793 1469 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 13 20:23:18.895458 update_engine[1469]: I20250113 20:23:18.895419 1469 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 13 20:23:18.895881 update_engine[1469]: I20250113 20:23:18.895794 1469 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jan 13 20:23:18.896403 update_engine[1469]: E20250113 20:23:18.896237 1469 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 13 20:23:18.896403 update_engine[1469]: I20250113 20:23:18.896365 1469 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Jan 13 20:23:28.893362 update_engine[1469]: I20250113 20:23:28.893250 1469 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 13 20:23:28.894153 update_engine[1469]: I20250113 20:23:28.893538 1469 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 13 20:23:28.894153 update_engine[1469]: I20250113 20:23:28.893887 1469 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Jan 13 20:23:28.894501 update_engine[1469]: E20250113 20:23:28.894412 1469 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 13 20:23:28.894545 update_engine[1469]: I20250113 20:23:28.894497 1469 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Jan 13 20:23:38.886991 update_engine[1469]: I20250113 20:23:38.886809 1469 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 13 20:23:38.887693 update_engine[1469]: I20250113 20:23:38.887182 1469 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 13 20:23:38.887693 update_engine[1469]: I20250113 20:23:38.887528 1469 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jan 13 20:23:38.888917 update_engine[1469]: E20250113 20:23:38.888068 1469 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 13 20:23:38.888917 update_engine[1469]: I20250113 20:23:38.888157 1469 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Jan 13 20:23:38.888917 update_engine[1469]: I20250113 20:23:38.888175 1469 omaha_request_action.cc:617] Omaha request response: Jan 13 20:23:38.888917 update_engine[1469]: E20250113 20:23:38.888317 1469 omaha_request_action.cc:636] Omaha request network transfer failed. Jan 13 20:23:38.888917 update_engine[1469]: I20250113 20:23:38.888346 1469 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Jan 13 20:23:38.888917 update_engine[1469]: I20250113 20:23:38.888357 1469 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jan 13 20:23:38.888917 update_engine[1469]: I20250113 20:23:38.888365 1469 update_attempter.cc:306] Processing Done. Jan 13 20:23:38.888917 update_engine[1469]: E20250113 20:23:38.888384 1469 update_attempter.cc:619] Update failed. Jan 13 20:23:38.888917 update_engine[1469]: I20250113 20:23:38.888394 1469 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Jan 13 20:23:38.888917 update_engine[1469]: I20250113 20:23:38.888403 1469 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Jan 13 20:23:38.888917 update_engine[1469]: I20250113 20:23:38.888413 1469 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. Jan 13 20:23:38.888917 update_engine[1469]: I20250113 20:23:38.888509 1469 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jan 13 20:23:38.888917 update_engine[1469]: I20250113 20:23:38.888541 1469 omaha_request_action.cc:271] Posting an Omaha request to disabled Jan 13 20:23:38.888917 update_engine[1469]: I20250113 20:23:38.888551 1469 omaha_request_action.cc:272] Request: Jan 13 20:23:38.888917 update_engine[1469]: Jan 13 20:23:38.888917 update_engine[1469]: Jan 13 20:23:38.888917 update_engine[1469]: Jan 13 20:23:38.889781 update_engine[1469]: Jan 13 20:23:38.889781 update_engine[1469]: Jan 13 20:23:38.889781 update_engine[1469]: Jan 13 20:23:38.889781 update_engine[1469]: I20250113 20:23:38.888561 1469 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 13 20:23:38.889781 update_engine[1469]: I20250113 20:23:38.888779 1469 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 13 20:23:38.889781 update_engine[1469]: I20250113 20:23:38.889011 1469 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
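
The update_engine loop above is a plain fetch-with-retry: each attempt sets up a transfer, fails to resolve the host (the Omaha endpoint on this image is literally "disabled", per "Posting an Omaha request to disabled"), logs "No HTTP response, retry N", and waits roughly ten seconds before trying again. A rough Go equivalent; the retry count and delay mirror the log, but the function and its shape are otherwise illustrative:

    package main

    import (
        "bytes"
        "fmt"
        "net/http"
        "time"
    )

    // postWithRetries posts the request body, retrying on transport errors,
    // and gives up after the final attempt ("Transfer resulted in an error").
    func postWithRetries(url string, body []byte, retries int, delay time.Duration) (*http.Response, error) {
        client := &http.Client{Timeout: 10 * time.Second}
        var lastErr error
        for attempt := 1; attempt <= retries+1; attempt++ {
            resp, err := client.Post(url, "text/xml", bytes.NewReader(body))
            if err == nil {
                return resp, nil
            }
            lastErr = err
            if attempt <= retries {
                fmt.Printf("no HTTP response, retry %d\n", attempt)
                time.Sleep(delay)
            }
        }
        return nil, fmt.Errorf("transfer failed after %d retries: %w", retries, lastErr)
    }

    func main() {
        _, err := postWithRetries("http://disabled", []byte("<request/>"), 3, 10*time.Second)
        fmt.Println(err) // e.g. dial tcp: lookup disabled: no such host
    }

After the retries are exhausted the attempter converts the transport failure to kActionCodeOmahaErrorInHTTPResponse, reports the error event (which also cannot be delivered), and reschedules the next check, exactly as the "Next update check in 47m9s" entry shows.
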
Jan 13 20:23:38.889781 update_engine[1469]: E20250113 20:23:38.889442 1469 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 13 20:23:38.889781 update_engine[1469]: I20250113 20:23:38.889488 1469 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Jan 13 20:23:38.889781 update_engine[1469]: I20250113 20:23:38.889496 1469 omaha_request_action.cc:617] Omaha request response: Jan 13 20:23:38.889781 update_engine[1469]: I20250113 20:23:38.889503 1469 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jan 13 20:23:38.889781 update_engine[1469]: I20250113 20:23:38.889508 1469 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jan 13 20:23:38.889781 update_engine[1469]: I20250113 20:23:38.889513 1469 update_attempter.cc:306] Processing Done. Jan 13 20:23:38.889781 update_engine[1469]: I20250113 20:23:38.889519 1469 update_attempter.cc:310] Error event sent. Jan 13 20:23:38.889781 update_engine[1469]: I20250113 20:23:38.889528 1469 update_check_scheduler.cc:74] Next update check in 47m9s Jan 13 20:23:38.890118 locksmithd[1507]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Jan 13 20:23:38.890118 locksmithd[1507]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Jan 13 20:24:15.970595 systemd[1]: Started sshd@7-138.199.153.206:22-147.75.109.163:45634.service - OpenSSH per-connection server daemon (147.75.109.163:45634). Jan 13 20:24:16.971841 sshd[4233]: Accepted publickey for core from 147.75.109.163 port 45634 ssh2: RSA SHA256:deYc8PRIzQGYRfQ4zGwGGzIklxWaqt3H8qZh9jD9rBo Jan 13 20:24:16.974498 sshd-session[4233]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:24:16.980434 systemd-logind[1466]: New session 8 of user core. Jan 13 20:24:16.991514 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 13 20:24:17.772788 sshd[4235]: Connection closed by 147.75.109.163 port 45634 Jan 13 20:24:17.772655 sshd-session[4233]: pam_unix(sshd:session): session closed for user core Jan 13 20:24:17.777162 systemd[1]: sshd@7-138.199.153.206:22-147.75.109.163:45634.service: Deactivated successfully. Jan 13 20:24:17.781302 systemd[1]: session-8.scope: Deactivated successfully. Jan 13 20:24:17.784364 systemd-logind[1466]: Session 8 logged out. Waiting for processes to exit. Jan 13 20:24:17.785580 systemd-logind[1466]: Removed session 8. Jan 13 20:24:22.945709 systemd[1]: Started sshd@8-138.199.153.206:22-147.75.109.163:60564.service - OpenSSH per-connection server daemon (147.75.109.163:60564). Jan 13 20:24:23.932913 sshd[4248]: Accepted publickey for core from 147.75.109.163 port 60564 ssh2: RSA SHA256:deYc8PRIzQGYRfQ4zGwGGzIklxWaqt3H8qZh9jD9rBo Jan 13 20:24:23.935015 sshd-session[4248]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:24:23.939916 systemd-logind[1466]: New session 9 of user core. Jan 13 20:24:23.946529 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 13 20:24:24.683225 sshd[4250]: Connection closed by 147.75.109.163 port 60564 Jan 13 20:24:24.683894 sshd-session[4248]: pam_unix(sshd:session): session closed for user core Jan 13 20:24:24.688184 systemd[1]: sshd@8-138.199.153.206:22-147.75.109.163:60564.service: Deactivated successfully. 
Jan 13 20:24:24.690976 systemd[1]: session-9.scope: Deactivated successfully. Jan 13 20:24:24.694196 systemd-logind[1466]: Session 9 logged out. Waiting for processes to exit. Jan 13 20:24:24.695859 systemd-logind[1466]: Removed session 9. Jan 13 20:24:29.854554 systemd[1]: Started sshd@9-138.199.153.206:22-147.75.109.163:40224.service - OpenSSH per-connection server daemon (147.75.109.163:40224). Jan 13 20:24:30.828066 sshd[4265]: Accepted publickey for core from 147.75.109.163 port 40224 ssh2: RSA SHA256:deYc8PRIzQGYRfQ4zGwGGzIklxWaqt3H8qZh9jD9rBo Jan 13 20:24:30.830398 sshd-session[4265]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:24:30.836989 systemd-logind[1466]: New session 10 of user core. Jan 13 20:24:30.840411 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 13 20:24:31.577767 sshd[4267]: Connection closed by 147.75.109.163 port 40224 Jan 13 20:24:31.577512 sshd-session[4265]: pam_unix(sshd:session): session closed for user core Jan 13 20:24:31.583950 systemd[1]: sshd@9-138.199.153.206:22-147.75.109.163:40224.service: Deactivated successfully. Jan 13 20:24:31.587166 systemd[1]: session-10.scope: Deactivated successfully. Jan 13 20:24:31.588404 systemd-logind[1466]: Session 10 logged out. Waiting for processes to exit. Jan 13 20:24:31.589836 systemd-logind[1466]: Removed session 10. Jan 13 20:24:31.758843 systemd[1]: Started sshd@10-138.199.153.206:22-147.75.109.163:40238.service - OpenSSH per-connection server daemon (147.75.109.163:40238). Jan 13 20:24:32.742787 sshd[4279]: Accepted publickey for core from 147.75.109.163 port 40238 ssh2: RSA SHA256:deYc8PRIzQGYRfQ4zGwGGzIklxWaqt3H8qZh9jD9rBo Jan 13 20:24:32.745288 sshd-session[4279]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:24:32.751144 systemd-logind[1466]: New session 11 of user core. Jan 13 20:24:32.757643 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 13 20:24:33.541066 sshd[4281]: Connection closed by 147.75.109.163 port 40238 Jan 13 20:24:33.542358 sshd-session[4279]: pam_unix(sshd:session): session closed for user core Jan 13 20:24:33.549439 systemd-logind[1466]: Session 11 logged out. Waiting for processes to exit. Jan 13 20:24:33.549681 systemd[1]: sshd@10-138.199.153.206:22-147.75.109.163:40238.service: Deactivated successfully. Jan 13 20:24:33.553796 systemd[1]: session-11.scope: Deactivated successfully. Jan 13 20:24:33.558049 systemd-logind[1466]: Removed session 11. Jan 13 20:24:33.730608 systemd[1]: Started sshd@11-138.199.153.206:22-147.75.109.163:40246.service - OpenSSH per-connection server daemon (147.75.109.163:40246). Jan 13 20:24:34.726144 sshd[4290]: Accepted publickey for core from 147.75.109.163 port 40246 ssh2: RSA SHA256:deYc8PRIzQGYRfQ4zGwGGzIklxWaqt3H8qZh9jD9rBo Jan 13 20:24:34.728057 sshd-session[4290]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:24:34.734133 systemd-logind[1466]: New session 12 of user core. Jan 13 20:24:34.747680 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 13 20:24:35.493274 sshd[4292]: Connection closed by 147.75.109.163 port 40246 Jan 13 20:24:35.494050 sshd-session[4290]: pam_unix(sshd:session): session closed for user core Jan 13 20:24:35.500972 systemd[1]: sshd@11-138.199.153.206:22-147.75.109.163:40246.service: Deactivated successfully. Jan 13 20:24:35.504903 systemd[1]: session-12.scope: Deactivated successfully. Jan 13 20:24:35.506937 systemd-logind[1466]: Session 12 logged out. 
Waiting for processes to exit. Jan 13 20:24:35.509544 systemd-logind[1466]: Removed session 12. Jan 13 20:24:40.673949 systemd[1]: Started sshd@12-138.199.153.206:22-147.75.109.163:51108.service - OpenSSH per-connection server daemon (147.75.109.163:51108). Jan 13 20:24:41.677790 sshd[4304]: Accepted publickey for core from 147.75.109.163 port 51108 ssh2: RSA SHA256:deYc8PRIzQGYRfQ4zGwGGzIklxWaqt3H8qZh9jD9rBo Jan 13 20:24:41.679852 sshd-session[4304]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:24:41.685886 systemd-logind[1466]: New session 13 of user core. Jan 13 20:24:41.691542 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 13 20:24:42.444527 sshd[4306]: Connection closed by 147.75.109.163 port 51108 Jan 13 20:24:42.445354 sshd-session[4304]: pam_unix(sshd:session): session closed for user core Jan 13 20:24:42.452191 systemd[1]: sshd@12-138.199.153.206:22-147.75.109.163:51108.service: Deactivated successfully. Jan 13 20:24:42.454778 systemd[1]: session-13.scope: Deactivated successfully. Jan 13 20:24:42.458190 systemd-logind[1466]: Session 13 logged out. Waiting for processes to exit. Jan 13 20:24:42.461458 systemd-logind[1466]: Removed session 13. Jan 13 20:24:42.623687 systemd[1]: Started sshd@13-138.199.153.206:22-147.75.109.163:51110.service - OpenSSH per-connection server daemon (147.75.109.163:51110). Jan 13 20:24:43.609453 sshd[4317]: Accepted publickey for core from 147.75.109.163 port 51110 ssh2: RSA SHA256:deYc8PRIzQGYRfQ4zGwGGzIklxWaqt3H8qZh9jD9rBo Jan 13 20:24:43.611507 sshd-session[4317]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:24:43.616974 systemd-logind[1466]: New session 14 of user core. Jan 13 20:24:43.620485 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 13 20:24:44.415704 sshd[4319]: Connection closed by 147.75.109.163 port 51110 Jan 13 20:24:44.416597 sshd-session[4317]: pam_unix(sshd:session): session closed for user core Jan 13 20:24:44.421440 systemd[1]: sshd@13-138.199.153.206:22-147.75.109.163:51110.service: Deactivated successfully. Jan 13 20:24:44.423106 systemd[1]: session-14.scope: Deactivated successfully. Jan 13 20:24:44.424813 systemd-logind[1466]: Session 14 logged out. Waiting for processes to exit. Jan 13 20:24:44.426120 systemd-logind[1466]: Removed session 14. Jan 13 20:24:44.590545 systemd[1]: Started sshd@14-138.199.153.206:22-147.75.109.163:51114.service - OpenSSH per-connection server daemon (147.75.109.163:51114). Jan 13 20:24:45.576899 sshd[4330]: Accepted publickey for core from 147.75.109.163 port 51114 ssh2: RSA SHA256:deYc8PRIzQGYRfQ4zGwGGzIklxWaqt3H8qZh9jD9rBo Jan 13 20:24:45.578991 sshd-session[4330]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:24:45.583652 systemd-logind[1466]: New session 15 of user core. Jan 13 20:24:45.593443 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 13 20:24:47.824529 sshd[4332]: Connection closed by 147.75.109.163 port 51114 Jan 13 20:24:47.825294 sshd-session[4330]: pam_unix(sshd:session): session closed for user core Jan 13 20:24:47.833303 systemd[1]: sshd@14-138.199.153.206:22-147.75.109.163:51114.service: Deactivated successfully. Jan 13 20:24:47.839197 systemd[1]: session-15.scope: Deactivated successfully. Jan 13 20:24:47.841093 systemd-logind[1466]: Session 15 logged out. Waiting for processes to exit. Jan 13 20:24:47.842922 systemd-logind[1466]: Removed session 15. 
Jan 13 20:24:48.005533 systemd[1]: Started sshd@15-138.199.153.206:22-147.75.109.163:55558.service - OpenSSH per-connection server daemon (147.75.109.163:55558). Jan 13 20:24:48.998128 sshd[4348]: Accepted publickey for core from 147.75.109.163 port 55558 ssh2: RSA SHA256:deYc8PRIzQGYRfQ4zGwGGzIklxWaqt3H8qZh9jD9rBo Jan 13 20:24:49.000304 sshd-session[4348]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:24:49.007259 systemd-logind[1466]: New session 16 of user core. Jan 13 20:24:49.015540 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 13 20:24:49.883793 sshd[4350]: Connection closed by 147.75.109.163 port 55558 Jan 13 20:24:49.882946 sshd-session[4348]: pam_unix(sshd:session): session closed for user core Jan 13 20:24:49.887966 systemd[1]: sshd@15-138.199.153.206:22-147.75.109.163:55558.service: Deactivated successfully. Jan 13 20:24:49.891081 systemd[1]: session-16.scope: Deactivated successfully. Jan 13 20:24:49.892302 systemd-logind[1466]: Session 16 logged out. Waiting for processes to exit. Jan 13 20:24:49.893645 systemd-logind[1466]: Removed session 16. Jan 13 20:24:50.049578 systemd[1]: Started sshd@16-138.199.153.206:22-147.75.109.163:55568.service - OpenSSH per-connection server daemon (147.75.109.163:55568). Jan 13 20:24:51.036503 sshd[4359]: Accepted publickey for core from 147.75.109.163 port 55568 ssh2: RSA SHA256:deYc8PRIzQGYRfQ4zGwGGzIklxWaqt3H8qZh9jD9rBo Jan 13 20:24:51.040045 sshd-session[4359]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:24:51.047310 systemd-logind[1466]: New session 17 of user core. Jan 13 20:24:51.051557 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 13 20:24:51.782410 sshd[4361]: Connection closed by 147.75.109.163 port 55568 Jan 13 20:24:51.783329 sshd-session[4359]: pam_unix(sshd:session): session closed for user core Jan 13 20:24:51.790406 systemd[1]: sshd@16-138.199.153.206:22-147.75.109.163:55568.service: Deactivated successfully. Jan 13 20:24:51.795226 systemd[1]: session-17.scope: Deactivated successfully. Jan 13 20:24:51.796297 systemd-logind[1466]: Session 17 logged out. Waiting for processes to exit. Jan 13 20:24:51.797590 systemd-logind[1466]: Removed session 17. Jan 13 20:24:56.966817 systemd[1]: Started sshd@17-138.199.153.206:22-147.75.109.163:55580.service - OpenSSH per-connection server daemon (147.75.109.163:55580). Jan 13 20:24:57.961783 sshd[4375]: Accepted publickey for core from 147.75.109.163 port 55580 ssh2: RSA SHA256:deYc8PRIzQGYRfQ4zGwGGzIklxWaqt3H8qZh9jD9rBo Jan 13 20:24:57.963815 sshd-session[4375]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:24:57.968356 systemd-logind[1466]: New session 18 of user core. Jan 13 20:24:57.975501 systemd[1]: Started session-18.scope - Session 18 of User core. Jan 13 20:24:58.724066 sshd[4377]: Connection closed by 147.75.109.163 port 55580 Jan 13 20:24:58.724640 sshd-session[4375]: pam_unix(sshd:session): session closed for user core Jan 13 20:24:58.729576 systemd[1]: sshd@17-138.199.153.206:22-147.75.109.163:55580.service: Deactivated successfully. Jan 13 20:24:58.733583 systemd[1]: session-18.scope: Deactivated successfully. Jan 13 20:24:58.734913 systemd-logind[1466]: Session 18 logged out. Waiting for processes to exit. Jan 13 20:24:58.735783 systemd-logind[1466]: Removed session 18. 
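
Each sshd@N-<local>:22-<peer>:<port>.service above is a per-connection instance of socket-activated sshd: publickey auth as user core, a session-N.scope for the logind session, and unit teardown when the connection closes. A client-side sketch of one such session using golang.org/x/crypto/ssh; the key path is a placeholder, and the host-key check is deliberately lax for the demo:

    package main

    import (
        "log"
        "os"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        keyBytes, err := os.ReadFile("/home/core/.ssh/id_rsa") // placeholder key path
        if err != nil {
            log.Fatal(err)
        }
        signer, err := ssh.ParsePrivateKey(keyBytes)
        if err != nil {
            log.Fatal(err)
        }
        cfg := &ssh.ClientConfig{
            User:            "core",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // demo only; pin the host key in practice
        }
        // Dial triggers the "Accepted publickey for core" entry server-side.
        client, err := ssh.Dial("tcp", "138.199.153.206:22", cfg)
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()

        sess, err := client.NewSession()
        if err != nil {
            log.Fatal(err)
        }
        defer sess.Close() // server then logs "session closed" and removes the session
        out, err := sess.CombinedOutput("uptime")
        if err != nil {
            log.Fatal(err)
        }
        os.Stdout.Write(out)
    }
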
Jan 13 20:25:03.901742 systemd[1]: Started sshd@18-138.199.153.206:22-147.75.109.163:56424.service - OpenSSH per-connection server daemon (147.75.109.163:56424). Jan 13 20:25:04.900075 sshd[4389]: Accepted publickey for core from 147.75.109.163 port 56424 ssh2: RSA SHA256:deYc8PRIzQGYRfQ4zGwGGzIklxWaqt3H8qZh9jD9rBo Jan 13 20:25:04.902234 sshd-session[4389]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:25:04.908396 systemd-logind[1466]: New session 19 of user core. Jan 13 20:25:04.914522 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 13 20:25:05.663839 sshd[4391]: Connection closed by 147.75.109.163 port 56424 Jan 13 20:25:05.664589 sshd-session[4389]: pam_unix(sshd:session): session closed for user core Jan 13 20:25:05.670643 systemd[1]: sshd@18-138.199.153.206:22-147.75.109.163:56424.service: Deactivated successfully. Jan 13 20:25:05.672919 systemd[1]: session-19.scope: Deactivated successfully. Jan 13 20:25:05.675197 systemd-logind[1466]: Session 19 logged out. Waiting for processes to exit. Jan 13 20:25:05.677864 systemd-logind[1466]: Removed session 19. Jan 13 20:25:05.839772 systemd[1]: Started sshd@19-138.199.153.206:22-147.75.109.163:56426.service - OpenSSH per-connection server daemon (147.75.109.163:56426). Jan 13 20:25:06.825984 sshd[4401]: Accepted publickey for core from 147.75.109.163 port 56426 ssh2: RSA SHA256:deYc8PRIzQGYRfQ4zGwGGzIklxWaqt3H8qZh9jD9rBo Jan 13 20:25:06.828259 sshd-session[4401]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:25:06.833964 systemd-logind[1466]: New session 20 of user core. Jan 13 20:25:06.839623 systemd[1]: Started session-20.scope - Session 20 of User core. Jan 13 20:25:09.804325 containerd[1499]: time="2025-01-13T20:25:09.804026440Z" level=info msg="StopContainer for \"1270686c592396518867c1503a4e349a183c9f79ce7a2d7abe110e492e72071c\" with timeout 30 (s)" Jan 13 20:25:09.808074 systemd[1]: run-containerd-runc-k8s.io-adf52a9d81d1bb4508c229cefd3cf81c90435c73d66e2c3aa4e8c92ee230395b-runc.YoGANR.mount: Deactivated successfully. Jan 13 20:25:09.810335 containerd[1499]: time="2025-01-13T20:25:09.809348200Z" level=info msg="Stop container \"1270686c592396518867c1503a4e349a183c9f79ce7a2d7abe110e492e72071c\" with signal terminated" Jan 13 20:25:09.826113 systemd[1]: cri-containerd-1270686c592396518867c1503a4e349a183c9f79ce7a2d7abe110e492e72071c.scope: Deactivated successfully. Jan 13 20:25:09.834814 containerd[1499]: time="2025-01-13T20:25:09.834604268Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 13 20:25:09.849248 containerd[1499]: time="2025-01-13T20:25:09.849019901Z" level=info msg="StopContainer for \"adf52a9d81d1bb4508c229cefd3cf81c90435c73d66e2c3aa4e8c92ee230395b\" with timeout 2 (s)" Jan 13 20:25:09.850011 containerd[1499]: time="2025-01-13T20:25:09.849700726Z" level=info msg="Stop container \"adf52a9d81d1bb4508c229cefd3cf81c90435c73d66e2c3aa4e8c92ee230395b\" with signal terminated" Jan 13 20:25:09.861605 systemd-networkd[1386]: lxc_health: Link DOWN Jan 13 20:25:09.861620 systemd-networkd[1386]: lxc_health: Lost carrier Jan 13 20:25:09.866422 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1270686c592396518867c1503a4e349a183c9f79ce7a2d7abe110e492e72071c-rootfs.mount: Deactivated successfully. 
Jan 13 20:25:09.884200 systemd[1]: cri-containerd-adf52a9d81d1bb4508c229cefd3cf81c90435c73d66e2c3aa4e8c92ee230395b.scope: Deactivated successfully. Jan 13 20:25:09.885037 systemd[1]: cri-containerd-adf52a9d81d1bb4508c229cefd3cf81c90435c73d66e2c3aa4e8c92ee230395b.scope: Consumed 8.006s CPU time. Jan 13 20:25:09.887048 containerd[1499]: time="2025-01-13T20:25:09.886850564Z" level=info msg="shim disconnected" id=1270686c592396518867c1503a4e349a183c9f79ce7a2d7abe110e492e72071c namespace=k8s.io Jan 13 20:25:09.887618 containerd[1499]: time="2025-01-13T20:25:09.887327434Z" level=warning msg="cleaning up after shim disconnected" id=1270686c592396518867c1503a4e349a183c9f79ce7a2d7abe110e492e72071c namespace=k8s.io Jan 13 20:25:09.887618 containerd[1499]: time="2025-01-13T20:25:09.887367233Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 20:25:09.918382 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-adf52a9d81d1bb4508c229cefd3cf81c90435c73d66e2c3aa4e8c92ee230395b-rootfs.mount: Deactivated successfully. Jan 13 20:25:09.923773 containerd[1499]: time="2025-01-13T20:25:09.923551693Z" level=info msg="StopContainer for \"1270686c592396518867c1503a4e349a183c9f79ce7a2d7abe110e492e72071c\" returns successfully" Jan 13 20:25:09.926503 containerd[1499]: time="2025-01-13T20:25:09.926277991Z" level=info msg="shim disconnected" id=adf52a9d81d1bb4508c229cefd3cf81c90435c73d66e2c3aa4e8c92ee230395b namespace=k8s.io Jan 13 20:25:09.926503 containerd[1499]: time="2025-01-13T20:25:09.926331310Z" level=warning msg="cleaning up after shim disconnected" id=adf52a9d81d1bb4508c229cefd3cf81c90435c73d66e2c3aa4e8c92ee230395b namespace=k8s.io Jan 13 20:25:09.926503 containerd[1499]: time="2025-01-13T20:25:09.926339790Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 20:25:09.927828 containerd[1499]: time="2025-01-13T20:25:09.927540483Z" level=info msg="StopPodSandbox for \"339c871fa399ea9a51bfedf5b55cff45850fda5f209e9b7d71458ddae7c74c58\"" Jan 13 20:25:09.928238 containerd[1499]: time="2025-01-13T20:25:09.928049551Z" level=info msg="Container to stop \"1270686c592396518867c1503a4e349a183c9f79ce7a2d7abe110e492e72071c\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 13 20:25:09.933093 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-339c871fa399ea9a51bfedf5b55cff45850fda5f209e9b7d71458ddae7c74c58-shm.mount: Deactivated successfully. Jan 13 20:25:09.944283 systemd[1]: cri-containerd-339c871fa399ea9a51bfedf5b55cff45850fda5f209e9b7d71458ddae7c74c58.scope: Deactivated successfully. 
Jan 13 20:25:09.955790 containerd[1499]: time="2025-01-13T20:25:09.955599007Z" level=warning msg="cleanup warnings time=\"2025-01-13T20:25:09Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jan 13 20:25:09.958910 containerd[1499]: time="2025-01-13T20:25:09.958847814Z" level=info msg="StopContainer for \"adf52a9d81d1bb4508c229cefd3cf81c90435c73d66e2c3aa4e8c92ee230395b\" returns successfully" Jan 13 20:25:09.959534 containerd[1499]: time="2025-01-13T20:25:09.959423441Z" level=info msg="StopPodSandbox for \"6d8c7ddde98713539d3ad8f1da826cc1307b35c498c1de6cbd3d851ed960639d\"" Jan 13 20:25:09.959712 containerd[1499]: time="2025-01-13T20:25:09.959635636Z" level=info msg="Container to stop \"7abd449e1ed5e5f8bd3659a837780e519287ae92aac359c19e74c24dd118fbe1\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 13 20:25:09.959712 containerd[1499]: time="2025-01-13T20:25:09.959652595Z" level=info msg="Container to stop \"adf52a9d81d1bb4508c229cefd3cf81c90435c73d66e2c3aa4e8c92ee230395b\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 13 20:25:09.959712 containerd[1499]: time="2025-01-13T20:25:09.959662355Z" level=info msg="Container to stop \"ce903243f9db7fe9096c28282ac48d6f8731415fc3f5f6b0d3d96dcfdd711a2f\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 13 20:25:09.959882 containerd[1499]: time="2025-01-13T20:25:09.959805952Z" level=info msg="Container to stop \"98ede64caff9cd400c4e20aeba20fe811c54a94d20689f515c259c51431e76e9\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 13 20:25:09.959882 containerd[1499]: time="2025-01-13T20:25:09.959823752Z" level=info msg="Container to stop \"4f838d08f0d0887d0398edd41674d9ea34b3e777b22be5e89fbe0e12664b9a1c\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 13 20:25:09.969077 systemd[1]: cri-containerd-6d8c7ddde98713539d3ad8f1da826cc1307b35c498c1de6cbd3d851ed960639d.scope: Deactivated successfully. 
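
The teardown above follows CRI's two-phase stop: StopContainer sends the stop signal ("Stop container ... with signal terminated") and waits out the grace period (30 s for the operator container, 2 s for cilium-agent); once the task exits, the scope deactivation and "shim disconnected" entries follow, and StopPodSandbox then requires every listed container to already be in a running or unknown (here: exited) state. A sketch of the signal-wait-escalate pattern against the containerd Go client; the container ID is a truncated placeholder:

    package main

    import (
        "context"
        "log"
        "syscall"
        "time"

        "github.com/containerd/containerd"
        "github.com/containerd/containerd/namespaces"
    )

    // stopContainer sends SIGTERM, waits up to the grace period, and
    // escalates to SIGKILL if the task has not exited by then.
    func stopContainer(ctx context.Context, client *containerd.Client, id string, grace time.Duration) error {
        container, err := client.LoadContainer(ctx, id)
        if err != nil {
            return err
        }
        task, err := container.Task(ctx, nil)
        if err != nil {
            return err
        }
        exitC, err := task.Wait(ctx)
        if err != nil {
            return err
        }
        if err := task.Kill(ctx, syscall.SIGTERM); err != nil {
            return err
        }
        select {
        case <-exitC: // exited within the grace period
        case <-time.After(grace):
            if err := task.Kill(ctx, syscall.SIGKILL); err != nil {
                return err
            }
            <-exitC
        }
        _, err = task.Delete(ctx) // shim cleanup; cf. the "shim disconnected" entries
        return err
    }

    func main() {
        client, err := containerd.New("/run/containerd/containerd.sock")
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()
        ctx := namespaces.WithNamespace(context.Background(), "k8s.io")
        if err := stopContainer(ctx, client, "1270686c5923-placeholder", 30*time.Second); err != nil {
            log.Fatal(err)
        }
    }
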
Jan 13 20:25:09.989292 containerd[1499]: time="2025-01-13T20:25:09.989195726Z" level=info msg="shim disconnected" id=339c871fa399ea9a51bfedf5b55cff45850fda5f209e9b7d71458ddae7c74c58 namespace=k8s.io Jan 13 20:25:09.989292 containerd[1499]: time="2025-01-13T20:25:09.989266565Z" level=warning msg="cleaning up after shim disconnected" id=339c871fa399ea9a51bfedf5b55cff45850fda5f209e9b7d71458ddae7c74c58 namespace=k8s.io Jan 13 20:25:09.989292 containerd[1499]: time="2025-01-13T20:25:09.989274964Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 20:25:10.009946 containerd[1499]: time="2025-01-13T20:25:10.009614423Z" level=info msg="shim disconnected" id=6d8c7ddde98713539d3ad8f1da826cc1307b35c498c1de6cbd3d851ed960639d namespace=k8s.io Jan 13 20:25:10.009946 containerd[1499]: time="2025-01-13T20:25:10.009686422Z" level=warning msg="cleaning up after shim disconnected" id=6d8c7ddde98713539d3ad8f1da826cc1307b35c498c1de6cbd3d851ed960639d namespace=k8s.io Jan 13 20:25:10.009946 containerd[1499]: time="2025-01-13T20:25:10.009697462Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 20:25:10.014803 containerd[1499]: time="2025-01-13T20:25:10.014444194Z" level=info msg="TearDown network for sandbox \"339c871fa399ea9a51bfedf5b55cff45850fda5f209e9b7d71458ddae7c74c58\" successfully" Jan 13 20:25:10.014803 containerd[1499]: time="2025-01-13T20:25:10.014570751Z" level=info msg="StopPodSandbox for \"339c871fa399ea9a51bfedf5b55cff45850fda5f209e9b7d71458ddae7c74c58\" returns successfully" Jan 13 20:25:10.033343 containerd[1499]: time="2025-01-13T20:25:10.033199928Z" level=info msg="TearDown network for sandbox \"6d8c7ddde98713539d3ad8f1da826cc1307b35c498c1de6cbd3d851ed960639d\" successfully" Jan 13 20:25:10.033343 containerd[1499]: time="2025-01-13T20:25:10.033245927Z" level=info msg="StopPodSandbox for \"6d8c7ddde98713539d3ad8f1da826cc1307b35c498c1de6cbd3d851ed960639d\" returns successfully" Jan 13 20:25:10.122130 kubelet[2812]: I0113 20:25:10.121935 2812 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c5d96d57-bd3c-4987-a6e1-dc2d1179d4fa-cilium-config-path\") pod \"c5d96d57-bd3c-4987-a6e1-dc2d1179d4fa\" (UID: \"c5d96d57-bd3c-4987-a6e1-dc2d1179d4fa\") " Jan 13 20:25:10.122130 kubelet[2812]: I0113 20:25:10.122027 2812 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w8kz7\" (UniqueName: \"kubernetes.io/projected/c5d96d57-bd3c-4987-a6e1-dc2d1179d4fa-kube-api-access-w8kz7\") pod \"c5d96d57-bd3c-4987-a6e1-dc2d1179d4fa\" (UID: \"c5d96d57-bd3c-4987-a6e1-dc2d1179d4fa\") " Jan 13 20:25:10.127730 kubelet[2812]: I0113 20:25:10.127684 2812 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c5d96d57-bd3c-4987-a6e1-dc2d1179d4fa-kube-api-access-w8kz7" (OuterVolumeSpecName: "kube-api-access-w8kz7") pod "c5d96d57-bd3c-4987-a6e1-dc2d1179d4fa" (UID: "c5d96d57-bd3c-4987-a6e1-dc2d1179d4fa"). InnerVolumeSpecName "kube-api-access-w8kz7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 13 20:25:10.128331 kubelet[2812]: I0113 20:25:10.128183 2812 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c5d96d57-bd3c-4987-a6e1-dc2d1179d4fa-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "c5d96d57-bd3c-4987-a6e1-dc2d1179d4fa" (UID: "c5d96d57-bd3c-4987-a6e1-dc2d1179d4fa"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 13 20:25:10.222510 kubelet[2812]: I0113 20:25:10.222447 2812 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/6092ab0d-be62-40fb-9b18-c219712a481a-hubble-tls\") pod \"6092ab0d-be62-40fb-9b18-c219712a481a\" (UID: \"6092ab0d-be62-40fb-9b18-c219712a481a\") " Jan 13 20:25:10.225243 kubelet[2812]: I0113 20:25:10.222997 2812 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6092ab0d-be62-40fb-9b18-c219712a481a-cilium-config-path\") pod \"6092ab0d-be62-40fb-9b18-c219712a481a\" (UID: \"6092ab0d-be62-40fb-9b18-c219712a481a\") " Jan 13 20:25:10.225243 kubelet[2812]: I0113 20:25:10.223037 2812 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7nmp6\" (UniqueName: \"kubernetes.io/projected/6092ab0d-be62-40fb-9b18-c219712a481a-kube-api-access-7nmp6\") pod \"6092ab0d-be62-40fb-9b18-c219712a481a\" (UID: \"6092ab0d-be62-40fb-9b18-c219712a481a\") " Jan 13 20:25:10.225243 kubelet[2812]: I0113 20:25:10.223063 2812 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/6092ab0d-be62-40fb-9b18-c219712a481a-host-proc-sys-kernel\") pod \"6092ab0d-be62-40fb-9b18-c219712a481a\" (UID: \"6092ab0d-be62-40fb-9b18-c219712a481a\") " Jan 13 20:25:10.225243 kubelet[2812]: I0113 20:25:10.223089 2812 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/6092ab0d-be62-40fb-9b18-c219712a481a-cni-path\") pod \"6092ab0d-be62-40fb-9b18-c219712a481a\" (UID: \"6092ab0d-be62-40fb-9b18-c219712a481a\") " Jan 13 20:25:10.225243 kubelet[2812]: I0113 20:25:10.223112 2812 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/6092ab0d-be62-40fb-9b18-c219712a481a-host-proc-sys-net\") pod \"6092ab0d-be62-40fb-9b18-c219712a481a\" (UID: \"6092ab0d-be62-40fb-9b18-c219712a481a\") " Jan 13 20:25:10.225243 kubelet[2812]: I0113 20:25:10.223147 2812 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6092ab0d-be62-40fb-9b18-c219712a481a-xtables-lock\") pod \"6092ab0d-be62-40fb-9b18-c219712a481a\" (UID: \"6092ab0d-be62-40fb-9b18-c219712a481a\") " Jan 13 20:25:10.225672 kubelet[2812]: I0113 20:25:10.223175 2812 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6092ab0d-be62-40fb-9b18-c219712a481a-etc-cni-netd\") pod \"6092ab0d-be62-40fb-9b18-c219712a481a\" (UID: \"6092ab0d-be62-40fb-9b18-c219712a481a\") " Jan 13 20:25:10.225672 kubelet[2812]: I0113 20:25:10.223224 2812 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/6092ab0d-be62-40fb-9b18-c219712a481a-clustermesh-secrets\") pod \"6092ab0d-be62-40fb-9b18-c219712a481a\" (UID: \"6092ab0d-be62-40fb-9b18-c219712a481a\") " Jan 13 20:25:10.225672 kubelet[2812]: I0113 20:25:10.223250 2812 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/6092ab0d-be62-40fb-9b18-c219712a481a-hostproc\") pod \"6092ab0d-be62-40fb-9b18-c219712a481a\" (UID: 
\"6092ab0d-be62-40fb-9b18-c219712a481a\") " Jan 13 20:25:10.225672 kubelet[2812]: I0113 20:25:10.223277 2812 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/6092ab0d-be62-40fb-9b18-c219712a481a-cilium-run\") pod \"6092ab0d-be62-40fb-9b18-c219712a481a\" (UID: \"6092ab0d-be62-40fb-9b18-c219712a481a\") " Jan 13 20:25:10.225672 kubelet[2812]: I0113 20:25:10.223300 2812 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/6092ab0d-be62-40fb-9b18-c219712a481a-cilium-cgroup\") pod \"6092ab0d-be62-40fb-9b18-c219712a481a\" (UID: \"6092ab0d-be62-40fb-9b18-c219712a481a\") " Jan 13 20:25:10.225672 kubelet[2812]: I0113 20:25:10.223329 2812 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6092ab0d-be62-40fb-9b18-c219712a481a-lib-modules\") pod \"6092ab0d-be62-40fb-9b18-c219712a481a\" (UID: \"6092ab0d-be62-40fb-9b18-c219712a481a\") " Jan 13 20:25:10.225926 kubelet[2812]: I0113 20:25:10.223350 2812 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/6092ab0d-be62-40fb-9b18-c219712a481a-bpf-maps\") pod \"6092ab0d-be62-40fb-9b18-c219712a481a\" (UID: \"6092ab0d-be62-40fb-9b18-c219712a481a\") " Jan 13 20:25:10.225926 kubelet[2812]: I0113 20:25:10.223404 2812 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c5d96d57-bd3c-4987-a6e1-dc2d1179d4fa-cilium-config-path\") on node \"ci-4152-2-0-d-1c931fd560\" DevicePath \"\"" Jan 13 20:25:10.225926 kubelet[2812]: I0113 20:25:10.223421 2812 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-w8kz7\" (UniqueName: \"kubernetes.io/projected/c5d96d57-bd3c-4987-a6e1-dc2d1179d4fa-kube-api-access-w8kz7\") on node \"ci-4152-2-0-d-1c931fd560\" DevicePath \"\"" Jan 13 20:25:10.225926 kubelet[2812]: I0113 20:25:10.223458 2812 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6092ab0d-be62-40fb-9b18-c219712a481a-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "6092ab0d-be62-40fb-9b18-c219712a481a" (UID: "6092ab0d-be62-40fb-9b18-c219712a481a"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 20:25:10.227446 kubelet[2812]: I0113 20:25:10.227402 2812 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6092ab0d-be62-40fb-9b18-c219712a481a-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "6092ab0d-be62-40fb-9b18-c219712a481a" (UID: "6092ab0d-be62-40fb-9b18-c219712a481a"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 13 20:25:10.227639 kubelet[2812]: I0113 20:25:10.227622 2812 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6092ab0d-be62-40fb-9b18-c219712a481a-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "6092ab0d-be62-40fb-9b18-c219712a481a" (UID: "6092ab0d-be62-40fb-9b18-c219712a481a"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 20:25:10.227710 kubelet[2812]: I0113 20:25:10.227698 2812 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6092ab0d-be62-40fb-9b18-c219712a481a-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "6092ab0d-be62-40fb-9b18-c219712a481a" (UID: "6092ab0d-be62-40fb-9b18-c219712a481a"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 20:25:10.227802 kubelet[2812]: I0113 20:25:10.227786 2812 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6092ab0d-be62-40fb-9b18-c219712a481a-cni-path" (OuterVolumeSpecName: "cni-path") pod "6092ab0d-be62-40fb-9b18-c219712a481a" (UID: "6092ab0d-be62-40fb-9b18-c219712a481a"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 20:25:10.227868 kubelet[2812]: I0113 20:25:10.227856 2812 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6092ab0d-be62-40fb-9b18-c219712a481a-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "6092ab0d-be62-40fb-9b18-c219712a481a" (UID: "6092ab0d-be62-40fb-9b18-c219712a481a"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 20:25:10.227930 kubelet[2812]: I0113 20:25:10.227919 2812 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6092ab0d-be62-40fb-9b18-c219712a481a-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "6092ab0d-be62-40fb-9b18-c219712a481a" (UID: "6092ab0d-be62-40fb-9b18-c219712a481a"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 20:25:10.227993 kubelet[2812]: I0113 20:25:10.227982 2812 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6092ab0d-be62-40fb-9b18-c219712a481a-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "6092ab0d-be62-40fb-9b18-c219712a481a" (UID: "6092ab0d-be62-40fb-9b18-c219712a481a"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 20:25:10.230312 kubelet[2812]: I0113 20:25:10.230200 2812 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6092ab0d-be62-40fb-9b18-c219712a481a-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "6092ab0d-be62-40fb-9b18-c219712a481a" (UID: "6092ab0d-be62-40fb-9b18-c219712a481a"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 20:25:10.230433 kubelet[2812]: I0113 20:25:10.230327 2812 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6092ab0d-be62-40fb-9b18-c219712a481a-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "6092ab0d-be62-40fb-9b18-c219712a481a" (UID: "6092ab0d-be62-40fb-9b18-c219712a481a"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 20:25:10.230794 kubelet[2812]: I0113 20:25:10.230767 2812 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6092ab0d-be62-40fb-9b18-c219712a481a-hostproc" (OuterVolumeSpecName: "hostproc") pod "6092ab0d-be62-40fb-9b18-c219712a481a" (UID: "6092ab0d-be62-40fb-9b18-c219712a481a"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 20:25:10.230882 kubelet[2812]: I0113 20:25:10.230858 2812 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6092ab0d-be62-40fb-9b18-c219712a481a-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "6092ab0d-be62-40fb-9b18-c219712a481a" (UID: "6092ab0d-be62-40fb-9b18-c219712a481a"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 13 20:25:10.231764 kubelet[2812]: I0113 20:25:10.231720 2812 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6092ab0d-be62-40fb-9b18-c219712a481a-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "6092ab0d-be62-40fb-9b18-c219712a481a" (UID: "6092ab0d-be62-40fb-9b18-c219712a481a"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 13 20:25:10.231997 kubelet[2812]: I0113 20:25:10.231947 2812 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6092ab0d-be62-40fb-9b18-c219712a481a-kube-api-access-7nmp6" (OuterVolumeSpecName: "kube-api-access-7nmp6") pod "6092ab0d-be62-40fb-9b18-c219712a481a" (UID: "6092ab0d-be62-40fb-9b18-c219712a481a"). InnerVolumeSpecName "kube-api-access-7nmp6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 13 20:25:10.323855 kubelet[2812]: I0113 20:25:10.323692 2812 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/6092ab0d-be62-40fb-9b18-c219712a481a-clustermesh-secrets\") on node \"ci-4152-2-0-d-1c931fd560\" DevicePath \"\"" Jan 13 20:25:10.323855 kubelet[2812]: I0113 20:25:10.323788 2812 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6092ab0d-be62-40fb-9b18-c219712a481a-xtables-lock\") on node \"ci-4152-2-0-d-1c931fd560\" DevicePath \"\"" Jan 13 20:25:10.323855 kubelet[2812]: I0113 20:25:10.323825 2812 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6092ab0d-be62-40fb-9b18-c219712a481a-etc-cni-netd\") on node \"ci-4152-2-0-d-1c931fd560\" DevicePath \"\"" Jan 13 20:25:10.323855 kubelet[2812]: I0113 20:25:10.323849 2812 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/6092ab0d-be62-40fb-9b18-c219712a481a-hostproc\") on node \"ci-4152-2-0-d-1c931fd560\" DevicePath \"\"" Jan 13 20:25:10.323855 kubelet[2812]: I0113 20:25:10.323869 2812 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/6092ab0d-be62-40fb-9b18-c219712a481a-cilium-run\") on node \"ci-4152-2-0-d-1c931fd560\" DevicePath \"\"" Jan 13 20:25:10.324133 kubelet[2812]: I0113 20:25:10.323887 2812 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/6092ab0d-be62-40fb-9b18-c219712a481a-cilium-cgroup\") on node \"ci-4152-2-0-d-1c931fd560\" DevicePath \"\"" Jan 13 20:25:10.324133 kubelet[2812]: I0113 20:25:10.323948 2812 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/6092ab0d-be62-40fb-9b18-c219712a481a-bpf-maps\") on node \"ci-4152-2-0-d-1c931fd560\" DevicePath \"\"" Jan 13 20:25:10.324133 kubelet[2812]: I0113 20:25:10.323967 2812 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/6092ab0d-be62-40fb-9b18-c219712a481a-lib-modules\") on node \"ci-4152-2-0-d-1c931fd560\" DevicePath \"\"" Jan 13 20:25:10.324133 kubelet[2812]: I0113 20:25:10.323986 2812 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6092ab0d-be62-40fb-9b18-c219712a481a-cilium-config-path\") on node \"ci-4152-2-0-d-1c931fd560\" DevicePath \"\"" Jan 13 20:25:10.324133 kubelet[2812]: I0113 20:25:10.324006 2812 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-7nmp6\" (UniqueName: \"kubernetes.io/projected/6092ab0d-be62-40fb-9b18-c219712a481a-kube-api-access-7nmp6\") on node \"ci-4152-2-0-d-1c931fd560\" DevicePath \"\"" Jan 13 20:25:10.324133 kubelet[2812]: I0113 20:25:10.324026 2812 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/6092ab0d-be62-40fb-9b18-c219712a481a-hubble-tls\") on node \"ci-4152-2-0-d-1c931fd560\" DevicePath \"\"" Jan 13 20:25:10.324133 kubelet[2812]: I0113 20:25:10.324045 2812 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/6092ab0d-be62-40fb-9b18-c219712a481a-cni-path\") on node \"ci-4152-2-0-d-1c931fd560\" DevicePath \"\"" Jan 13 20:25:10.324133 kubelet[2812]: I0113 20:25:10.324063 2812 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/6092ab0d-be62-40fb-9b18-c219712a481a-host-proc-sys-kernel\") on node \"ci-4152-2-0-d-1c931fd560\" DevicePath \"\"" Jan 13 20:25:10.324451 kubelet[2812]: I0113 20:25:10.324081 2812 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/6092ab0d-be62-40fb-9b18-c219712a481a-host-proc-sys-net\") on node \"ci-4152-2-0-d-1c931fd560\" DevicePath \"\"" Jan 13 20:25:10.329032 kubelet[2812]: I0113 20:25:10.328805 2812 scope.go:117] "RemoveContainer" containerID="adf52a9d81d1bb4508c229cefd3cf81c90435c73d66e2c3aa4e8c92ee230395b" Jan 13 20:25:10.332873 containerd[1499]: time="2025-01-13T20:25:10.331878233Z" level=info msg="RemoveContainer for \"adf52a9d81d1bb4508c229cefd3cf81c90435c73d66e2c3aa4e8c92ee230395b\"" Jan 13 20:25:10.338246 systemd[1]: Removed slice kubepods-burstable-pod6092ab0d_be62_40fb_9b18_c219712a481a.slice - libcontainer container kubepods-burstable-pod6092ab0d_be62_40fb_9b18_c219712a481a.slice. Jan 13 20:25:10.338367 systemd[1]: kubepods-burstable-pod6092ab0d_be62_40fb_9b18_c219712a481a.slice: Consumed 8.099s CPU time. Jan 13 20:25:10.341219 containerd[1499]: time="2025-01-13T20:25:10.340858069Z" level=info msg="RemoveContainer for \"adf52a9d81d1bb4508c229cefd3cf81c90435c73d66e2c3aa4e8c92ee230395b\" returns successfully" Jan 13 20:25:10.341352 kubelet[2812]: I0113 20:25:10.341288 2812 scope.go:117] "RemoveContainer" containerID="7abd449e1ed5e5f8bd3659a837780e519287ae92aac359c19e74c24dd118fbe1" Jan 13 20:25:10.344252 containerd[1499]: time="2025-01-13T20:25:10.343853921Z" level=info msg="RemoveContainer for \"7abd449e1ed5e5f8bd3659a837780e519287ae92aac359c19e74c24dd118fbe1\"" Jan 13 20:25:10.346776 systemd[1]: Removed slice kubepods-besteffort-podc5d96d57_bd3c_4987_a6e1_dc2d1179d4fa.slice - libcontainer container kubepods-besteffort-podc5d96d57_bd3c_4987_a6e1_dc2d1179d4fa.slice. 
Jan 13 20:25:10.352949 containerd[1499]: time="2025-01-13T20:25:10.352199052Z" level=info msg="RemoveContainer for \"7abd449e1ed5e5f8bd3659a837780e519287ae92aac359c19e74c24dd118fbe1\" returns successfully"
Jan 13 20:25:10.353776 kubelet[2812]: I0113 20:25:10.353366 2812 scope.go:117] "RemoveContainer" containerID="4f838d08f0d0887d0398edd41674d9ea34b3e777b22be5e89fbe0e12664b9a1c"
Jan 13 20:25:10.355280 containerd[1499]: time="2025-01-13T20:25:10.354889991Z" level=info msg="RemoveContainer for \"4f838d08f0d0887d0398edd41674d9ea34b3e777b22be5e89fbe0e12664b9a1c\""
Jan 13 20:25:10.360104 containerd[1499]: time="2025-01-13T20:25:10.359940956Z" level=info msg="RemoveContainer for \"4f838d08f0d0887d0398edd41674d9ea34b3e777b22be5e89fbe0e12664b9a1c\" returns successfully"
Jan 13 20:25:10.360731 kubelet[2812]: I0113 20:25:10.360607 2812 scope.go:117] "RemoveContainer" containerID="98ede64caff9cd400c4e20aeba20fe811c54a94d20689f515c259c51431e76e9"
Jan 13 20:25:10.364024 containerd[1499]: time="2025-01-13T20:25:10.363974625Z" level=info msg="RemoveContainer for \"98ede64caff9cd400c4e20aeba20fe811c54a94d20689f515c259c51431e76e9\""
Jan 13 20:25:10.370902 containerd[1499]: time="2025-01-13T20:25:10.370841829Z" level=info msg="RemoveContainer for \"98ede64caff9cd400c4e20aeba20fe811c54a94d20689f515c259c51431e76e9\" returns successfully"
Jan 13 20:25:10.371395 kubelet[2812]: I0113 20:25:10.371194 2812 scope.go:117] "RemoveContainer" containerID="ce903243f9db7fe9096c28282ac48d6f8731415fc3f5f6b0d3d96dcfdd711a2f"
Jan 13 20:25:10.374908 containerd[1499]: time="2025-01-13T20:25:10.374176713Z" level=info msg="RemoveContainer for \"ce903243f9db7fe9096c28282ac48d6f8731415fc3f5f6b0d3d96dcfdd711a2f\""
Jan 13 20:25:10.379910 containerd[1499]: time="2025-01-13T20:25:10.379866584Z" level=info msg="RemoveContainer for \"ce903243f9db7fe9096c28282ac48d6f8731415fc3f5f6b0d3d96dcfdd711a2f\" returns successfully"
Jan 13 20:25:10.380320 kubelet[2812]: I0113 20:25:10.380265 2812 scope.go:117] "RemoveContainer" containerID="adf52a9d81d1bb4508c229cefd3cf81c90435c73d66e2c3aa4e8c92ee230395b"
Jan 13 20:25:10.381611 containerd[1499]: time="2025-01-13T20:25:10.380655006Z" level=error msg="ContainerStatus for \"adf52a9d81d1bb4508c229cefd3cf81c90435c73d66e2c3aa4e8c92ee230395b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"adf52a9d81d1bb4508c229cefd3cf81c90435c73d66e2c3aa4e8c92ee230395b\": not found"
Jan 13 20:25:10.381611 containerd[1499]: time="2025-01-13T20:25:10.381147075Z" level=error msg="ContainerStatus for \"7abd449e1ed5e5f8bd3659a837780e519287ae92aac359c19e74c24dd118fbe1\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"7abd449e1ed5e5f8bd3659a837780e519287ae92aac359c19e74c24dd118fbe1\": not found"
Jan 13 20:25:10.381722 kubelet[2812]: E0113 20:25:10.380811 2812 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"adf52a9d81d1bb4508c229cefd3cf81c90435c73d66e2c3aa4e8c92ee230395b\": not found" containerID="adf52a9d81d1bb4508c229cefd3cf81c90435c73d66e2c3aa4e8c92ee230395b"
Jan 13 20:25:10.381722 kubelet[2812]: I0113 20:25:10.380845 2812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"adf52a9d81d1bb4508c229cefd3cf81c90435c73d66e2c3aa4e8c92ee230395b"} err="failed to get container status \"adf52a9d81d1bb4508c229cefd3cf81c90435c73d66e2c3aa4e8c92ee230395b\": rpc error: code = NotFound desc = an error occurred when try to find container \"adf52a9d81d1bb4508c229cefd3cf81c90435c73d66e2c3aa4e8c92ee230395b\": not found"
Jan 13 20:25:10.381722 kubelet[2812]: I0113 20:25:10.380919 2812 scope.go:117] "RemoveContainer" containerID="7abd449e1ed5e5f8bd3659a837780e519287ae92aac359c19e74c24dd118fbe1"
Jan 13 20:25:10.382039 kubelet[2812]: E0113 20:25:10.382005 2812 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"7abd449e1ed5e5f8bd3659a837780e519287ae92aac359c19e74c24dd118fbe1\": not found" containerID="7abd449e1ed5e5f8bd3659a837780e519287ae92aac359c19e74c24dd118fbe1"
Jan 13 20:25:10.382103 kubelet[2812]: I0113 20:25:10.382037 2812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"7abd449e1ed5e5f8bd3659a837780e519287ae92aac359c19e74c24dd118fbe1"} err="failed to get container status \"7abd449e1ed5e5f8bd3659a837780e519287ae92aac359c19e74c24dd118fbe1\": rpc error: code = NotFound desc = an error occurred when try to find container \"7abd449e1ed5e5f8bd3659a837780e519287ae92aac359c19e74c24dd118fbe1\": not found"
Jan 13 20:25:10.382103 kubelet[2812]: I0113 20:25:10.382057 2812 scope.go:117] "RemoveContainer" containerID="4f838d08f0d0887d0398edd41674d9ea34b3e777b22be5e89fbe0e12664b9a1c"
Jan 13 20:25:10.383322 containerd[1499]: time="2025-01-13T20:25:10.383173429Z" level=error msg="ContainerStatus for \"4f838d08f0d0887d0398edd41674d9ea34b3e777b22be5e89fbe0e12664b9a1c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"4f838d08f0d0887d0398edd41674d9ea34b3e777b22be5e89fbe0e12664b9a1c\": not found"
Jan 13 20:25:10.383949 kubelet[2812]: E0113 20:25:10.383465 2812 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"4f838d08f0d0887d0398edd41674d9ea34b3e777b22be5e89fbe0e12664b9a1c\": not found" containerID="4f838d08f0d0887d0398edd41674d9ea34b3e777b22be5e89fbe0e12664b9a1c"
Jan 13 20:25:10.383949 kubelet[2812]: I0113 20:25:10.383858 2812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"4f838d08f0d0887d0398edd41674d9ea34b3e777b22be5e89fbe0e12664b9a1c"} err="failed to get container status \"4f838d08f0d0887d0398edd41674d9ea34b3e777b22be5e89fbe0e12664b9a1c\": rpc error: code = NotFound desc = an error occurred when try to find container \"4f838d08f0d0887d0398edd41674d9ea34b3e777b22be5e89fbe0e12664b9a1c\": not found"
Jan 13 20:25:10.383949 kubelet[2812]: I0113 20:25:10.383878 2812 scope.go:117] "RemoveContainer" containerID="98ede64caff9cd400c4e20aeba20fe811c54a94d20689f515c259c51431e76e9"
Jan 13 20:25:10.384882 containerd[1499]: time="2025-01-13T20:25:10.384630556Z" level=error msg="ContainerStatus for \"98ede64caff9cd400c4e20aeba20fe811c54a94d20689f515c259c51431e76e9\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"98ede64caff9cd400c4e20aeba20fe811c54a94d20689f515c259c51431e76e9\": not found"
Jan 13 20:25:10.384954 kubelet[2812]: E0113 20:25:10.384766 2812 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"98ede64caff9cd400c4e20aeba20fe811c54a94d20689f515c259c51431e76e9\": not found" containerID="98ede64caff9cd400c4e20aeba20fe811c54a94d20689f515c259c51431e76e9"
Jan 13 20:25:10.384954 kubelet[2812]: I0113 20:25:10.384789 2812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"98ede64caff9cd400c4e20aeba20fe811c54a94d20689f515c259c51431e76e9"} err="failed to get container status \"98ede64caff9cd400c4e20aeba20fe811c54a94d20689f515c259c51431e76e9\": rpc error: code = NotFound desc = an error occurred when try to find container \"98ede64caff9cd400c4e20aeba20fe811c54a94d20689f515c259c51431e76e9\": not found"
Jan 13 20:25:10.384954 kubelet[2812]: I0113 20:25:10.384809 2812 scope.go:117] "RemoveContainer" containerID="ce903243f9db7fe9096c28282ac48d6f8731415fc3f5f6b0d3d96dcfdd711a2f"
Jan 13 20:25:10.385029 containerd[1499]: time="2025-01-13T20:25:10.384964748Z" level=error msg="ContainerStatus for \"ce903243f9db7fe9096c28282ac48d6f8731415fc3f5f6b0d3d96dcfdd711a2f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ce903243f9db7fe9096c28282ac48d6f8731415fc3f5f6b0d3d96dcfdd711a2f\": not found"
Jan 13 20:25:10.385218 kubelet[2812]: E0113 20:25:10.385177 2812 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ce903243f9db7fe9096c28282ac48d6f8731415fc3f5f6b0d3d96dcfdd711a2f\": not found" containerID="ce903243f9db7fe9096c28282ac48d6f8731415fc3f5f6b0d3d96dcfdd711a2f"
Jan 13 20:25:10.385358 kubelet[2812]: I0113 20:25:10.385197 2812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ce903243f9db7fe9096c28282ac48d6f8731415fc3f5f6b0d3d96dcfdd711a2f"} err="failed to get container status \"ce903243f9db7fe9096c28282ac48d6f8731415fc3f5f6b0d3d96dcfdd711a2f\": rpc error: code = NotFound desc = an error occurred when try to find container \"ce903243f9db7fe9096c28282ac48d6f8731415fc3f5f6b0d3d96dcfdd711a2f\": not found"
Jan 13 20:25:10.385358 kubelet[2812]: I0113 20:25:10.385308 2812 scope.go:117] "RemoveContainer" containerID="1270686c592396518867c1503a4e349a183c9f79ce7a2d7abe110e492e72071c"
Jan 13 20:25:10.388057 containerd[1499]: time="2025-01-13T20:25:10.387924681Z" level=info msg="RemoveContainer for \"1270686c592396518867c1503a4e349a183c9f79ce7a2d7abe110e492e72071c\""
Jan 13 20:25:10.391663 containerd[1499]: time="2025-01-13T20:25:10.391597558Z" level=info msg="RemoveContainer for \"1270686c592396518867c1503a4e349a183c9f79ce7a2d7abe110e492e72071c\" returns successfully"
Jan 13 20:25:10.392132 kubelet[2812]: I0113 20:25:10.392062 2812 scope.go:117] "RemoveContainer" containerID="1270686c592396518867c1503a4e349a183c9f79ce7a2d7abe110e492e72071c"
Jan 13 20:25:10.392536 containerd[1499]: time="2025-01-13T20:25:10.392461018Z" level=error msg="ContainerStatus for \"1270686c592396518867c1503a4e349a183c9f79ce7a2d7abe110e492e72071c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"1270686c592396518867c1503a4e349a183c9f79ce7a2d7abe110e492e72071c\": not found"
Jan 13 20:25:10.392815 kubelet[2812]: E0113 20:25:10.392624 2812 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"1270686c592396518867c1503a4e349a183c9f79ce7a2d7abe110e492e72071c\": not found" containerID="1270686c592396518867c1503a4e349a183c9f79ce7a2d7abe110e492e72071c"
Jan 13 20:25:10.392815 kubelet[2812]: I0113 20:25:10.392649 2812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"1270686c592396518867c1503a4e349a183c9f79ce7a2d7abe110e492e72071c"} err="failed to get container status \"1270686c592396518867c1503a4e349a183c9f79ce7a2d7abe110e492e72071c\": rpc error: code = NotFound desc = an error occurred when try to find container \"1270686c592396518867c1503a4e349a183c9f79ce7a2d7abe110e492e72071c\": not found"
Jan 13 20:25:10.795612 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6d8c7ddde98713539d3ad8f1da826cc1307b35c498c1de6cbd3d851ed960639d-rootfs.mount: Deactivated successfully.
Jan 13 20:25:10.795732 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-6d8c7ddde98713539d3ad8f1da826cc1307b35c498c1de6cbd3d851ed960639d-shm.mount: Deactivated successfully.
Jan 13 20:25:10.795806 systemd[1]: var-lib-kubelet-pods-6092ab0d\x2dbe62\x2d40fb\x2d9b18\x2dc219712a481a-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d7nmp6.mount: Deactivated successfully.
Jan 13 20:25:10.795869 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-339c871fa399ea9a51bfedf5b55cff45850fda5f209e9b7d71458ddae7c74c58-rootfs.mount: Deactivated successfully.
Jan 13 20:25:10.795916 systemd[1]: var-lib-kubelet-pods-c5d96d57\x2dbd3c\x2d4987\x2da6e1\x2ddc2d1179d4fa-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dw8kz7.mount: Deactivated successfully.
Jan 13 20:25:10.795969 systemd[1]: var-lib-kubelet-pods-6092ab0d\x2dbe62\x2d40fb\x2d9b18\x2dc219712a481a-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Jan 13 20:25:10.796021 systemd[1]: var-lib-kubelet-pods-6092ab0d\x2dbe62\x2d40fb\x2d9b18\x2dc219712a481a-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Jan 13 20:25:11.386001 kubelet[2812]: I0113 20:25:11.385954 2812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6092ab0d-be62-40fb-9b18-c219712a481a" path="/var/lib/kubelet/pods/6092ab0d-be62-40fb-9b18-c219712a481a/volumes"
Jan 13 20:25:11.386736 kubelet[2812]: I0113 20:25:11.386678 2812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c5d96d57-bd3c-4987-a6e1-dc2d1179d4fa" path="/var/lib/kubelet/pods/c5d96d57-bd3c-4987-a6e1-dc2d1179d4fa/volumes"
Jan 13 20:25:11.880365 sshd[4403]: Connection closed by 147.75.109.163 port 56426
Jan 13 20:25:11.881179 sshd-session[4401]: pam_unix(sshd:session): session closed for user core
Jan 13 20:25:11.886635 systemd[1]: sshd@19-138.199.153.206:22-147.75.109.163:56426.service: Deactivated successfully.
Jan 13 20:25:11.888866 systemd[1]: session-20.scope: Deactivated successfully.
Jan 13 20:25:11.889073 systemd[1]: session-20.scope: Consumed 1.777s CPU time.
Jan 13 20:25:11.889766 systemd-logind[1466]: Session 20 logged out. Waiting for processes to exit.
Jan 13 20:25:11.891866 systemd-logind[1466]: Removed session 20.
Jan 13 20:25:12.059676 systemd[1]: Started sshd@20-138.199.153.206:22-147.75.109.163:59284.service - OpenSSH per-connection server daemon (147.75.109.163:59284).
Jan 13 20:25:12.578843 kubelet[2812]: E0113 20:25:12.578751 2812 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jan 13 20:25:13.053330 sshd[4569]: Accepted publickey for core from 147.75.109.163 port 59284 ssh2: RSA SHA256:deYc8PRIzQGYRfQ4zGwGGzIklxWaqt3H8qZh9jD9rBo
Jan 13 20:25:13.055541 sshd-session[4569]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:25:13.062737 systemd-logind[1466]: New session 21 of user core.
Jan 13 20:25:13.069577 systemd[1]: Started session-21.scope - Session 21 of User core.
Jan 13 20:25:14.510686 kubelet[2812]: I0113 20:25:14.510400 2812 topology_manager.go:215] "Topology Admit Handler" podUID="2e3f904e-3e91-41a6-88b1-a2b50b0b1adf" podNamespace="kube-system" podName="cilium-gtfrf"
Jan 13 20:25:14.510686 kubelet[2812]: E0113 20:25:14.510473 2812 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="6092ab0d-be62-40fb-9b18-c219712a481a" containerName="mount-cgroup"
Jan 13 20:25:14.510686 kubelet[2812]: E0113 20:25:14.510484 2812 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="6092ab0d-be62-40fb-9b18-c219712a481a" containerName="apply-sysctl-overwrites"
Jan 13 20:25:14.510686 kubelet[2812]: E0113 20:25:14.510491 2812 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="6092ab0d-be62-40fb-9b18-c219712a481a" containerName="mount-bpf-fs"
Jan 13 20:25:14.510686 kubelet[2812]: E0113 20:25:14.510496 2812 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="6092ab0d-be62-40fb-9b18-c219712a481a" containerName="clean-cilium-state"
Jan 13 20:25:14.510686 kubelet[2812]: E0113 20:25:14.510501 2812 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="6092ab0d-be62-40fb-9b18-c219712a481a" containerName="cilium-agent"
Jan 13 20:25:14.510686 kubelet[2812]: E0113 20:25:14.510508 2812 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="c5d96d57-bd3c-4987-a6e1-dc2d1179d4fa" containerName="cilium-operator"
Jan 13 20:25:14.510686 kubelet[2812]: I0113 20:25:14.510526 2812 memory_manager.go:354] "RemoveStaleState removing state" podUID="c5d96d57-bd3c-4987-a6e1-dc2d1179d4fa" containerName="cilium-operator"
Jan 13 20:25:14.510686 kubelet[2812]: I0113 20:25:14.510532 2812 memory_manager.go:354] "RemoveStaleState removing state" podUID="6092ab0d-be62-40fb-9b18-c219712a481a" containerName="cilium-agent"
Jan 13 20:25:14.520665 systemd[1]: Created slice kubepods-burstable-pod2e3f904e_3e91_41a6_88b1_a2b50b0b1adf.slice - libcontainer container kubepods-burstable-pod2e3f904e_3e91_41a6_88b1_a2b50b0b1adf.slice.
Jan 13 20:25:14.621260 kubelet[2812]: I0113 20:25:14.619052 2812 setters.go:580] "Node became not ready" node="ci-4152-2-0-d-1c931fd560" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-01-13T20:25:14Z","lastTransitionTime":"2025-01-13T20:25:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Jan 13 20:25:14.647121 sshd[4571]: Connection closed by 147.75.109.163 port 59284
Jan 13 20:25:14.648879 sshd-session[4569]: pam_unix(sshd:session): session closed for user core
Jan 13 20:25:14.651939 kubelet[2812]: I0113 20:25:14.651488 2812 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/2e3f904e-3e91-41a6-88b1-a2b50b0b1adf-cilium-cgroup\") pod \"cilium-gtfrf\" (UID: \"2e3f904e-3e91-41a6-88b1-a2b50b0b1adf\") " pod="kube-system/cilium-gtfrf"
Jan 13 20:25:14.651939 kubelet[2812]: I0113 20:25:14.651540 2812 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/2e3f904e-3e91-41a6-88b1-a2b50b0b1adf-clustermesh-secrets\") pod \"cilium-gtfrf\" (UID: \"2e3f904e-3e91-41a6-88b1-a2b50b0b1adf\") " pod="kube-system/cilium-gtfrf"
Jan 13 20:25:14.651939 kubelet[2812]: I0113 20:25:14.651560 2812 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/2e3f904e-3e91-41a6-88b1-a2b50b0b1adf-hostproc\") pod \"cilium-gtfrf\" (UID: \"2e3f904e-3e91-41a6-88b1-a2b50b0b1adf\") " pod="kube-system/cilium-gtfrf"
Jan 13 20:25:14.651939 kubelet[2812]: I0113 20:25:14.651580 2812 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2e3f904e-3e91-41a6-88b1-a2b50b0b1adf-xtables-lock\") pod \"cilium-gtfrf\" (UID: \"2e3f904e-3e91-41a6-88b1-a2b50b0b1adf\") " pod="kube-system/cilium-gtfrf"
Jan 13 20:25:14.651939 kubelet[2812]: I0113 20:25:14.651599 2812 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2e3f904e-3e91-41a6-88b1-a2b50b0b1adf-etc-cni-netd\") pod \"cilium-gtfrf\" (UID: \"2e3f904e-3e91-41a6-88b1-a2b50b0b1adf\") " pod="kube-system/cilium-gtfrf"
Jan 13 20:25:14.651939 kubelet[2812]: I0113 20:25:14.651617 2812 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2e3f904e-3e91-41a6-88b1-a2b50b0b1adf-lib-modules\") pod \"cilium-gtfrf\" (UID: \"2e3f904e-3e91-41a6-88b1-a2b50b0b1adf\") " pod="kube-system/cilium-gtfrf"
Jan 13 20:25:14.652254 kubelet[2812]: I0113 20:25:14.651637 2812 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/2e3f904e-3e91-41a6-88b1-a2b50b0b1adf-cni-path\") pod \"cilium-gtfrf\" (UID: \"2e3f904e-3e91-41a6-88b1-a2b50b0b1adf\") " pod="kube-system/cilium-gtfrf"
Jan 13 20:25:14.652254 kubelet[2812]: I0113 20:25:14.651656 2812 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2e3f904e-3e91-41a6-88b1-a2b50b0b1adf-cilium-config-path\") pod \"cilium-gtfrf\" (UID: \"2e3f904e-3e91-41a6-88b1-a2b50b0b1adf\") " pod="kube-system/cilium-gtfrf"
Jan 13 20:25:14.652254 kubelet[2812]: I0113 20:25:14.651675 2812 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/2e3f904e-3e91-41a6-88b1-a2b50b0b1adf-host-proc-sys-kernel\") pod \"cilium-gtfrf\" (UID: \"2e3f904e-3e91-41a6-88b1-a2b50b0b1adf\") " pod="kube-system/cilium-gtfrf"
Jan 13 20:25:14.652254 kubelet[2812]: I0113 20:25:14.651692 2812 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/2e3f904e-3e91-41a6-88b1-a2b50b0b1adf-bpf-maps\") pod \"cilium-gtfrf\" (UID: \"2e3f904e-3e91-41a6-88b1-a2b50b0b1adf\") " pod="kube-system/cilium-gtfrf"
Jan 13 20:25:14.652254 kubelet[2812]: I0113 20:25:14.651710 2812 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/2e3f904e-3e91-41a6-88b1-a2b50b0b1adf-cilium-ipsec-secrets\") pod \"cilium-gtfrf\" (UID: \"2e3f904e-3e91-41a6-88b1-a2b50b0b1adf\") " pod="kube-system/cilium-gtfrf"
Jan 13 20:25:14.652254 kubelet[2812]: I0113 20:25:14.651728 2812 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/2e3f904e-3e91-41a6-88b1-a2b50b0b1adf-host-proc-sys-net\") pod \"cilium-gtfrf\" (UID: \"2e3f904e-3e91-41a6-88b1-a2b50b0b1adf\") " pod="kube-system/cilium-gtfrf"
Jan 13 20:25:14.652475 kubelet[2812]: I0113 20:25:14.651748 2812 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/2e3f904e-3e91-41a6-88b1-a2b50b0b1adf-hubble-tls\") pod \"cilium-gtfrf\" (UID: \"2e3f904e-3e91-41a6-88b1-a2b50b0b1adf\") " pod="kube-system/cilium-gtfrf"
Jan 13 20:25:14.652475 kubelet[2812]: I0113 20:25:14.651769 2812 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hhp8m\" (UniqueName: \"kubernetes.io/projected/2e3f904e-3e91-41a6-88b1-a2b50b0b1adf-kube-api-access-hhp8m\") pod \"cilium-gtfrf\" (UID: \"2e3f904e-3e91-41a6-88b1-a2b50b0b1adf\") " pod="kube-system/cilium-gtfrf"
Jan 13 20:25:14.652475 kubelet[2812]: I0113 20:25:14.651787 2812 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/2e3f904e-3e91-41a6-88b1-a2b50b0b1adf-cilium-run\") pod \"cilium-gtfrf\" (UID: \"2e3f904e-3e91-41a6-88b1-a2b50b0b1adf\") " pod="kube-system/cilium-gtfrf"
Jan 13 20:25:14.653626 systemd[1]: sshd@20-138.199.153.206:22-147.75.109.163:59284.service: Deactivated successfully.
Jan 13 20:25:14.655922 systemd[1]: session-21.scope: Deactivated successfully.
Jan 13 20:25:14.657166 systemd-logind[1466]: Session 21 logged out. Waiting for processes to exit.
Jan 13 20:25:14.658968 systemd-logind[1466]: Removed session 21.
Jan 13 20:25:14.821652 systemd[1]: Started sshd@21-138.199.153.206:22-147.75.109.163:59294.service - OpenSSH per-connection server daemon (147.75.109.163:59294).
Jan 13 20:25:14.833919 containerd[1499]: time="2025-01-13T20:25:14.833092728Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-gtfrf,Uid:2e3f904e-3e91-41a6-88b1-a2b50b0b1adf,Namespace:kube-system,Attempt:0,}"
Jan 13 20:25:14.863200 containerd[1499]: time="2025-01-13T20:25:14.863050724Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 13 20:25:14.863200 containerd[1499]: time="2025-01-13T20:25:14.863153442Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 13 20:25:14.863200 containerd[1499]: time="2025-01-13T20:25:14.863170721Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 20:25:14.863573 containerd[1499]: time="2025-01-13T20:25:14.863436275Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 20:25:14.881454 systemd[1]: Started cri-containerd-7332b1a18232a23025f2fd35573e6bf73f204c3f887c93c398ed2c8cca84e686.scope - libcontainer container 7332b1a18232a23025f2fd35573e6bf73f204c3f887c93c398ed2c8cca84e686.
Jan 13 20:25:14.906741 containerd[1499]: time="2025-01-13T20:25:14.906618010Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-gtfrf,Uid:2e3f904e-3e91-41a6-88b1-a2b50b0b1adf,Namespace:kube-system,Attempt:0,} returns sandbox id \"7332b1a18232a23025f2fd35573e6bf73f204c3f887c93c398ed2c8cca84e686\""
Jan 13 20:25:14.911954 containerd[1499]: time="2025-01-13T20:25:14.911708854Z" level=info msg="CreateContainer within sandbox \"7332b1a18232a23025f2fd35573e6bf73f204c3f887c93c398ed2c8cca84e686\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Jan 13 20:25:14.923037 containerd[1499]: time="2025-01-13T20:25:14.922987957Z" level=info msg="CreateContainer within sandbox \"7332b1a18232a23025f2fd35573e6bf73f204c3f887c93c398ed2c8cca84e686\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"e90cd3e39bdacac2213d79b80488a446900af55f23bb1493c8f993cb1dadca4f\""
Jan 13 20:25:14.925064 containerd[1499]: time="2025-01-13T20:25:14.924007013Z" level=info msg="StartContainer for \"e90cd3e39bdacac2213d79b80488a446900af55f23bb1493c8f993cb1dadca4f\""
Jan 13 20:25:14.948465 systemd[1]: Started cri-containerd-e90cd3e39bdacac2213d79b80488a446900af55f23bb1493c8f993cb1dadca4f.scope - libcontainer container e90cd3e39bdacac2213d79b80488a446900af55f23bb1493c8f993cb1dadca4f.
Jan 13 20:25:14.974367 containerd[1499]: time="2025-01-13T20:25:14.974266506Z" level=info msg="StartContainer for \"e90cd3e39bdacac2213d79b80488a446900af55f23bb1493c8f993cb1dadca4f\" returns successfully"
Jan 13 20:25:14.988007 systemd[1]: cri-containerd-e90cd3e39bdacac2213d79b80488a446900af55f23bb1493c8f993cb1dadca4f.scope: Deactivated successfully.
Jan 13 20:25:15.027121 containerd[1499]: time="2025-01-13T20:25:15.026848386Z" level=info msg="shim disconnected" id=e90cd3e39bdacac2213d79b80488a446900af55f23bb1493c8f993cb1dadca4f namespace=k8s.io
Jan 13 20:25:15.027121 containerd[1499]: time="2025-01-13T20:25:15.026905584Z" level=warning msg="cleaning up after shim disconnected" id=e90cd3e39bdacac2213d79b80488a446900af55f23bb1493c8f993cb1dadca4f namespace=k8s.io
Jan 13 20:25:15.027121 containerd[1499]: time="2025-01-13T20:25:15.026913344Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 13 20:25:15.356516 containerd[1499]: time="2025-01-13T20:25:15.356303338Z" level=info msg="CreateContainer within sandbox \"7332b1a18232a23025f2fd35573e6bf73f204c3f887c93c398ed2c8cca84e686\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Jan 13 20:25:15.388680 containerd[1499]: time="2025-01-13T20:25:15.388492362Z" level=info msg="CreateContainer within sandbox \"7332b1a18232a23025f2fd35573e6bf73f204c3f887c93c398ed2c8cca84e686\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"7d8bac33c75cf587517765955239a9918e04740fd6873533691618befdf3221c\""
Jan 13 20:25:15.390242 containerd[1499]: time="2025-01-13T20:25:15.389444820Z" level=info msg="StartContainer for \"7d8bac33c75cf587517765955239a9918e04740fd6873533691618befdf3221c\""
Jan 13 20:25:15.415485 systemd[1]: Started cri-containerd-7d8bac33c75cf587517765955239a9918e04740fd6873533691618befdf3221c.scope - libcontainer container 7d8bac33c75cf587517765955239a9918e04740fd6873533691618befdf3221c.
Jan 13 20:25:15.446876 containerd[1499]: time="2025-01-13T20:25:15.446680952Z" level=info msg="StartContainer for \"7d8bac33c75cf587517765955239a9918e04740fd6873533691618befdf3221c\" returns successfully"
Jan 13 20:25:15.455867 systemd[1]: cri-containerd-7d8bac33c75cf587517765955239a9918e04740fd6873533691618befdf3221c.scope: Deactivated successfully.
Jan 13 20:25:15.489663 containerd[1499]: time="2025-01-13T20:25:15.489384657Z" level=info msg="shim disconnected" id=7d8bac33c75cf587517765955239a9918e04740fd6873533691618befdf3221c namespace=k8s.io
Jan 13 20:25:15.489663 containerd[1499]: time="2025-01-13T20:25:15.489490774Z" level=warning msg="cleaning up after shim disconnected" id=7d8bac33c75cf587517765955239a9918e04740fd6873533691618befdf3221c namespace=k8s.io
Jan 13 20:25:15.489663 containerd[1499]: time="2025-01-13T20:25:15.489507574Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 13 20:25:15.805196 sshd[4586]: Accepted publickey for core from 147.75.109.163 port 59294 ssh2: RSA SHA256:deYc8PRIzQGYRfQ4zGwGGzIklxWaqt3H8qZh9jD9rBo
Jan 13 20:25:15.806951 sshd-session[4586]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:25:15.813778 systemd-logind[1466]: New session 22 of user core.
Jan 13 20:25:15.819452 systemd[1]: Started session-22.scope - Session 22 of User core.
Jan 13 20:25:16.363031 containerd[1499]: time="2025-01-13T20:25:16.362807647Z" level=info msg="CreateContainer within sandbox \"7332b1a18232a23025f2fd35573e6bf73f204c3f887c93c398ed2c8cca84e686\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Jan 13 20:25:16.383200 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1942319186.mount: Deactivated successfully.
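As an illustrative aside (not part of the log): the sequence above is the CRI call order the kubelet drives for each init container: RunPodSandbox once per pod, then CreateContainer and StartContainer per container, waiting for each init container to exit before creating the next. A compressed sketch of that order against the CRI API; it assumes a connected runtime client like the one dialed in the earlier sketch, and the image reference is an assumption since the log does not record it.

package crisketch

import (
    "context"

    runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

// startFirstInitContainer mirrors the RunPodSandbox -> CreateContainer ->
// StartContainer order visible in the log for the mount-cgroup container.
func startFirstInitContainer(ctx context.Context, rt runtimeapi.RuntimeServiceClient) error {
    sandboxCfg := &runtimeapi.PodSandboxConfig{
        Metadata: &runtimeapi.PodSandboxMetadata{
            Name:      "cilium-gtfrf", // values from the RunPodSandbox entry above
            Uid:       "2e3f904e-3e91-41a6-88b1-a2b50b0b1adf",
            Namespace: "kube-system",
            Attempt:   0,
        },
    }
    sb, err := rt.RunPodSandbox(ctx, &runtimeapi.RunPodSandboxRequest{Config: sandboxCfg})
    if err != nil {
        return err
    }
    created, err := rt.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
        PodSandboxId: sb.PodSandboxId,
        Config: &runtimeapi.ContainerConfig{
            Metadata: &runtimeapi.ContainerMetadata{Name: "mount-cgroup", Attempt: 0},
            // Assumed image reference; the log does not say which image was used.
            Image: &runtimeapi.ImageSpec{Image: "quay.io/cilium/cilium:v1.14"},
        },
        SandboxConfig: sandboxCfg,
    })
    if err != nil {
        return err
    }
    _, err = rt.StartContainer(ctx, &runtimeapi.StartContainerRequest{ContainerId: created.ContainerId})
    return err
}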
Jan 13 20:25:16.386427 containerd[1499]: time="2025-01-13T20:25:16.386381068Z" level=info msg="CreateContainer within sandbox \"7332b1a18232a23025f2fd35573e6bf73f204c3f887c93c398ed2c8cca84e686\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"6c270f70ff00a3e4f6e9c9b6bbff19e846819632235fc628a97d617725ab88c6\""
Jan 13 20:25:16.388008 containerd[1499]: time="2025-01-13T20:25:16.387960952Z" level=info msg="StartContainer for \"6c270f70ff00a3e4f6e9c9b6bbff19e846819632235fc628a97d617725ab88c6\""
Jan 13 20:25:16.423523 systemd[1]: Started cri-containerd-6c270f70ff00a3e4f6e9c9b6bbff19e846819632235fc628a97d617725ab88c6.scope - libcontainer container 6c270f70ff00a3e4f6e9c9b6bbff19e846819632235fc628a97d617725ab88c6.
Jan 13 20:25:16.456886 containerd[1499]: time="2025-01-13T20:25:16.456834936Z" level=info msg="StartContainer for \"6c270f70ff00a3e4f6e9c9b6bbff19e846819632235fc628a97d617725ab88c6\" returns successfully"
Jan 13 20:25:16.457577 systemd[1]: cri-containerd-6c270f70ff00a3e4f6e9c9b6bbff19e846819632235fc628a97d617725ab88c6.scope: Deactivated successfully.
Jan 13 20:25:16.480483 sshd[4757]: Connection closed by 147.75.109.163 port 59294
Jan 13 20:25:16.481340 sshd-session[4586]: pam_unix(sshd:session): session closed for user core
Jan 13 20:25:16.487984 systemd[1]: sshd@21-138.199.153.206:22-147.75.109.163:59294.service: Deactivated successfully.
Jan 13 20:25:16.491420 containerd[1499]: time="2025-01-13T20:25:16.491177190Z" level=info msg="shim disconnected" id=6c270f70ff00a3e4f6e9c9b6bbff19e846819632235fc628a97d617725ab88c6 namespace=k8s.io
Jan 13 20:25:16.491692 containerd[1499]: time="2025-01-13T20:25:16.491587940Z" level=warning msg="cleaning up after shim disconnected" id=6c270f70ff00a3e4f6e9c9b6bbff19e846819632235fc628a97d617725ab88c6 namespace=k8s.io
Jan 13 20:25:16.491692 containerd[1499]: time="2025-01-13T20:25:16.491607700Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 13 20:25:16.491782 systemd[1]: session-22.scope: Deactivated successfully.
Jan 13 20:25:16.492978 systemd-logind[1466]: Session 22 logged out. Waiting for processes to exit.
Jan 13 20:25:16.495257 systemd-logind[1466]: Removed session 22.
Jan 13 20:25:16.655675 systemd[1]: Started sshd@22-138.199.153.206:22-147.75.109.163:59310.service - OpenSSH per-connection server daemon (147.75.109.163:59310).
Jan 13 20:25:16.761278 systemd[1]: run-containerd-runc-k8s.io-6c270f70ff00a3e4f6e9c9b6bbff19e846819632235fc628a97d617725ab88c6-runc.cl8MQX.mount: Deactivated successfully.
Jan 13 20:25:16.761623 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6c270f70ff00a3e4f6e9c9b6bbff19e846819632235fc628a97d617725ab88c6-rootfs.mount: Deactivated successfully.
Jan 13 20:25:17.368345 containerd[1499]: time="2025-01-13T20:25:17.368292908Z" level=info msg="CreateContainer within sandbox \"7332b1a18232a23025f2fd35573e6bf73f204c3f887c93c398ed2c8cca84e686\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jan 13 20:25:17.397405 containerd[1499]: time="2025-01-13T20:25:17.397089689Z" level=info msg="CreateContainer within sandbox \"7332b1a18232a23025f2fd35573e6bf73f204c3f887c93c398ed2c8cca84e686\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"51ccf906baeed462fcdb9fe104a08a8615864107e033cf4a7ee5da47732c740d\""
Jan 13 20:25:17.401463 containerd[1499]: time="2025-01-13T20:25:17.401427069Z" level=info msg="StartContainer for \"51ccf906baeed462fcdb9fe104a08a8615864107e033cf4a7ee5da47732c740d\""
Jan 13 20:25:17.435431 systemd[1]: Started cri-containerd-51ccf906baeed462fcdb9fe104a08a8615864107e033cf4a7ee5da47732c740d.scope - libcontainer container 51ccf906baeed462fcdb9fe104a08a8615864107e033cf4a7ee5da47732c740d.
Jan 13 20:25:17.460639 systemd[1]: cri-containerd-51ccf906baeed462fcdb9fe104a08a8615864107e033cf4a7ee5da47732c740d.scope: Deactivated successfully.
Jan 13 20:25:17.462917 containerd[1499]: time="2025-01-13T20:25:17.461251298Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2e3f904e_3e91_41a6_88b1_a2b50b0b1adf.slice/cri-containerd-51ccf906baeed462fcdb9fe104a08a8615864107e033cf4a7ee5da47732c740d.scope/memory.events\": no such file or directory"
Jan 13 20:25:17.465517 containerd[1499]: time="2025-01-13T20:25:17.465475802Z" level=info msg="StartContainer for \"51ccf906baeed462fcdb9fe104a08a8615864107e033cf4a7ee5da47732c740d\" returns successfully"
Jan 13 20:25:17.495609 containerd[1499]: time="2025-01-13T20:25:17.495352677Z" level=info msg="shim disconnected" id=51ccf906baeed462fcdb9fe104a08a8615864107e033cf4a7ee5da47732c740d namespace=k8s.io
Jan 13 20:25:17.495609 containerd[1499]: time="2025-01-13T20:25:17.495427595Z" level=warning msg="cleaning up after shim disconnected" id=51ccf906baeed462fcdb9fe104a08a8615864107e033cf4a7ee5da47732c740d namespace=k8s.io
Jan 13 20:25:17.495609 containerd[1499]: time="2025-01-13T20:25:17.495438675Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 13 20:25:17.580849 kubelet[2812]: E0113 20:25:17.580722 2812 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jan 13 20:25:17.644870 sshd[4820]: Accepted publickey for core from 147.75.109.163 port 59310 ssh2: RSA SHA256:deYc8PRIzQGYRfQ4zGwGGzIklxWaqt3H8qZh9jD9rBo
Jan 13 20:25:17.647253 sshd-session[4820]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:25:17.653477 systemd-logind[1466]: New session 23 of user core.
Jan 13 20:25:17.665482 systemd[1]: Started session-23.scope - Session 23 of User core.
Jan 13 20:25:17.760300 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-51ccf906baeed462fcdb9fe104a08a8615864107e033cf4a7ee5da47732c740d-rootfs.mount: Deactivated successfully.
Jan 13 20:25:18.384338 containerd[1499]: time="2025-01-13T20:25:18.383733270Z" level=info msg="CreateContainer within sandbox \"7332b1a18232a23025f2fd35573e6bf73f204c3f887c93c398ed2c8cca84e686\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jan 13 20:25:18.405857 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount981943108.mount: Deactivated successfully.
Jan 13 20:25:18.408762 containerd[1499]: time="2025-01-13T20:25:18.408625019Z" level=info msg="CreateContainer within sandbox \"7332b1a18232a23025f2fd35573e6bf73f204c3f887c93c398ed2c8cca84e686\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"0dc8ececf319f7a2e58d81476fa598272eff89e91e6ce28648baab9b78918f6a\""
Jan 13 20:25:18.410006 containerd[1499]: time="2025-01-13T20:25:18.409678035Z" level=info msg="StartContainer for \"0dc8ececf319f7a2e58d81476fa598272eff89e91e6ce28648baab9b78918f6a\""
Jan 13 20:25:18.451573 systemd[1]: Started cri-containerd-0dc8ececf319f7a2e58d81476fa598272eff89e91e6ce28648baab9b78918f6a.scope - libcontainer container 0dc8ececf319f7a2e58d81476fa598272eff89e91e6ce28648baab9b78918f6a.
Jan 13 20:25:18.489212 containerd[1499]: time="2025-01-13T20:25:18.489125172Z" level=info msg="StartContainer for \"0dc8ececf319f7a2e58d81476fa598272eff89e91e6ce28648baab9b78918f6a\" returns successfully"
Jan 13 20:25:18.845344 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce))
Jan 13 20:25:19.403564 kubelet[2812]: I0113 20:25:19.403434 2812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-gtfrf" podStartSLOduration=5.403405063 podStartE2EDuration="5.403405063s" podCreationTimestamp="2025-01-13 20:25:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 20:25:19.401351911 +0000 UTC m=+352.118782445" watchObservedRunningTime="2025-01-13 20:25:19.403405063 +0000 UTC m=+352.120835437"
Jan 13 20:25:21.383285 kubelet[2812]: E0113 20:25:21.382032 2812 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7db6d8ff4d-km7vh" podUID="6408dc2d-b38f-48b6-addd-58ca9a9d38f1"
Jan 13 20:25:21.832625 systemd-networkd[1386]: lxc_health: Link UP
Jan 13 20:25:21.838665 systemd-networkd[1386]: lxc_health: Gained carrier
Jan 13 20:25:23.692447 systemd-networkd[1386]: lxc_health: Gained IPv6LL
Jan 13 20:25:24.704267 systemd[1]: run-containerd-runc-k8s.io-0dc8ececf319f7a2e58d81476fa598272eff89e91e6ce28648baab9b78918f6a-runc.jgLP2Y.mount: Deactivated successfully.
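As an illustrative aside (not part of the log): the "Observed pod startup duration" entry above reports roughly the gap between the pod's creation timestamp and the time the kubelet saw it running; both image-pull timestamps are zero here, so pulling contributed nothing. A rough client-side approximation using client-go types; this is not the kubelet's exact SLO computation.

package sketch

import (
    "time"

    corev1 "k8s.io/api/core/v1"
)

// approxStartupDuration returns the gap between pod creation and the moment
// the PodReady condition became true - close to the 5.4s the tracker logged
// for cilium-gtfrf, since no image pulling was involved.
func approxStartupDuration(pod *corev1.Pod) time.Duration {
    for _, c := range pod.Status.Conditions {
        if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
            return c.LastTransitionTime.Sub(pod.CreationTimestamp.Time)
        }
    }
    return 0
}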
Jan 13 20:25:27.434895 containerd[1499]: time="2025-01-13T20:25:27.434812456Z" level=info msg="StopPodSandbox for \"6d8c7ddde98713539d3ad8f1da826cc1307b35c498c1de6cbd3d851ed960639d\""
Jan 13 20:25:27.435516 containerd[1499]: time="2025-01-13T20:25:27.434955013Z" level=info msg="TearDown network for sandbox \"6d8c7ddde98713539d3ad8f1da826cc1307b35c498c1de6cbd3d851ed960639d\" successfully"
Jan 13 20:25:27.435516 containerd[1499]: time="2025-01-13T20:25:27.434970253Z" level=info msg="StopPodSandbox for \"6d8c7ddde98713539d3ad8f1da826cc1307b35c498c1de6cbd3d851ed960639d\" returns successfully"
Jan 13 20:25:27.435939 containerd[1499]: time="2025-01-13T20:25:27.435843352Z" level=info msg="RemovePodSandbox for \"6d8c7ddde98713539d3ad8f1da826cc1307b35c498c1de6cbd3d851ed960639d\""
Jan 13 20:25:27.436068 containerd[1499]: time="2025-01-13T20:25:27.435925190Z" level=info msg="Forcibly stopping sandbox \"6d8c7ddde98713539d3ad8f1da826cc1307b35c498c1de6cbd3d851ed960639d\""
Jan 13 20:25:27.436309 containerd[1499]: time="2025-01-13T20:25:27.436254703Z" level=info msg="TearDown network for sandbox \"6d8c7ddde98713539d3ad8f1da826cc1307b35c498c1de6cbd3d851ed960639d\" successfully"
Jan 13 20:25:27.441074 containerd[1499]: time="2025-01-13T20:25:27.441007593Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6d8c7ddde98713539d3ad8f1da826cc1307b35c498c1de6cbd3d851ed960639d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 13 20:25:27.441285 containerd[1499]: time="2025-01-13T20:25:27.441093471Z" level=info msg="RemovePodSandbox \"6d8c7ddde98713539d3ad8f1da826cc1307b35c498c1de6cbd3d851ed960639d\" returns successfully"
Jan 13 20:25:27.441961 containerd[1499]: time="2025-01-13T20:25:27.441792134Z" level=info msg="StopPodSandbox for \"339c871fa399ea9a51bfedf5b55cff45850fda5f209e9b7d71458ddae7c74c58\""
Jan 13 20:25:27.441961 containerd[1499]: time="2025-01-13T20:25:27.441882772Z" level=info msg="TearDown network for sandbox \"339c871fa399ea9a51bfedf5b55cff45850fda5f209e9b7d71458ddae7c74c58\" successfully"
Jan 13 20:25:27.441961 containerd[1499]: time="2025-01-13T20:25:27.441894492Z" level=info msg="StopPodSandbox for \"339c871fa399ea9a51bfedf5b55cff45850fda5f209e9b7d71458ddae7c74c58\" returns successfully"
Jan 13 20:25:27.442401 containerd[1499]: time="2025-01-13T20:25:27.442358201Z" level=info msg="RemovePodSandbox for \"339c871fa399ea9a51bfedf5b55cff45850fda5f209e9b7d71458ddae7c74c58\""
Jan 13 20:25:27.442401 containerd[1499]: time="2025-01-13T20:25:27.442388560Z" level=info msg="Forcibly stopping sandbox \"339c871fa399ea9a51bfedf5b55cff45850fda5f209e9b7d71458ddae7c74c58\""
Jan 13 20:25:27.442508 containerd[1499]: time="2025-01-13T20:25:27.442449759Z" level=info msg="TearDown network for sandbox \"339c871fa399ea9a51bfedf5b55cff45850fda5f209e9b7d71458ddae7c74c58\" successfully"
Jan 13 20:25:27.446366 containerd[1499]: time="2025-01-13T20:25:27.446297430Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"339c871fa399ea9a51bfedf5b55cff45850fda5f209e9b7d71458ddae7c74c58\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 13 20:25:27.446366 containerd[1499]: time="2025-01-13T20:25:27.446367388Z" level=info msg="RemovePodSandbox \"339c871fa399ea9a51bfedf5b55cff45850fda5f209e9b7d71458ddae7c74c58\" returns successfully"
Jan 13 20:25:29.044069 systemd[1]: run-containerd-runc-k8s.io-0dc8ececf319f7a2e58d81476fa598272eff89e91e6ce28648baab9b78918f6a-runc.eHwjEZ.mount: Deactivated successfully.
Jan 13 20:25:29.286077 sshd[4875]: Connection closed by 147.75.109.163 port 59310
Jan 13 20:25:29.287499 sshd-session[4820]: pam_unix(sshd:session): session closed for user core
Jan 13 20:25:29.291746 systemd-logind[1466]: Session 23 logged out. Waiting for processes to exit.
Jan 13 20:25:29.292499 systemd[1]: sshd@22-138.199.153.206:22-147.75.109.163:59310.service: Deactivated successfully.
Jan 13 20:25:29.295162 systemd[1]: session-23.scope: Deactivated successfully.
Jan 13 20:25:29.297990 systemd-logind[1466]: Removed session 23.
Jan 13 20:25:45.000807 systemd[1]: cri-containerd-7cb2ee9c664e66346b5d64a3af38b40af21af57f6a93ace34d3060e99eca9d7c.scope: Deactivated successfully.
Jan 13 20:25:45.001173 systemd[1]: cri-containerd-7cb2ee9c664e66346b5d64a3af38b40af21af57f6a93ace34d3060e99eca9d7c.scope: Consumed 6.815s CPU time, 24.3M memory peak, 0B memory swap peak.
Jan 13 20:25:45.025405 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7cb2ee9c664e66346b5d64a3af38b40af21af57f6a93ace34d3060e99eca9d7c-rootfs.mount: Deactivated successfully.
Jan 13 20:25:45.034694 containerd[1499]: time="2025-01-13T20:25:45.034595130Z" level=info msg="shim disconnected" id=7cb2ee9c664e66346b5d64a3af38b40af21af57f6a93ace34d3060e99eca9d7c namespace=k8s.io
Jan 13 20:25:45.034694 containerd[1499]: time="2025-01-13T20:25:45.034691419Z" level=warning msg="cleaning up after shim disconnected" id=7cb2ee9c664e66346b5d64a3af38b40af21af57f6a93ace34d3060e99eca9d7c namespace=k8s.io
Jan 13 20:25:45.035940 containerd[1499]: time="2025-01-13T20:25:45.034708781Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 13 20:25:45.048466 containerd[1499]: time="2025-01-13T20:25:45.048411101Z" level=warning msg="cleanup warnings time=\"2025-01-13T20:25:45Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Jan 13 20:25:45.447118 kubelet[2812]: I0113 20:25:45.447070 2812 scope.go:117] "RemoveContainer" containerID="7cb2ee9c664e66346b5d64a3af38b40af21af57f6a93ace34d3060e99eca9d7c"
Jan 13 20:25:45.450321 containerd[1499]: time="2025-01-13T20:25:45.450266475Z" level=info msg="CreateContainer within sandbox \"00e9b91a20f29abd7ccd012870814e7402962728838783539497a4db4077d21f\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Jan 13 20:25:45.456694 kubelet[2812]: E0113 20:25:45.456644 2812 controller.go:195] "Failed to update lease" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:41092->10.0.0.2:2379: read: connection timed out"
Jan 13 20:25:45.472374 containerd[1499]: time="2025-01-13T20:25:45.472001626Z" level=info msg="CreateContainer within sandbox \"00e9b91a20f29abd7ccd012870814e7402962728838783539497a4db4077d21f\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"1953cb488547fac79f10fa21e010b5f01b70704541be1003dcd4f6b98489d54a\""
Jan 13 20:25:45.473277 containerd[1499]: time="2025-01-13T20:25:45.472725533Z" level=info msg="StartContainer for \"1953cb488547fac79f10fa21e010b5f01b70704541be1003dcd4f6b98489d54a\""
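As an illustrative aside (not part of the log): the kube-controller-manager container that just exited is recreated with Attempt:1, which surfaces on the API side as a bumped restart count on the static pod. A quick check with client-go; the kubeconfig path is an assumption.

package main

import (
    "context"
    "fmt"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    // Assumed kubeconfig location; adjust for the environment.
    cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/admin.conf")
    if err != nil {
        panic(err)
    }
    cs, err := kubernetes.NewForConfig(cfg)
    if err != nil {
        panic(err)
    }
    pods, err := cs.CoreV1().Pods("kube-system").List(context.Background(), metav1.ListOptions{})
    if err != nil {
        panic(err)
    }
    for _, p := range pods.Items {
        for _, s := range p.Status.ContainerStatuses {
            // RestartCount increments each time the kubelet recreates the
            // container, matching the Attempt counter in the CRI metadata.
            fmt.Printf("%s/%s restarts=%d\n", p.Name, s.Name, s.RestartCount)
        }
    }
}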
Jan 13 20:25:45.513623 systemd[1]: Started cri-containerd-1953cb488547fac79f10fa21e010b5f01b70704541be1003dcd4f6b98489d54a.scope - libcontainer container 1953cb488547fac79f10fa21e010b5f01b70704541be1003dcd4f6b98489d54a.
Jan 13 20:25:45.555247 containerd[1499]: time="2025-01-13T20:25:45.554406882Z" level=info msg="StartContainer for \"1953cb488547fac79f10fa21e010b5f01b70704541be1003dcd4f6b98489d54a\" returns successfully"
Jan 13 20:25:49.550185 kubelet[2812]: E0113 20:25:49.549637 2812 event.go:359] "Server rejected event (will not retry!)" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:40894->10.0.0.2:2379: read: connection timed out" event="&Event{ObjectMeta:{kube-apiserver-ci-4152-2-0-d-1c931fd560.181a5a57b7c2c1a5 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-ci-4152-2-0-d-1c931fd560,UID:d02b0561f42e90f889301965039b715b,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Readiness probe failed: HTTP probe failed with statuscode: 500,Source:EventSource{Component:kubelet,Host:ci-4152-2-0-d-1c931fd560,},FirstTimestamp:2025-01-13 20:25:39.096396197 +0000 UTC m=+371.813826571,LastTimestamp:2025-01-13 20:25:39.096396197 +0000 UTC m=+371.813826571,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4152-2-0-d-1c931fd560,}"
Jan 13 20:25:51.294675 systemd[1]: cri-containerd-6f613f78c9be6c3eb956ef16b63b522a236651cfd88bc4ab84203432ac9ddd0e.scope: Deactivated successfully.
Jan 13 20:25:51.295428 systemd[1]: cri-containerd-6f613f78c9be6c3eb956ef16b63b522a236651cfd88bc4ab84203432ac9ddd0e.scope: Consumed 2.540s CPU time, 15.6M memory peak, 0B memory swap peak.
Jan 13 20:25:51.321628 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6f613f78c9be6c3eb956ef16b63b522a236651cfd88bc4ab84203432ac9ddd0e-rootfs.mount: Deactivated successfully.
Jan 13 20:25:51.330569 containerd[1499]: time="2025-01-13T20:25:51.330486568Z" level=info msg="shim disconnected" id=6f613f78c9be6c3eb956ef16b63b522a236651cfd88bc4ab84203432ac9ddd0e namespace=k8s.io
Jan 13 20:25:51.331296 containerd[1499]: time="2025-01-13T20:25:51.331255076Z" level=warning msg="cleaning up after shim disconnected" id=6f613f78c9be6c3eb956ef16b63b522a236651cfd88bc4ab84203432ac9ddd0e namespace=k8s.io
Jan 13 20:25:51.331349 containerd[1499]: time="2025-01-13T20:25:51.331294520Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 13 20:25:51.470047 kubelet[2812]: I0113 20:25:51.469391 2812 scope.go:117] "RemoveContainer" containerID="6f613f78c9be6c3eb956ef16b63b522a236651cfd88bc4ab84203432ac9ddd0e"
Jan 13 20:25:51.472492 containerd[1499]: time="2025-01-13T20:25:51.472430468Z" level=info msg="CreateContainer within sandbox \"8c5bdca1c9a0bb3c58018073cbae6785c17a44ae15e612895dd4aca7e46f5cbe\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Jan 13 20:25:51.491470 containerd[1499]: time="2025-01-13T20:25:51.491416311Z" level=info msg="CreateContainer within sandbox \"8c5bdca1c9a0bb3c58018073cbae6785c17a44ae15e612895dd4aca7e46f5cbe\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"009026dfb4e1239528d37dd6a0ad0d9e11a8a4d49fb41bb6abfcf7322799eae8\""
Jan 13 20:25:51.492295 containerd[1499]: time="2025-01-13T20:25:51.492258626Z" level=info msg="StartContainer for \"009026dfb4e1239528d37dd6a0ad0d9e11a8a4d49fb41bb6abfcf7322799eae8\""
Jan 13 20:25:51.529659 systemd[1]: Started cri-containerd-009026dfb4e1239528d37dd6a0ad0d9e11a8a4d49fb41bb6abfcf7322799eae8.scope - libcontainer container 009026dfb4e1239528d37dd6a0ad0d9e11a8a4d49fb41bb6abfcf7322799eae8.
Jan 13 20:25:51.580820 containerd[1499]: time="2025-01-13T20:25:51.580659781Z" level=info msg="StartContainer for \"009026dfb4e1239528d37dd6a0ad0d9e11a8a4d49fb41bb6abfcf7322799eae8\" returns successfully"
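As an illustrative aside (not part of the log): the lease-update and event-send failures above share one symptom, reads from etcd at 10.0.0.2:2379 timing out. A minimal reachability probe with the etcd v3 client; only the endpoint comes from the log, and TLS material (required on a real control plane) is omitted here as an assumption.

package main

import (
    "context"
    "fmt"
    "time"

    clientv3 "go.etcd.io/etcd/client/v3"
)

func main() {
    cli, err := clientv3.New(clientv3.Config{
        Endpoints:   []string{"10.0.0.2:2379"}, // endpoint seen in the kubelet errors
        DialTimeout: 5 * time.Second,
    })
    if err != nil {
        panic(err)
    }
    defer cli.Close()

    ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
    defer cancel()

    // Status round-trips to the member; a hang or timeout here reproduces the
    // "read: connection timed out" failures the kubelet reported.
    resp, err := cli.Status(ctx, "10.0.0.2:2379")
    if err != nil {
        fmt.Println("etcd unreachable:", err)
        return
    }
    fmt.Printf("etcd ok, revision %d\n", resp.Header.Revision)
}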