Jan 30 13:22:39.864306 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Jan 30 13:22:39.864330 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241116 p3) 14.2.1 20241116, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT Wed Jan 29 09:30:22 -00 2025
Jan 30 13:22:39.864340 kernel: KASLR enabled
Jan 30 13:22:39.864346 kernel: efi: EFI v2.7 by Ubuntu distribution of EDK II
Jan 30 13:22:39.864351 kernel: efi: SMBIOS 3.0=0x139ed0000 MEMATTR=0x1390bb018 ACPI 2.0=0x136760018 RNG=0x13676e918 MEMRESERVE=0x136b41218
Jan 30 13:22:39.864357 kernel: random: crng init done
Jan 30 13:22:39.864364 kernel: secureboot: Secure boot disabled
Jan 30 13:22:39.864369 kernel: ACPI: Early table checksum verification disabled
Jan 30 13:22:39.864375 kernel: ACPI: RSDP 0x0000000136760018 000024 (v02 BOCHS )
Jan 30 13:22:39.868453 kernel: ACPI: XSDT 0x000000013676FE98 00006C (v01 BOCHS BXPC 00000001 01000013)
Jan 30 13:22:39.868490 kernel: ACPI: FACP 0x000000013676FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 13:22:39.868496 kernel: ACPI: DSDT 0x0000000136767518 001468 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 13:22:39.868502 kernel: ACPI: APIC 0x000000013676FC18 000108 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 13:22:39.868522 kernel: ACPI: PPTT 0x000000013676FD98 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 13:22:39.868530 kernel: ACPI: GTDT 0x000000013676D898 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 13:22:39.868544 kernel: ACPI: MCFG 0x000000013676FF98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 13:22:39.868551 kernel: ACPI: SPCR 0x000000013676E818 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 13:22:39.868557 kernel: ACPI: DBG2 0x000000013676E898 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 13:22:39.868564 kernel: ACPI: IORT 0x000000013676E418 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 13:22:39.868570 kernel: ACPI: BGRT 0x000000013676E798 000038 (v01 INTEL EDK2 00000002 01000013)
Jan 30 13:22:39.868576 kernel: ACPI: SPCR: console: pl011,mmio32,0x9000000,9600
Jan 30 13:22:39.868582 kernel: NUMA: Failed to initialise from firmware
Jan 30 13:22:39.868588 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x0000000139ffffff]
Jan 30 13:22:39.868595 kernel: NUMA: NODE_DATA [mem 0x13966e800-0x139673fff]
Jan 30 13:22:39.868601 kernel: Zone ranges:
Jan 30 13:22:39.868609 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff]
Jan 30 13:22:39.868615 kernel: DMA32 empty
Jan 30 13:22:39.868621 kernel: Normal [mem 0x0000000100000000-0x0000000139ffffff]
Jan 30 13:22:39.868627 kernel: Movable zone start for each node
Jan 30 13:22:39.868633 kernel: Early memory node ranges
Jan 30 13:22:39.868639 kernel: node 0: [mem 0x0000000040000000-0x000000013666ffff]
Jan 30 13:22:39.868645 kernel: node 0: [mem 0x0000000136670000-0x000000013667ffff]
Jan 30 13:22:39.868651 kernel: node 0: [mem 0x0000000136680000-0x000000013676ffff]
Jan 30 13:22:39.868658 kernel: node 0: [mem 0x0000000136770000-0x0000000136b3ffff]
Jan 30 13:22:39.868665 kernel: node 0: [mem 0x0000000136b40000-0x0000000139e1ffff]
Jan 30 13:22:39.868671 kernel: node 0: [mem 0x0000000139e20000-0x0000000139eaffff]
Jan 30 13:22:39.868677 kernel: node 0: [mem 0x0000000139eb0000-0x0000000139ebffff]
Jan 30 13:22:39.868684 kernel: node 0: [mem 0x0000000139ec0000-0x0000000139fdffff]
Jan 30 13:22:39.868691 kernel: node 0: [mem 0x0000000139fe0000-0x0000000139ffffff]
Jan 30 13:22:39.868697 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x0000000139ffffff]
Jan 30 13:22:39.868706 kernel: On node 0, zone Normal: 24576 pages in unavailable ranges
Jan 30 13:22:39.868712 kernel: psci: probing for conduit method from ACPI.
Jan 30 13:22:39.868719 kernel: psci: PSCIv1.1 detected in firmware.
Jan 30 13:22:39.868727 kernel: psci: Using standard PSCI v0.2 function IDs
Jan 30 13:22:39.868733 kernel: psci: Trusted OS migration not required
Jan 30 13:22:39.868740 kernel: psci: SMC Calling Convention v1.1
Jan 30 13:22:39.868746 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Jan 30 13:22:39.868753 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Jan 30 13:22:39.868759 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Jan 30 13:22:39.868766 kernel: pcpu-alloc: [0] 0 [0] 1
Jan 30 13:22:39.868773 kernel: Detected PIPT I-cache on CPU0
Jan 30 13:22:39.868779 kernel: CPU features: detected: GIC system register CPU interface
Jan 30 13:22:39.868786 kernel: CPU features: detected: Hardware dirty bit management
Jan 30 13:22:39.868794 kernel: CPU features: detected: Spectre-v4
Jan 30 13:22:39.868800 kernel: CPU features: detected: Spectre-BHB
Jan 30 13:22:39.868807 kernel: CPU features: kernel page table isolation forced ON by KASLR
Jan 30 13:22:39.868813 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Jan 30 13:22:39.868819 kernel: CPU features: detected: ARM erratum 1418040
Jan 30 13:22:39.868826 kernel: CPU features: detected: SSBS not fully self-synchronizing
Jan 30 13:22:39.868832 kernel: alternatives: applying boot alternatives
Jan 30 13:22:39.868840 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=e6957044c3256d96283265c263579aa4275d1d707b02496fcb081f5fc6356346
Jan 30 13:22:39.868847 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jan 30 13:22:39.868854 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 30 13:22:39.868860 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 30 13:22:39.868868 kernel: Fallback order for Node 0: 0
Jan 30 13:22:39.868875 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1008000
Jan 30 13:22:39.868881 kernel: Policy zone: Normal
Jan 30 13:22:39.868887 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 30 13:22:39.868894 kernel: software IO TLB: area num 2.
Jan 30 13:22:39.868900 kernel: software IO TLB: mapped [mem 0x00000000fbfff000-0x00000000fffff000] (64MB)
Jan 30 13:22:39.868907 kernel: Memory: 3882292K/4096000K available (10304K kernel code, 2186K rwdata, 8092K rodata, 39936K init, 897K bss, 213708K reserved, 0K cma-reserved)
Jan 30 13:22:39.868914 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jan 30 13:22:39.868920 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 30 13:22:39.868927 kernel: rcu: RCU event tracing is enabled.
Jan 30 13:22:39.868934 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jan 30 13:22:39.868941 kernel: Trampoline variant of Tasks RCU enabled.
Jan 30 13:22:39.868949 kernel: Tracing variant of Tasks RCU enabled.
Jan 30 13:22:39.868956 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 30 13:22:39.868962 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jan 30 13:22:39.868969 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Jan 30 13:22:39.868975 kernel: GICv3: 256 SPIs implemented
Jan 30 13:22:39.868982 kernel: GICv3: 0 Extended SPIs implemented
Jan 30 13:22:39.868988 kernel: Root IRQ handler: gic_handle_irq
Jan 30 13:22:39.868995 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Jan 30 13:22:39.869001 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Jan 30 13:22:39.869007 kernel: ITS [mem 0x08080000-0x0809ffff]
Jan 30 13:22:39.869014 kernel: ITS@0x0000000008080000: allocated 8192 Devices @1000c0000 (indirect, esz 8, psz 64K, shr 1)
Jan 30 13:22:39.869022 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @1000d0000 (flat, esz 8, psz 64K, shr 1)
Jan 30 13:22:39.869029 kernel: GICv3: using LPI property table @0x00000001000e0000
Jan 30 13:22:39.869035 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000001000f0000
Jan 30 13:22:39.869042 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 30 13:22:39.869048 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jan 30 13:22:39.869055 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Jan 30 13:22:39.869061 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Jan 30 13:22:39.869068 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Jan 30 13:22:39.869074 kernel: Console: colour dummy device 80x25
Jan 30 13:22:39.869081 kernel: ACPI: Core revision 20230628
Jan 30 13:22:39.869088 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Jan 30 13:22:39.869096 kernel: pid_max: default: 32768 minimum: 301
Jan 30 13:22:39.869103 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 30 13:22:39.869109 kernel: landlock: Up and running.
Jan 30 13:22:39.869116 kernel: SELinux: Initializing.
Jan 30 13:22:39.869122 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 30 13:22:39.869129 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 30 13:22:39.869136 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 30 13:22:39.869143 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 30 13:22:39.869150 kernel: rcu: Hierarchical SRCU implementation.
Jan 30 13:22:39.869158 kernel: rcu: Max phase no-delay instances is 400.
Jan 30 13:22:39.869165 kernel: Platform MSI: ITS@0x8080000 domain created
Jan 30 13:22:39.869171 kernel: PCI/MSI: ITS@0x8080000 domain created
Jan 30 13:22:39.869178 kernel: Remapping and enabling EFI services.
Jan 30 13:22:39.869185 kernel: smp: Bringing up secondary CPUs ...
Jan 30 13:22:39.869191 kernel: Detected PIPT I-cache on CPU1
Jan 30 13:22:39.869198 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Jan 30 13:22:39.869205 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000100100000
Jan 30 13:22:39.869211 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jan 30 13:22:39.869218 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Jan 30 13:22:39.869226 kernel: smp: Brought up 1 node, 2 CPUs
Jan 30 13:22:39.869238 kernel: SMP: Total of 2 processors activated.
Jan 30 13:22:39.869247 kernel: CPU features: detected: 32-bit EL0 Support
Jan 30 13:22:39.869254 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Jan 30 13:22:39.869262 kernel: CPU features: detected: Common not Private translations
Jan 30 13:22:39.869269 kernel: CPU features: detected: CRC32 instructions
Jan 30 13:22:39.869276 kernel: CPU features: detected: Enhanced Virtualization Traps
Jan 30 13:22:39.869283 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Jan 30 13:22:39.869291 kernel: CPU features: detected: LSE atomic instructions
Jan 30 13:22:39.869298 kernel: CPU features: detected: Privileged Access Never
Jan 30 13:22:39.869305 kernel: CPU features: detected: RAS Extension Support
Jan 30 13:22:39.869313 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Jan 30 13:22:39.869320 kernel: CPU: All CPU(s) started at EL1
Jan 30 13:22:39.869326 kernel: alternatives: applying system-wide alternatives
Jan 30 13:22:39.869333 kernel: devtmpfs: initialized
Jan 30 13:22:39.869341 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 30 13:22:39.869349 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jan 30 13:22:39.869356 kernel: pinctrl core: initialized pinctrl subsystem
Jan 30 13:22:39.869363 kernel: SMBIOS 3.0.0 present.
Jan 30 13:22:39.869370 kernel: DMI: Hetzner vServer/KVM Virtual Machine, BIOS 20171111 11/11/2017
Jan 30 13:22:39.869377 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 30 13:22:39.869385 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Jan 30 13:22:39.869392 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jan 30 13:22:39.869399 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jan 30 13:22:39.869406 kernel: audit: initializing netlink subsys (disabled)
Jan 30 13:22:39.869436 kernel: audit: type=2000 audit(0.012:1): state=initialized audit_enabled=0 res=1
Jan 30 13:22:39.869444 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 30 13:22:39.869451 kernel: cpuidle: using governor menu
Jan 30 13:22:39.869458 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Jan 30 13:22:39.869465 kernel: ASID allocator initialised with 32768 entries
Jan 30 13:22:39.869472 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 30 13:22:39.869479 kernel: Serial: AMBA PL011 UART driver
Jan 30 13:22:39.869486 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Jan 30 13:22:39.869493 kernel: Modules: 0 pages in range for non-PLT usage
Jan 30 13:22:39.869504 kernel: Modules: 508880 pages in range for PLT usage
Jan 30 13:22:39.869520 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 30 13:22:39.869527 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Jan 30 13:22:39.869534 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Jan 30 13:22:39.869541 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Jan 30 13:22:39.869548 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 30 13:22:39.869555 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Jan 30 13:22:39.869562 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Jan 30 13:22:39.869569 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Jan 30 13:22:39.869579 kernel: ACPI: Added _OSI(Module Device)
Jan 30 13:22:39.869586 kernel: ACPI: Added _OSI(Processor Device)
Jan 30 13:22:39.869597 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jan 30 13:22:39.869605 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 30 13:22:39.869612 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 30 13:22:39.869619 kernel: ACPI: Interpreter enabled
Jan 30 13:22:39.869626 kernel: ACPI: Using GIC for interrupt routing
Jan 30 13:22:39.869633 kernel: ACPI: MCFG table detected, 1 entries
Jan 30 13:22:39.869640 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Jan 30 13:22:39.869649 kernel: printk: console [ttyAMA0] enabled
Jan 30 13:22:39.869656 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 30 13:22:39.869843 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jan 30 13:22:39.869919 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Jan 30 13:22:39.869983 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Jan 30 13:22:39.870048 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Jan 30 13:22:39.870111 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Jan 30 13:22:39.870124 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Jan 30 13:22:39.870131 kernel: PCI host bridge to bus 0000:00
Jan 30 13:22:39.870203 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Jan 30 13:22:39.870263 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Jan 30 13:22:39.870321 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Jan 30 13:22:39.870388 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 30 13:22:39.872596 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Jan 30 13:22:39.872712 kernel: pci 0000:00:01.0: [1af4:1050] type 00 class 0x038000
Jan 30 13:22:39.872800 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x11289000-0x11289fff]
Jan 30 13:22:39.872899 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000600000-0x8000603fff 64bit pref]
Jan 30 13:22:39.873008 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400
Jan 30 13:22:39.873096 kernel: pci 0000:00:02.0: reg 0x10: [mem 0x11288000-0x11288fff]
Jan 30 13:22:39.873192 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400
Jan 30 13:22:39.873285 kernel: pci 0000:00:02.1: reg 0x10: [mem 0x11287000-0x11287fff]
Jan 30 13:22:39.873381 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400
Jan 30 13:22:39.873663 kernel: pci 0000:00:02.2: reg 0x10: [mem 0x11286000-0x11286fff]
Jan 30 13:22:39.873766 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400
Jan 30 13:22:39.873853 kernel: pci 0000:00:02.3: reg 0x10: [mem 0x11285000-0x11285fff]
Jan 30 13:22:39.873947 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400
Jan 30 13:22:39.874031 kernel: pci 0000:00:02.4: reg 0x10: [mem 0x11284000-0x11284fff]
Jan 30 13:22:39.874130 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400
Jan 30 13:22:39.874215 kernel: pci 0000:00:02.5: reg 0x10: [mem 0x11283000-0x11283fff]
Jan 30 13:22:39.874308 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400
Jan 30 13:22:39.874394 kernel: pci 0000:00:02.6: reg 0x10: [mem 0x11282000-0x11282fff]
Jan 30 13:22:39.876591 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400
Jan 30 13:22:39.876710 kernel: pci 0000:00:02.7: reg 0x10: [mem 0x11281000-0x11281fff]
Jan 30 13:22:39.876808 kernel: pci 0000:00:03.0: [1b36:000c] type 01 class 0x060400
Jan 30 13:22:39.876902 kernel: pci 0000:00:03.0: reg 0x10: [mem 0x11280000-0x11280fff]
Jan 30 13:22:39.877006 kernel: pci 0000:00:04.0: [1b36:0002] type 00 class 0x070002
Jan 30 13:22:39.877096 kernel: pci 0000:00:04.0: reg 0x10: [io 0x0000-0x0007]
Jan 30 13:22:39.877199 kernel: pci 0000:01:00.0: [1af4:1041] type 00 class 0x020000
Jan 30 13:22:39.877290 kernel: pci 0000:01:00.0: reg 0x14: [mem 0x11000000-0x11000fff]
Jan 30 13:22:39.877404 kernel: pci 0000:01:00.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Jan 30 13:22:39.877556 kernel: pci 0000:01:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref]
Jan 30 13:22:39.877643 kernel: pci 0000:02:00.0: [1b36:000d] type 00 class 0x0c0330
Jan 30 13:22:39.877769 kernel: pci 0000:02:00.0: reg 0x10: [mem 0x10e00000-0x10e03fff 64bit]
Jan 30 13:22:39.877851 kernel: pci 0000:03:00.0: [1af4:1043] type 00 class 0x078000
Jan 30 13:22:39.877921 kernel: pci 0000:03:00.0: reg 0x14: [mem 0x10c00000-0x10c00fff]
Jan 30 13:22:39.877999 kernel: pci 0000:03:00.0: reg 0x20: [mem 0x8000100000-0x8000103fff 64bit pref]
Jan 30 13:22:39.878075 kernel: pci 0000:04:00.0: [1af4:1045] type 00 class 0x00ff00
Jan 30 13:22:39.878143 kernel: pci 0000:04:00.0: reg 0x20: [mem 0x8000200000-0x8000203fff 64bit pref]
Jan 30 13:22:39.878218 kernel: pci 0000:05:00.0: [1af4:1044] type 00 class 0x00ff00
Jan 30 13:22:39.878286 kernel: pci 0000:05:00.0: reg 0x14: [mem 0x10800000-0x10800fff]
Jan 30 13:22:39.878353 kernel: pci 0000:05:00.0: reg 0x20: [mem 0x8000300000-0x8000303fff 64bit pref]
Jan 30 13:22:39.880553 kernel: pci 0000:06:00.0: [1af4:1048] type 00 class 0x010000
Jan 30 13:22:39.880701 kernel: pci 0000:06:00.0: reg 0x14: [mem 0x10600000-0x10600fff]
Jan 30 13:22:39.880774 kernel: pci 0000:06:00.0: reg 0x20: [mem 0x8000400000-0x8000403fff 64bit pref]
Jan 30 13:22:39.880850 kernel: pci 0000:07:00.0: [1af4:1041] type 00 class 0x020000
Jan 30 13:22:39.880917 kernel: pci 0000:07:00.0: reg 0x14: [mem 0x10400000-0x10400fff]
Jan 30 13:22:39.880985 kernel: pci 0000:07:00.0: reg 0x20: [mem 0x8000500000-0x8000503fff 64bit pref]
Jan 30 13:22:39.881052 kernel: pci 0000:07:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref]
Jan 30 13:22:39.881126 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000
Jan 30 13:22:39.881191 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 01] add_size 100000 add_align 100000
Jan 30 13:22:39.881255 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff] to [bus 01] add_size 100000 add_align 100000
Jan 30 13:22:39.881323 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000
Jan 30 13:22:39.881388 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000
Jan 30 13:22:39.881475 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x001fffff] to [bus 02] add_size 100000 add_align 100000
Jan 30 13:22:39.881598 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000
Jan 30 13:22:39.881674 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 03] add_size 100000 add_align 100000
Jan 30 13:22:39.881738 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff] to [bus 03] add_size 100000 add_align 100000
Jan 30 13:22:39.881808 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000
Jan 30 13:22:39.881872 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 04] add_size 100000 add_align 100000
Jan 30 13:22:39.881936 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000
Jan 30 13:22:39.882004 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000
Jan 30 13:22:39.882068 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 05] add_size 100000 add_align 100000
Jan 30 13:22:39.882133 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x001fffff] to [bus 05] add_size 100000 add_align 100000
Jan 30 13:22:39.882207 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000
Jan 30 13:22:39.882271 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 06] add_size 100000 add_align 100000
Jan 30 13:22:39.882336 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff] to [bus 06] add_size 100000 add_align 100000
Jan 30 13:22:39.882404 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
Jan 30 13:22:39.883704 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 07] add_size 100000 add_align 100000
Jan 30 13:22:39.883780 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff] to [bus 07] add_size 100000 add_align 100000
Jan 30 13:22:39.883851 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
Jan 30 13:22:39.883924 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 08] add_size 200000 add_align 100000
Jan 30 13:22:39.883987 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff] to [bus 08] add_size 200000 add_align 100000
Jan 30 13:22:39.884056 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
Jan 30 13:22:39.884119 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 09] add_size 200000 add_align 100000
Jan 30 13:22:39.884183 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 09] add_size 200000 add_align 100000
Jan 30 13:22:39.884249 kernel: pci 0000:00:02.0: BAR 14: assigned [mem 0x10000000-0x101fffff]
Jan 30 13:22:39.884313 kernel: pci 0000:00:02.0: BAR 15: assigned [mem 0x8000000000-0x80001fffff 64bit pref]
Jan 30 13:22:39.884383 kernel: pci 0000:00:02.1: BAR 14: assigned [mem 0x10200000-0x103fffff]
Jan 30 13:22:39.884476 kernel: pci 0000:00:02.1: BAR 15: assigned [mem 0x8000200000-0x80003fffff 64bit pref]
Jan 30 13:22:39.884596 kernel: pci 0000:00:02.2: BAR 14: assigned [mem 0x10400000-0x105fffff]
Jan 30 13:22:39.884673 kernel: pci 0000:00:02.2: BAR 15: assigned [mem 0x8000400000-0x80005fffff 64bit pref]
Jan 30 13:22:39.884740 kernel: pci 0000:00:02.3: BAR 14: assigned [mem 0x10600000-0x107fffff]
Jan 30 13:22:39.884805 kernel: pci 0000:00:02.3: BAR 15: assigned [mem 0x8000600000-0x80007fffff 64bit pref]
Jan 30 13:22:39.884873 kernel: pci 0000:00:02.4: BAR 14: assigned [mem 0x10800000-0x109fffff]
Jan 30 13:22:39.884942 kernel: pci 0000:00:02.4: BAR 15: assigned [mem 0x8000800000-0x80009fffff 64bit pref]
Jan 30 13:22:39.885010 kernel: pci 0000:00:02.5: BAR 14: assigned [mem 0x10a00000-0x10bfffff]
Jan 30 13:22:39.885075 kernel: pci 0000:00:02.5: BAR 15: assigned [mem 0x8000a00000-0x8000bfffff 64bit pref]
Jan 30 13:22:39.885139 kernel: pci 0000:00:02.6: BAR 14: assigned [mem 0x10c00000-0x10dfffff]
Jan 30 13:22:39.885202 kernel: pci 0000:00:02.6: BAR 15: assigned [mem 0x8000c00000-0x8000dfffff 64bit pref]
Jan 30 13:22:39.885266 kernel: pci 0000:00:02.7: BAR 14: assigned [mem 0x10e00000-0x10ffffff]
Jan 30 13:22:39.885329 kernel: pci 0000:00:02.7: BAR 15: assigned [mem 0x8000e00000-0x8000ffffff 64bit pref]
Jan 30 13:22:39.885398 kernel: pci 0000:00:03.0: BAR 14: assigned [mem 0x11000000-0x111fffff]
Jan 30 13:22:39.888559 kernel: pci 0000:00:03.0: BAR 15: assigned [mem 0x8001000000-0x80011fffff 64bit pref]
Jan 30 13:22:39.888667 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8001200000-0x8001203fff 64bit pref]
Jan 30 13:22:39.888740 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x11200000-0x11200fff]
Jan 30 13:22:39.888811 kernel: pci 0000:00:02.0: BAR 0: assigned [mem 0x11201000-0x11201fff]
Jan 30 13:22:39.888876 kernel: pci 0000:00:02.0: BAR 13: assigned [io 0x1000-0x1fff]
Jan 30 13:22:39.888945 kernel: pci 0000:00:02.1: BAR 0: assigned [mem 0x11202000-0x11202fff]
Jan 30 13:22:39.889019 kernel: pci 0000:00:02.1: BAR 13: assigned [io 0x2000-0x2fff]
Jan 30 13:22:39.889085 kernel: pci 0000:00:02.2: BAR 0: assigned [mem 0x11203000-0x11203fff]
Jan 30 13:22:39.889150 kernel: pci 0000:00:02.2: BAR 13: assigned [io 0x3000-0x3fff]
Jan 30 13:22:39.889218 kernel: pci 0000:00:02.3: BAR 0: assigned [mem 0x11204000-0x11204fff]
Jan 30 13:22:39.889283 kernel: pci 0000:00:02.3: BAR 13: assigned [io 0x4000-0x4fff]
Jan 30 13:22:39.889350 kernel: pci 0000:00:02.4: BAR 0: assigned [mem 0x11205000-0x11205fff]
Jan 30 13:22:39.889489 kernel: pci 0000:00:02.4: BAR 13: assigned [io 0x5000-0x5fff]
Jan 30 13:22:39.889587 kernel: pci 0000:00:02.5: BAR 0: assigned [mem 0x11206000-0x11206fff]
Jan 30 13:22:39.889668 kernel: pci 0000:00:02.5: BAR 13: assigned [io 0x6000-0x6fff]
Jan 30 13:22:39.889736 kernel: pci 0000:00:02.6: BAR 0: assigned [mem 0x11207000-0x11207fff]
Jan 30 13:22:39.889801 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x7000-0x7fff]
Jan 30 13:22:39.889868 kernel: pci 0000:00:02.7: BAR 0: assigned [mem 0x11208000-0x11208fff]
Jan 30 13:22:39.889932 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x8000-0x8fff]
Jan 30 13:22:39.889997 kernel: pci 0000:00:03.0: BAR 0: assigned [mem 0x11209000-0x11209fff]
Jan 30 13:22:39.890060 kernel: pci 0000:00:03.0: BAR 13: assigned [io 0x9000-0x9fff]
Jan 30 13:22:39.890130 kernel: pci 0000:00:04.0: BAR 0: assigned [io 0xa000-0xa007]
Jan 30 13:22:39.890205 kernel: pci 0000:01:00.0: BAR 6: assigned [mem 0x10000000-0x1007ffff pref]
Jan 30 13:22:39.890277 kernel: pci 0000:01:00.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Jan 30 13:22:39.890353 kernel: pci 0000:01:00.0: BAR 1: assigned [mem 0x10080000-0x10080fff]
Jan 30 13:22:39.890473 kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
Jan 30 13:22:39.890587 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff]
Jan 30 13:22:39.890655 kernel: pci 0000:00:02.0: bridge window [mem 0x10000000-0x101fffff]
Jan 30 13:22:39.890719 kernel: pci 0000:00:02.0: bridge window [mem 0x8000000000-0x80001fffff 64bit pref]
Jan 30 13:22:39.890791 kernel: pci 0000:02:00.0: BAR 0: assigned [mem 0x10200000-0x10203fff 64bit]
Jan 30 13:22:39.890863 kernel: pci 0000:00:02.1: PCI bridge to [bus 02]
Jan 30 13:22:39.890927 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff]
Jan 30 13:22:39.890993 kernel: pci 0000:00:02.1: bridge window [mem 0x10200000-0x103fffff]
Jan 30 13:22:39.891056 kernel: pci 0000:00:02.1: bridge window [mem 0x8000200000-0x80003fffff 64bit pref]
Jan 30 13:22:39.891130 kernel: pci 0000:03:00.0: BAR 4: assigned [mem 0x8000400000-0x8000403fff 64bit pref]
Jan 30 13:22:39.891199 kernel: pci 0000:03:00.0: BAR 1: assigned [mem 0x10400000-0x10400fff]
Jan 30 13:22:39.891264 kernel: pci 0000:00:02.2: PCI bridge to [bus 03]
Jan 30 13:22:39.891329 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff]
Jan 30 13:22:39.891392 kernel: pci 0000:00:02.2: bridge window [mem 0x10400000-0x105fffff]
Jan 30 13:22:39.891522 kernel: pci 0000:00:02.2: bridge window [mem 0x8000400000-0x80005fffff 64bit pref]
Jan 30 13:22:39.891603 kernel: pci 0000:04:00.0: BAR 4: assigned [mem 0x8000600000-0x8000603fff 64bit pref]
Jan 30 13:22:39.891670 kernel: pci 0000:00:02.3: PCI bridge to [bus 04]
Jan 30 13:22:39.891739 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff]
Jan 30 13:22:39.891801 kernel: pci 0000:00:02.3: bridge window [mem 0x10600000-0x107fffff]
Jan 30 13:22:39.891863 kernel: pci 0000:00:02.3: bridge window [mem 0x8000600000-0x80007fffff 64bit pref]
Jan 30 13:22:39.891932 kernel: pci 0000:05:00.0: BAR 4: assigned [mem 0x8000800000-0x8000803fff 64bit pref]
Jan 30 13:22:39.891999 kernel: pci 0000:05:00.0: BAR 1: assigned [mem 0x10800000-0x10800fff]
Jan 30 13:22:39.892063 kernel: pci 0000:00:02.4: PCI bridge to [bus 05]
Jan 30 13:22:39.892129 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff]
Jan 30 13:22:39.892191 kernel: pci 0000:00:02.4: bridge window [mem 0x10800000-0x109fffff]
Jan 30 13:22:39.892256 kernel: pci 0000:00:02.4: bridge window [mem 0x8000800000-0x80009fffff 64bit pref]
Jan 30 13:22:39.892326 kernel: pci 0000:06:00.0: BAR 4: assigned [mem 0x8000a00000-0x8000a03fff 64bit pref]
Jan 30 13:22:39.892393 kernel: pci 0000:06:00.0: BAR 1: assigned [mem 0x10a00000-0x10a00fff]
Jan 30 13:22:39.892545 kernel: pci 0000:00:02.5: PCI bridge to [bus 06]
Jan 30 13:22:39.892616 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff]
Jan 30 13:22:39.892685 kernel: pci 0000:00:02.5: bridge window [mem 0x10a00000-0x10bfffff]
Jan 30 13:22:39.892751 kernel: pci 0000:00:02.5: bridge window [mem 0x8000a00000-0x8000bfffff 64bit pref]
Jan 30 13:22:39.892822 kernel: pci 0000:07:00.0: BAR 6: assigned [mem 0x10c00000-0x10c7ffff pref]
Jan 30 13:22:39.892894 kernel: pci 0000:07:00.0: BAR 4: assigned [mem 0x8000c00000-0x8000c03fff 64bit pref]
Jan 30 13:22:39.892960 kernel: pci 0000:07:00.0: BAR 1: assigned [mem 0x10c80000-0x10c80fff]
Jan 30 13:22:39.893026 kernel: pci 0000:00:02.6: PCI bridge to [bus 07]
Jan 30 13:22:39.893089 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff]
Jan 30 13:22:39.893151 kernel: pci 0000:00:02.6: bridge window [mem 0x10c00000-0x10dfffff]
Jan 30 13:22:39.893215 kernel: pci 0000:00:02.6: bridge window [mem 0x8000c00000-0x8000dfffff 64bit pref]
Jan 30 13:22:39.893280 kernel: pci 0000:00:02.7: PCI bridge to [bus 08]
Jan 30 13:22:39.893343 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff]
Jan 30 13:22:39.893408 kernel: pci 0000:00:02.7: bridge window [mem 0x10e00000-0x10ffffff]
Jan 30 13:22:39.893484 kernel: pci 0000:00:02.7: bridge window [mem 0x8000e00000-0x8000ffffff 64bit pref]
Jan 30 13:22:39.893591 kernel: pci 0000:00:03.0: PCI bridge to [bus 09]
Jan 30 13:22:39.893660 kernel: pci 0000:00:03.0: bridge window [io 0x9000-0x9fff]
Jan 30 13:22:39.893724 kernel: pci 0000:00:03.0: bridge window [mem 0x11000000-0x111fffff]
Jan 30 13:22:39.893788 kernel: pci 0000:00:03.0: bridge window [mem 0x8001000000-0x80011fffff 64bit pref]
Jan 30 13:22:39.893855 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Jan 30 13:22:39.893913 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Jan 30 13:22:39.893975 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Jan 30 13:22:39.894050 kernel: pci_bus 0000:01: resource 0 [io 0x1000-0x1fff]
Jan 30 13:22:39.894112 kernel: pci_bus 0000:01: resource 1 [mem 0x10000000-0x101fffff]
Jan 30 13:22:39.894171 kernel: pci_bus 0000:01: resource 2 [mem 0x8000000000-0x80001fffff 64bit pref]
Jan 30 13:22:39.894238 kernel: pci_bus 0000:02: resource 0 [io 0x2000-0x2fff]
Jan 30 13:22:39.894298 kernel: pci_bus 0000:02: resource 1 [mem 0x10200000-0x103fffff]
Jan 30 13:22:39.894361 kernel: pci_bus 0000:02: resource 2 [mem 0x8000200000-0x80003fffff 64bit pref]
Jan 30 13:22:39.894454 kernel: pci_bus 0000:03: resource 0 [io 0x3000-0x3fff]
Jan 30 13:22:39.894538 kernel: pci_bus 0000:03: resource 1 [mem 0x10400000-0x105fffff]
Jan 30 13:22:39.894602 kernel: pci_bus 0000:03: resource 2 [mem 0x8000400000-0x80005fffff 64bit pref]
Jan 30 13:22:39.894672 kernel: pci_bus 0000:04: resource 0 [io 0x4000-0x4fff]
Jan 30 13:22:39.894733 kernel: pci_bus 0000:04: resource 1 [mem 0x10600000-0x107fffff]
Jan 30 13:22:39.894792 kernel: pci_bus 0000:04: resource 2 [mem 0x8000600000-0x80007fffff 64bit pref]
Jan 30 13:22:39.894862 kernel: pci_bus 0000:05: resource 0 [io 0x5000-0x5fff]
Jan 30 13:22:39.894922 kernel: pci_bus 0000:05: resource 1 [mem 0x10800000-0x109fffff]
Jan 30 13:22:39.894984 kernel: pci_bus 0000:05: resource 2 [mem 0x8000800000-0x80009fffff 64bit pref]
Jan 30 13:22:39.895051 kernel: pci_bus 0000:06: resource 0 [io 0x6000-0x6fff]
Jan 30 13:22:39.895114 kernel: pci_bus 0000:06: resource 1 [mem 0x10a00000-0x10bfffff]
Jan 30 13:22:39.895173 kernel: pci_bus 0000:06: resource 2 [mem 0x8000a00000-0x8000bfffff 64bit pref]
Jan 30 13:22:39.895242 kernel: pci_bus 0000:07: resource 0 [io 0x7000-0x7fff]
Jan 30 13:22:39.895301 kernel: pci_bus 0000:07: resource 1 [mem 0x10c00000-0x10dfffff]
Jan 30 13:22:39.895362 kernel: pci_bus 0000:07: resource 2 [mem 0x8000c00000-0x8000dfffff 64bit pref]
Jan 30 13:22:39.895443 kernel: pci_bus 0000:08: resource 0 [io 0x8000-0x8fff]
Jan 30 13:22:39.895521 kernel: pci_bus 0000:08: resource 1 [mem 0x10e00000-0x10ffffff]
Jan 30 13:22:39.895595 kernel: pci_bus 0000:08: resource 2 [mem 0x8000e00000-0x8000ffffff 64bit pref]
Jan 30 13:22:39.895663 kernel: pci_bus 0000:09: resource 0 [io 0x9000-0x9fff]
Jan 30 13:22:39.895724 kernel: pci_bus 0000:09: resource 1 [mem 0x11000000-0x111fffff]
Jan 30 13:22:39.895783 kernel: pci_bus 0000:09: resource 2 [mem 0x8001000000-0x80011fffff 64bit pref]
Jan 30 13:22:39.895793 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Jan 30 13:22:39.895801 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Jan 30 13:22:39.895808 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Jan 30 13:22:39.895818 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Jan 30 13:22:39.895825 kernel: iommu: Default domain type: Translated
Jan 30 13:22:39.895833 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Jan 30 13:22:39.895840 kernel: efivars: Registered efivars operations
Jan 30 13:22:39.895848 kernel: vgaarb: loaded
Jan 30 13:22:39.895855 kernel: clocksource: Switched to clocksource arch_sys_counter
Jan 30 13:22:39.895863 kernel: VFS: Disk quotas dquot_6.6.0
Jan 30 13:22:39.895871 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 30 13:22:39.895879 kernel: pnp: PnP ACPI init
Jan 30 13:22:39.895967 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Jan 30 13:22:39.895981 kernel: pnp: PnP ACPI: found 1 devices
Jan 30 13:22:39.895989 kernel: NET: Registered PF_INET protocol family
Jan 30 13:22:39.895996 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 30 13:22:39.896004 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jan 30 13:22:39.896011 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 30 13:22:39.896021 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 30 13:22:39.896028 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jan 30 13:22:39.896038 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jan 30 13:22:39.896047 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 30 13:22:39.896056 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 30 13:22:39.896065 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 30 13:22:39.896151 kernel: pci 0000:02:00.0: enabling device (0000 -> 0002)
Jan 30 13:22:39.896163 kernel: PCI: CLS 0 bytes, default 64
Jan 30 13:22:39.896171 kernel: kvm [1]: HYP mode not available
Jan 30 13:22:39.896179 kernel: Initialise system trusted keyrings
Jan 30 13:22:39.896188 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jan 30 13:22:39.896196 kernel: Key type asymmetric registered
Jan 30 13:22:39.896208 kernel: Asymmetric key parser 'x509' registered
Jan 30 13:22:39.896215 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jan 30 13:22:39.896223 kernel: io scheduler mq-deadline registered
Jan 30 13:22:39.896232 kernel: io scheduler kyber registered
Jan 30 13:22:39.896241 kernel: io scheduler bfq registered
Jan 30 13:22:39.896250 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37
Jan 30 13:22:39.896327 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 50
Jan 30 13:22:39.896394 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 50
Jan 30 13:22:39.896570 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Jan 30 13:22:39.896665 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 51
Jan 30 13:22:39.896730 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 51
Jan 30 13:22:39.896800 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis-
LLActRep+ Jan 30 13:22:39.896867 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 52 Jan 30 13:22:39.896932 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 52 Jan 30 13:22:39.896999 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 30 13:22:39.897065 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 53 Jan 30 13:22:39.897130 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 53 Jan 30 13:22:39.897193 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 30 13:22:39.897259 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 54 Jan 30 13:22:39.897322 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 54 Jan 30 13:22:39.897389 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 30 13:22:39.897471 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 55 Jan 30 13:22:39.897553 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 55 Jan 30 13:22:39.897620 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 30 13:22:39.897687 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 56 Jan 30 13:22:39.897751 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 56 Jan 30 13:22:39.897820 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 30 13:22:39.897886 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 57 Jan 30 13:22:39.897952 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 57 Jan 30 13:22:39.898019 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 30 
13:22:39.898030 kernel: ACPI: \_SB_.PCI0.GSI3: Enabled at IRQ 38 Jan 30 13:22:39.898094 kernel: pcieport 0000:00:03.0: PME: Signaling with IRQ 58 Jan 30 13:22:39.898164 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 58 Jan 30 13:22:39.898229 kernel: pcieport 0000:00:03.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 30 13:22:39.898240 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Jan 30 13:22:39.898247 kernel: ACPI: button: Power Button [PWRB] Jan 30 13:22:39.898255 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Jan 30 13:22:39.898325 kernel: virtio-pci 0000:04:00.0: enabling device (0000 -> 0002) Jan 30 13:22:39.898397 kernel: virtio-pci 0000:07:00.0: enabling device (0000 -> 0002) Jan 30 13:22:39.898407 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 30 13:22:39.898426 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35 Jan 30 13:22:39.898495 kernel: serial 0000:00:04.0: enabling device (0000 -> 0001) Jan 30 13:22:39.898514 kernel: 0000:00:04.0: ttyS0 at I/O 0xa000 (irq = 45, base_baud = 115200) is a 16550A Jan 30 13:22:39.898523 kernel: thunder_xcv, ver 1.0 Jan 30 13:22:39.898530 kernel: thunder_bgx, ver 1.0 Jan 30 13:22:39.898537 kernel: nicpf, ver 1.0 Jan 30 13:22:39.898545 kernel: nicvf, ver 1.0 Jan 30 13:22:39.898628 kernel: rtc-efi rtc-efi.0: registered as rtc0 Jan 30 13:22:39.898697 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-01-30T13:22:39 UTC (1738243359) Jan 30 13:22:39.898707 kernel: hid: raw HID events driver (C) Jiri Kosina Jan 30 13:22:39.898714 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available Jan 30 13:22:39.898722 kernel: watchdog: Delayed init of the lockup detector failed: -19 Jan 30 13:22:39.898729 kernel: watchdog: Hard watchdog permanently disabled Jan 30 13:22:39.898737 kernel: NET: Registered PF_INET6 protocol family Jan 30 13:22:39.898744 kernel: Segment 
Routing with IPv6 Jan 30 13:22:39.898751 kernel: In-situ OAM (IOAM) with IPv6 Jan 30 13:22:39.898759 kernel: NET: Registered PF_PACKET protocol family Jan 30 13:22:39.898769 kernel: Key type dns_resolver registered Jan 30 13:22:39.898776 kernel: registered taskstats version 1 Jan 30 13:22:39.898783 kernel: Loading compiled-in X.509 certificates Jan 30 13:22:39.898791 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: c31663d2c680b3b306c17f44b5295280d3a2e28a' Jan 30 13:22:39.898798 kernel: Key type .fscrypt registered Jan 30 13:22:39.898806 kernel: Key type fscrypt-provisioning registered Jan 30 13:22:39.898814 kernel: ima: No TPM chip found, activating TPM-bypass! Jan 30 13:22:39.898821 kernel: ima: Allocated hash algorithm: sha1 Jan 30 13:22:39.898828 kernel: ima: No architecture policies found Jan 30 13:22:39.898838 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Jan 30 13:22:39.898845 kernel: clk: Disabling unused clocks Jan 30 13:22:39.898853 kernel: Freeing unused kernel memory: 39936K Jan 30 13:22:39.898860 kernel: Run /init as init process Jan 30 13:22:39.898869 kernel: with arguments: Jan 30 13:22:39.898877 kernel: /init Jan 30 13:22:39.898884 kernel: with environment: Jan 30 13:22:39.898891 kernel: HOME=/ Jan 30 13:22:39.898898 kernel: TERM=linux Jan 30 13:22:39.898907 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jan 30 13:22:39.898917 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 30 13:22:39.898926 systemd[1]: Detected virtualization kvm. Jan 30 13:22:39.898934 systemd[1]: Detected architecture arm64. Jan 30 13:22:39.898942 systemd[1]: Running in initrd. 
Jan 30 13:22:39.898950 systemd[1]: No hostname configured, using default hostname. Jan 30 13:22:39.898957 systemd[1]: Hostname set to . Jan 30 13:22:39.898968 systemd[1]: Initializing machine ID from VM UUID. Jan 30 13:22:39.898975 systemd[1]: Queued start job for default target initrd.target. Jan 30 13:22:39.898986 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 30 13:22:39.898995 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 30 13:22:39.899006 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 30 13:22:39.899015 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 30 13:22:39.899024 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 30 13:22:39.899034 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 30 13:22:39.899047 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jan 30 13:22:39.899057 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jan 30 13:22:39.899066 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 30 13:22:39.899075 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 30 13:22:39.899083 systemd[1]: Reached target paths.target - Path Units. Jan 30 13:22:39.899091 systemd[1]: Reached target slices.target - Slice Units. Jan 30 13:22:39.899099 systemd[1]: Reached target swap.target - Swaps. Jan 30 13:22:39.899108 systemd[1]: Reached target timers.target - Timer Units. Jan 30 13:22:39.899116 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. 
Jan 30 13:22:39.899125 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 30 13:22:39.899133 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 30 13:22:39.899141 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jan 30 13:22:39.899149 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 30 13:22:39.899157 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 30 13:22:39.899165 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 30 13:22:39.899173 systemd[1]: Reached target sockets.target - Socket Units. Jan 30 13:22:39.899182 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 30 13:22:39.899190 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 30 13:22:39.899198 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 30 13:22:39.899206 systemd[1]: Starting systemd-fsck-usr.service... Jan 30 13:22:39.899214 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 30 13:22:39.899222 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 30 13:22:39.899233 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 30 13:22:39.899242 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 30 13:22:39.899274 systemd-journald[237]: Collecting audit messages is disabled. Jan 30 13:22:39.899295 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 30 13:22:39.899304 systemd[1]: Finished systemd-fsck-usr.service. Jan 30 13:22:39.899315 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 30 13:22:39.899323 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. 
Jan 30 13:22:39.899331 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 30 13:22:39.899340 systemd-journald[237]: Journal started Jan 30 13:22:39.899370 systemd-journald[237]: Runtime Journal (/run/log/journal/f2d4a9f946d04bc4af9d40aa80caa792) is 8.0M, max 76.6M, 68.6M free. Jan 30 13:22:39.885457 systemd-modules-load[238]: Inserted module 'overlay' Jan 30 13:22:39.902026 systemd[1]: Started systemd-journald.service - Journal Service. Jan 30 13:22:39.904429 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 30 13:22:39.906020 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 13:22:39.908858 kernel: Bridge firewalling registered Jan 30 13:22:39.908307 systemd-modules-load[238]: Inserted module 'br_netfilter' Jan 30 13:22:39.910014 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 30 13:22:39.919637 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 30 13:22:39.922469 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 30 13:22:39.926045 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 30 13:22:39.927779 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 30 13:22:39.943778 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 30 13:22:39.944655 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 30 13:22:39.952660 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 30 13:22:39.953712 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 30 13:22:39.958616 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... 
Jan 30 13:22:39.975337 dracut-cmdline[274]: dracut-dracut-053 Jan 30 13:22:39.977851 dracut-cmdline[274]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=e6957044c3256d96283265c263579aa4275d1d707b02496fcb081f5fc6356346 Jan 30 13:22:39.988835 systemd-resolved[272]: Positive Trust Anchors: Jan 30 13:22:39.988851 systemd-resolved[272]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 30 13:22:39.988882 systemd-resolved[272]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 30 13:22:39.996935 systemd-resolved[272]: Defaulting to hostname 'linux'. Jan 30 13:22:39.997984 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 30 13:22:39.999278 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 30 13:22:40.067484 kernel: SCSI subsystem initialized Jan 30 13:22:40.072466 kernel: Loading iSCSI transport class v2.0-870. Jan 30 13:22:40.079465 kernel: iscsi: registered transport (tcp) Jan 30 13:22:40.093509 kernel: iscsi: registered transport (qla4xxx) Jan 30 13:22:40.093587 kernel: QLogic iSCSI HBA Driver Jan 30 13:22:40.142711 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. 
Jan 30 13:22:40.150721 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 30 13:22:40.168264 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jan 30 13:22:40.168328 kernel: device-mapper: uevent: version 1.0.3 Jan 30 13:22:40.168340 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jan 30 13:22:40.217470 kernel: raid6: neonx8 gen() 15332 MB/s Jan 30 13:22:40.234474 kernel: raid6: neonx4 gen() 15115 MB/s Jan 30 13:22:40.251494 kernel: raid6: neonx2 gen() 12895 MB/s Jan 30 13:22:40.268487 kernel: raid6: neonx1 gen() 10217 MB/s Jan 30 13:22:40.285546 kernel: raid6: int64x8 gen() 6357 MB/s Jan 30 13:22:40.302551 kernel: raid6: int64x4 gen() 7196 MB/s Jan 30 13:22:40.319493 kernel: raid6: int64x2 gen() 6024 MB/s Jan 30 13:22:40.336493 kernel: raid6: int64x1 gen() 4980 MB/s Jan 30 13:22:40.336597 kernel: raid6: using algorithm neonx8 gen() 15332 MB/s Jan 30 13:22:40.353510 kernel: raid6: .... xor() 11734 MB/s, rmw enabled Jan 30 13:22:40.353600 kernel: raid6: using neon recovery algorithm Jan 30 13:22:40.358639 kernel: xor: measuring software checksum speed Jan 30 13:22:40.358762 kernel: 8regs : 21658 MB/sec Jan 30 13:22:40.358775 kernel: 32regs : 18407 MB/sec Jan 30 13:22:40.359602 kernel: arm64_neon : 25536 MB/sec Jan 30 13:22:40.359641 kernel: xor: using function: arm64_neon (25536 MB/sec) Jan 30 13:22:40.412459 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 30 13:22:40.428452 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 30 13:22:40.436683 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 30 13:22:40.452450 systemd-udevd[456]: Using default interface naming scheme 'v255'. Jan 30 13:22:40.456105 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. 
Jan 30 13:22:40.467825 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 30 13:22:40.485834 dracut-pre-trigger[464]: rd.md=0: removing MD RAID activation Jan 30 13:22:40.525612 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 30 13:22:40.530630 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 30 13:22:40.582005 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 30 13:22:40.589263 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 30 13:22:40.607945 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 30 13:22:40.610651 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 30 13:22:40.613028 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 30 13:22:40.615023 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 30 13:22:40.621595 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 30 13:22:40.639467 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 30 13:22:40.670478 kernel: scsi host0: Virtio SCSI HBA Jan 30 13:22:40.677442 kernel: scsi 0:0:0:0: CD-ROM QEMU QEMU CD-ROM 2.5+ PQ: 0 ANSI: 5 Jan 30 13:22:40.677522 kernel: scsi 0:0:0:1: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5 Jan 30 13:22:40.704553 kernel: ACPI: bus type USB registered Jan 30 13:22:40.704624 kernel: usbcore: registered new interface driver usbfs Jan 30 13:22:40.704637 kernel: usbcore: registered new interface driver hub Jan 30 13:22:40.704647 kernel: usbcore: registered new device driver usb Jan 30 13:22:40.708003 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 30 13:22:40.709405 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Jan 30 13:22:40.711327 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 30 13:22:40.712963 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 30 13:22:40.713093 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 13:22:40.714982 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 30 13:22:40.720653 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 30 13:22:40.729587 kernel: sr 0:0:0:0: Power-on or device reset occurred Jan 30 13:22:40.735302 kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 16x/50x cd/rw xa/form2 cdda tray Jan 30 13:22:40.735443 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jan 30 13:22:40.735454 kernel: sr 0:0:0:0: Attached scsi CD-ROM sr0 Jan 30 13:22:40.742655 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 13:22:40.754447 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller Jan 30 13:22:40.771944 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 1 Jan 30 13:22:40.772383 kernel: xhci_hcd 0000:02:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010 Jan 30 13:22:40.772966 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller Jan 30 13:22:40.773077 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 2 Jan 30 13:22:40.773453 kernel: xhci_hcd 0000:02:00.0: Host supports USB 3.0 SuperSpeed Jan 30 13:22:40.775732 kernel: hub 1-0:1.0: USB hub found Jan 30 13:22:40.775845 kernel: hub 1-0:1.0: 4 ports detected Jan 30 13:22:40.775923 kernel: sd 0:0:0:1: Power-on or device reset occurred Jan 30 13:22:40.778572 kernel: sd 0:0:0:1: [sda] 80003072 512-byte logical blocks: (41.0 GB/38.1 GiB) Jan 30 13:22:40.778701 kernel: sd 0:0:0:1: [sda] Write Protect is off Jan 30 13:22:40.778782 kernel: sd 0:0:0:1: [sda] Mode Sense: 63 00 00 08 Jan 30 13:22:40.778867 kernel: usb usb2: We don't know the 
algorithms for LPM for this host, disabling LPM. Jan 30 13:22:40.778970 kernel: sd 0:0:0:1: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Jan 30 13:22:40.779056 kernel: hub 2-0:1.0: USB hub found Jan 30 13:22:40.779172 kernel: hub 2-0:1.0: 4 ports detected Jan 30 13:22:40.779259 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jan 30 13:22:40.779269 kernel: GPT:17805311 != 80003071 Jan 30 13:22:40.779278 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 30 13:22:40.779287 kernel: GPT:17805311 != 80003071 Jan 30 13:22:40.779295 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 30 13:22:40.779304 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 30 13:22:40.779313 kernel: sd 0:0:0:1: [sda] Attached SCSI disk Jan 30 13:22:40.755704 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 30 13:22:40.777156 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 30 13:22:40.838530 kernel: BTRFS: device fsid 1e2e5fa7-c757-4d5d-af66-73afe98fbaae devid 1 transid 39 /dev/sda3 scanned by (udev-worker) (500) Jan 30 13:22:40.840775 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT. Jan 30 13:22:40.841692 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by (udev-worker) (497) Jan 30 13:22:40.850391 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM. Jan 30 13:22:40.858932 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Jan 30 13:22:40.865090 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A. Jan 30 13:22:40.866110 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A. Jan 30 13:22:40.873685 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... 
Jan 30 13:22:40.881637 disk-uuid[571]: Primary Header is updated. Jan 30 13:22:40.881637 disk-uuid[571]: Secondary Entries is updated. Jan 30 13:22:40.881637 disk-uuid[571]: Secondary Header is updated. Jan 30 13:22:40.889516 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 30 13:22:41.006485 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd Jan 30 13:22:41.248548 kernel: usb 1-2: new high-speed USB device number 3 using xhci_hcd Jan 30 13:22:41.383859 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input1 Jan 30 13:22:41.383932 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:02:00.0-1/input0 Jan 30 13:22:41.384132 kernel: input: QEMU QEMU USB Keyboard as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-2/1-2:1.0/0003:0627:0001.0002/input/input2 Jan 30 13:22:41.439683 kernel: hid-generic 0003:0627:0001.0002: input,hidraw1: USB HID v1.11 Keyboard [QEMU QEMU USB Keyboard] on usb-0000:02:00.0-2/input0 Jan 30 13:22:41.440135 kernel: usbcore: registered new interface driver usbhid Jan 30 13:22:41.440164 kernel: usbhid: USB HID core driver Jan 30 13:22:41.899462 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 30 13:22:41.900861 disk-uuid[572]: The operation has completed successfully. Jan 30 13:22:41.952881 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 30 13:22:41.952986 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 30 13:22:41.971719 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 30 13:22:41.976681 sh[588]: Success Jan 30 13:22:41.989460 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Jan 30 13:22:42.039918 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 30 13:22:42.049621 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... 
Jan 30 13:22:42.055555 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jan 30 13:22:42.070933 kernel: BTRFS info (device dm-0): first mount of filesystem 1e2e5fa7-c757-4d5d-af66-73afe98fbaae Jan 30 13:22:42.071006 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Jan 30 13:22:42.072268 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jan 30 13:22:42.073156 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 30 13:22:42.073930 kernel: BTRFS info (device dm-0): using free space tree Jan 30 13:22:42.081440 kernel: BTRFS info (device dm-0): enabling ssd optimizations Jan 30 13:22:42.084065 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 30 13:22:42.084757 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 30 13:22:42.094761 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 30 13:22:42.099729 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 30 13:22:42.113082 kernel: BTRFS info (device sda6): first mount of filesystem 5265f28b-8d78-4be2-8b05-2145d9ab7cfa Jan 30 13:22:42.113165 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Jan 30 13:22:42.113199 kernel: BTRFS info (device sda6): using free space tree Jan 30 13:22:42.118446 kernel: BTRFS info (device sda6): enabling ssd optimizations Jan 30 13:22:42.118520 kernel: BTRFS info (device sda6): auto enabling async discard Jan 30 13:22:42.128225 systemd[1]: mnt-oem.mount: Deactivated successfully. Jan 30 13:22:42.129213 kernel: BTRFS info (device sda6): last unmount of filesystem 5265f28b-8d78-4be2-8b05-2145d9ab7cfa Jan 30 13:22:42.134745 systemd[1]: Finished ignition-setup.service - Ignition (setup). 
Jan 30 13:22:42.143669 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 30 13:22:42.232464 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 30 13:22:42.235025 ignition[674]: Ignition 2.20.0 Jan 30 13:22:42.235035 ignition[674]: Stage: fetch-offline Jan 30 13:22:42.239734 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 30 13:22:42.235070 ignition[674]: no configs at "/usr/lib/ignition/base.d" Jan 30 13:22:42.240917 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 30 13:22:42.235079 ignition[674]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jan 30 13:22:42.235227 ignition[674]: parsed url from cmdline: "" Jan 30 13:22:42.235230 ignition[674]: no config URL provided Jan 30 13:22:42.235234 ignition[674]: reading system config file "/usr/lib/ignition/user.ign" Jan 30 13:22:42.235241 ignition[674]: no config at "/usr/lib/ignition/user.ign" Jan 30 13:22:42.235246 ignition[674]: failed to fetch config: resource requires networking Jan 30 13:22:42.235452 ignition[674]: Ignition finished successfully Jan 30 13:22:42.265888 systemd-networkd[775]: lo: Link UP Jan 30 13:22:42.265902 systemd-networkd[775]: lo: Gained carrier Jan 30 13:22:42.268510 systemd-networkd[775]: Enumeration completed Jan 30 13:22:42.269304 systemd-networkd[775]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 30 13:22:42.269308 systemd-networkd[775]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 30 13:22:42.270590 systemd-networkd[775]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 30 13:22:42.270593 systemd-networkd[775]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. 
Jan 30 13:22:42.271381 systemd-networkd[775]: eth0: Link UP
Jan 30 13:22:42.271385 systemd-networkd[775]: eth0: Gained carrier
Jan 30 13:22:42.271392 systemd-networkd[775]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 30 13:22:42.272600 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 30 13:22:42.273627 systemd[1]: Reached target network.target - Network.
Jan 30 13:22:42.276880 systemd-networkd[775]: eth1: Link UP
Jan 30 13:22:42.276884 systemd-networkd[775]: eth1: Gained carrier
Jan 30 13:22:42.276896 systemd-networkd[775]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 30 13:22:42.279666 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jan 30 13:22:42.295022 ignition[778]: Ignition 2.20.0
Jan 30 13:22:42.295039 ignition[778]: Stage: fetch
Jan 30 13:22:42.295205 ignition[778]: no configs at "/usr/lib/ignition/base.d"
Jan 30 13:22:42.295215 ignition[778]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Jan 30 13:22:42.295302 ignition[778]: parsed url from cmdline: ""
Jan 30 13:22:42.295305 ignition[778]: no config URL provided
Jan 30 13:22:42.295310 ignition[778]: reading system config file "/usr/lib/ignition/user.ign"
Jan 30 13:22:42.295317 ignition[778]: no config at "/usr/lib/ignition/user.ign"
Jan 30 13:22:42.295398 ignition[778]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #1
Jan 30 13:22:42.296389 ignition[778]: GET error: Get "http://169.254.169.254/hetzner/v1/userdata": dial tcp 169.254.169.254:80: connect: network is unreachable
Jan 30 13:22:42.314555 systemd-networkd[775]: eth1: DHCPv4 address 10.0.0.3/32, gateway 10.0.0.1 acquired from 10.0.0.1
Jan 30 13:22:42.332585 systemd-networkd[775]: eth0: DHCPv4 address 5.75.240.180/32, gateway 172.31.1.1 acquired from 172.31.1.1
Jan 30 13:22:42.496648 ignition[778]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #2
Jan 30 13:22:42.503578 ignition[778]: GET result: OK
Jan 30 13:22:42.503750 ignition[778]: parsing config with SHA512: aa780800666c004ab90cd26e1c48d32ea1812f2c0b8e43e6b84596b4167851cc70febb255d766248e5f8c9a7628d74b689ef868bb93a318a6f7c771b1a618565
Jan 30 13:22:42.510733 unknown[778]: fetched base config from "system"
Jan 30 13:22:42.510744 unknown[778]: fetched base config from "system"
Jan 30 13:22:42.511138 ignition[778]: fetch: fetch complete
Jan 30 13:22:42.510749 unknown[778]: fetched user config from "hetzner"
Jan 30 13:22:42.511143 ignition[778]: fetch: fetch passed
Jan 30 13:22:42.513726 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jan 30 13:22:42.511190 ignition[778]: Ignition finished successfully
Jan 30 13:22:42.520683 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 30 13:22:42.535981 ignition[786]: Ignition 2.20.0
Jan 30 13:22:42.535992 ignition[786]: Stage: kargs
Jan 30 13:22:42.536174 ignition[786]: no configs at "/usr/lib/ignition/base.d"
Jan 30 13:22:42.536184 ignition[786]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Jan 30 13:22:42.537177 ignition[786]: kargs: kargs passed
Jan 30 13:22:42.539672 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 30 13:22:42.537228 ignition[786]: Ignition finished successfully
Jan 30 13:22:42.545826 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 30 13:22:42.562877 ignition[793]: Ignition 2.20.0
Jan 30 13:22:42.562889 ignition[793]: Stage: disks
Jan 30 13:22:42.563082 ignition[793]: no configs at "/usr/lib/ignition/base.d"
Jan 30 13:22:42.563093 ignition[793]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Jan 30 13:22:42.564122 ignition[793]: disks: disks passed
Jan 30 13:22:42.564175 ignition[793]: Ignition finished successfully
Jan 30 13:22:42.565732 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 30 13:22:42.566868 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 30 13:22:42.568652 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 30 13:22:42.569357 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 30 13:22:42.570012 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 30 13:22:42.571089 systemd[1]: Reached target basic.target - Basic System.
Jan 30 13:22:42.581768 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 30 13:22:42.602000 systemd-fsck[802]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks
Jan 30 13:22:42.606688 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 30 13:22:42.614526 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 30 13:22:42.655489 kernel: EXT4-fs (sda9): mounted filesystem 88903c49-366d-43ff-90b1-141790b6e85c r/w with ordered data mode. Quota mode: none.
Jan 30 13:22:42.657014 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 30 13:22:42.659028 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 30 13:22:42.666615 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 30 13:22:42.670161 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 30 13:22:42.673802 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Jan 30 13:22:42.676571 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 30 13:22:42.678020 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 30 13:22:42.681766 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 30 13:22:42.686464 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (810)
Jan 30 13:22:42.690554 kernel: BTRFS info (device sda6): first mount of filesystem 5265f28b-8d78-4be2-8b05-2145d9ab7cfa
Jan 30 13:22:42.690620 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Jan 30 13:22:42.690639 kernel: BTRFS info (device sda6): using free space tree
Jan 30 13:22:42.692196 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 30 13:22:42.702857 kernel: BTRFS info (device sda6): enabling ssd optimizations
Jan 30 13:22:42.702924 kernel: BTRFS info (device sda6): auto enabling async discard
Jan 30 13:22:42.708063 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 30 13:22:42.742178 initrd-setup-root[838]: cut: /sysroot/etc/passwd: No such file or directory
Jan 30 13:22:42.749691 initrd-setup-root[845]: cut: /sysroot/etc/group: No such file or directory
Jan 30 13:22:42.753412 coreos-metadata[812]: Jan 30 13:22:42.753 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/hostname: Attempt #1
Jan 30 13:22:42.755955 coreos-metadata[812]: Jan 30 13:22:42.755 INFO Fetch successful
Jan 30 13:22:42.755955 coreos-metadata[812]: Jan 30 13:22:42.755 INFO wrote hostname ci-4186-1-0-7-1c3f91851a to /sysroot/etc/hostname
Jan 30 13:22:42.759314 initrd-setup-root[852]: cut: /sysroot/etc/shadow: No such file or directory
Jan 30 13:22:42.761171 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Jan 30 13:22:42.765404 initrd-setup-root[860]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 30 13:22:42.874036 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 30 13:22:42.878781 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 30 13:22:42.881918 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 30 13:22:42.891441 kernel: BTRFS info (device sda6): last unmount of filesystem 5265f28b-8d78-4be2-8b05-2145d9ab7cfa
Jan 30 13:22:42.913398 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 30 13:22:42.914897 ignition[928]: INFO : Ignition 2.20.0
Jan 30 13:22:42.914897 ignition[928]: INFO : Stage: mount
Jan 30 13:22:42.914897 ignition[928]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 30 13:22:42.914897 ignition[928]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Jan 30 13:22:42.917988 ignition[928]: INFO : mount: mount passed
Jan 30 13:22:42.917988 ignition[928]: INFO : Ignition finished successfully
Jan 30 13:22:42.917373 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 30 13:22:42.923584 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 30 13:22:43.069242 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 30 13:22:43.076641 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 30 13:22:43.087011 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sda6 scanned by mount (940)
Jan 30 13:22:43.087080 kernel: BTRFS info (device sda6): first mount of filesystem 5265f28b-8d78-4be2-8b05-2145d9ab7cfa
Jan 30 13:22:43.087157 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Jan 30 13:22:43.087810 kernel: BTRFS info (device sda6): using free space tree
Jan 30 13:22:43.090439 kernel: BTRFS info (device sda6): enabling ssd optimizations
Jan 30 13:22:43.090569 kernel: BTRFS info (device sda6): auto enabling async discard
Jan 30 13:22:43.093637 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 30 13:22:43.114054 ignition[957]: INFO : Ignition 2.20.0
Jan 30 13:22:43.114789 ignition[957]: INFO : Stage: files
Jan 30 13:22:43.115144 ignition[957]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 30 13:22:43.115144 ignition[957]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Jan 30 13:22:43.116764 ignition[957]: DEBUG : files: compiled without relabeling support, skipping
Jan 30 13:22:43.117864 ignition[957]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jan 30 13:22:43.117864 ignition[957]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jan 30 13:22:43.122379 ignition[957]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 30 13:22:43.123335 ignition[957]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jan 30 13:22:43.123335 ignition[957]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 30 13:22:43.122945 unknown[957]: wrote ssh authorized keys file for user: core
Jan 30 13:22:43.127101 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Jan 30 13:22:43.127101 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Jan 30 13:22:43.317825 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jan 30 13:22:43.457582 systemd-networkd[775]: eth1: Gained IPv6LL
Jan 30 13:22:43.905812 systemd-networkd[775]: eth0: Gained IPv6LL
Jan 30 13:22:44.061595 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Jan 30 13:22:44.061595 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jan 30 13:22:44.063961 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1
Jan 30 13:22:44.625732 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Jan 30 13:22:44.709538 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jan 30 13:22:44.710781 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Jan 30 13:22:44.710781 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Jan 30 13:22:44.710781 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Jan 30 13:22:44.710781 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jan 30 13:22:44.710781 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 30 13:22:44.710781 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 30 13:22:44.710781 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 30 13:22:44.710781 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 30 13:22:44.710781 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jan 30 13:22:44.710781 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jan 30 13:22:44.710781 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
Jan 30 13:22:44.720946 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
Jan 30 13:22:44.720946 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
Jan 30 13:22:44.720946 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-arm64.raw: attempt #1
Jan 30 13:22:45.270670 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Jan 30 13:22:45.615296 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
Jan 30 13:22:45.615296 ignition[957]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Jan 30 13:22:45.617625 ignition[957]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 30 13:22:45.617625 ignition[957]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 30 13:22:45.617625 ignition[957]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Jan 30 13:22:45.617625 ignition[957]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
Jan 30 13:22:45.617625 ignition[957]: INFO : files: op(e): op(f): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Jan 30 13:22:45.617625 ignition[957]: INFO : files: op(e): op(f): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Jan 30 13:22:45.617625 ignition[957]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
Jan 30 13:22:45.617625 ignition[957]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service"
Jan 30 13:22:45.617625 ignition[957]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service"
Jan 30 13:22:45.630596 ignition[957]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json"
Jan 30 13:22:45.630596 ignition[957]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jan 30 13:22:45.630596 ignition[957]: INFO : files: files passed
Jan 30 13:22:45.630596 ignition[957]: INFO : Ignition finished successfully
Jan 30 13:22:45.620273 systemd[1]: Finished ignition-files.service - Ignition (files).
Jan 30 13:22:45.626864 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jan 30 13:22:45.631787 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jan 30 13:22:45.634530 systemd[1]: ignition-quench.service: Deactivated successfully.
Jan 30 13:22:45.634662 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jan 30 13:22:45.644902 initrd-setup-root-after-ignition[986]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 30 13:22:45.644902 initrd-setup-root-after-ignition[986]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jan 30 13:22:45.647736 initrd-setup-root-after-ignition[990]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 30 13:22:45.649406 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 30 13:22:45.650179 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jan 30 13:22:45.655667 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jan 30 13:22:45.694107 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 30 13:22:45.694251 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jan 30 13:22:45.696323 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jan 30 13:22:45.697092 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jan 30 13:22:45.698158 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jan 30 13:22:45.703740 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jan 30 13:22:45.720292 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 30 13:22:45.728744 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jan 30 13:22:45.742808 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jan 30 13:22:45.744303 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 30 13:22:45.745944 systemd[1]: Stopped target timers.target - Timer Units.
Jan 30 13:22:45.746844 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 30 13:22:45.747066 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 30 13:22:45.749658 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jan 30 13:22:45.751398 systemd[1]: Stopped target basic.target - Basic System.
Jan 30 13:22:45.752288 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jan 30 13:22:45.753476 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 30 13:22:45.754633 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jan 30 13:22:45.755695 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jan 30 13:22:45.756627 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 30 13:22:45.757808 systemd[1]: Stopped target sysinit.target - System Initialization.
Jan 30 13:22:45.758845 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jan 30 13:22:45.759860 systemd[1]: Stopped target swap.target - Swaps.
Jan 30 13:22:45.760647 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jan 30 13:22:45.760823 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jan 30 13:22:45.761917 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jan 30 13:22:45.763053 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 30 13:22:45.764139 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jan 30 13:22:45.768525 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 30 13:22:45.769473 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jan 30 13:22:45.769616 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jan 30 13:22:45.772182 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jan 30 13:22:45.772388 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 30 13:22:45.774266 systemd[1]: ignition-files.service: Deactivated successfully.
Jan 30 13:22:45.774467 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jan 30 13:22:45.775927 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Jan 30 13:22:45.776025 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Jan 30 13:22:45.790861 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jan 30 13:22:45.792060 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jan 30 13:22:45.792231 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 30 13:22:45.794335 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jan 30 13:22:45.797196 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jan 30 13:22:45.797375 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 30 13:22:45.801346 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jan 30 13:22:45.801691 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 30 13:22:45.808884 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jan 30 13:22:45.808976 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jan 30 13:22:45.814217 ignition[1010]: INFO : Ignition 2.20.0
Jan 30 13:22:45.814217 ignition[1010]: INFO : Stage: umount
Jan 30 13:22:45.815219 ignition[1010]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 30 13:22:45.815219 ignition[1010]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Jan 30 13:22:45.817195 ignition[1010]: INFO : umount: umount passed
Jan 30 13:22:45.817195 ignition[1010]: INFO : Ignition finished successfully
Jan 30 13:22:45.820888 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jan 30 13:22:45.823805 systemd[1]: ignition-mount.service: Deactivated successfully.
Jan 30 13:22:45.823953 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jan 30 13:22:45.825028 systemd[1]: ignition-disks.service: Deactivated successfully.
Jan 30 13:22:45.825082 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jan 30 13:22:45.826943 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jan 30 13:22:45.827001 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jan 30 13:22:45.827830 systemd[1]: ignition-fetch.service: Deactivated successfully.
Jan 30 13:22:45.827887 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Jan 30 13:22:45.828545 systemd[1]: Stopped target network.target - Network.
Jan 30 13:22:45.829075 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jan 30 13:22:45.829120 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 30 13:22:45.833923 systemd[1]: Stopped target paths.target - Path Units.
Jan 30 13:22:45.834438 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jan 30 13:22:45.834491 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 30 13:22:45.835134 systemd[1]: Stopped target slices.target - Slice Units.
Jan 30 13:22:45.837808 systemd[1]: Stopped target sockets.target - Socket Units.
Jan 30 13:22:45.840655 systemd[1]: iscsid.socket: Deactivated successfully.
Jan 30 13:22:45.840729 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jan 30 13:22:45.845173 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jan 30 13:22:45.845217 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 30 13:22:45.845901 systemd[1]: ignition-setup.service: Deactivated successfully.
Jan 30 13:22:45.845954 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jan 30 13:22:45.846589 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jan 30 13:22:45.846631 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jan 30 13:22:45.847380 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jan 30 13:22:45.850307 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jan 30 13:22:45.852471 systemd-networkd[775]: eth0: DHCPv6 lease lost
Jan 30 13:22:45.859511 systemd-networkd[775]: eth1: DHCPv6 lease lost
Jan 30 13:22:45.864402 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jan 30 13:22:45.864598 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jan 30 13:22:45.866946 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jan 30 13:22:45.867033 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jan 30 13:22:45.871071 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jan 30 13:22:45.871928 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jan 30 13:22:45.876480 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jan 30 13:22:45.876535 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jan 30 13:22:45.877360 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jan 30 13:22:45.877543 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jan 30 13:22:45.892110 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jan 30 13:22:45.892975 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jan 30 13:22:45.893057 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 30 13:22:45.894762 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 30 13:22:45.894815 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jan 30 13:22:45.896029 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 30 13:22:45.896071 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jan 30 13:22:45.897035 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jan 30 13:22:45.897072 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 30 13:22:45.898456 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 30 13:22:45.910471 systemd[1]: network-cleanup.service: Deactivated successfully.
Jan 30 13:22:45.911338 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jan 30 13:22:45.915173 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jan 30 13:22:45.915341 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 30 13:22:45.917371 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jan 30 13:22:45.917440 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jan 30 13:22:45.920123 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jan 30 13:22:45.920180 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 30 13:22:45.921467 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jan 30 13:22:45.921546 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jan 30 13:22:45.923347 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jan 30 13:22:45.923486 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jan 30 13:22:45.925097 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 30 13:22:45.925151 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 30 13:22:45.931674 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jan 30 13:22:45.932668 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jan 30 13:22:45.932760 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 30 13:22:45.936061 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 30 13:22:45.936141 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 30 13:22:45.937873 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jan 30 13:22:45.938000 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jan 30 13:22:45.942506 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jan 30 13:22:45.951667 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jan 30 13:22:45.960994 systemd[1]: Switching root.
Jan 30 13:22:45.996061 systemd-journald[237]: Journal stopped
Jan 30 13:22:46.830948 systemd-journald[237]: Received SIGTERM from PID 1 (systemd).
Jan 30 13:22:46.831029 kernel: SELinux: policy capability network_peer_controls=1
Jan 30 13:22:46.831042 kernel: SELinux: policy capability open_perms=1
Jan 30 13:22:46.831055 kernel: SELinux: policy capability extended_socket_class=1
Jan 30 13:22:46.831064 kernel: SELinux: policy capability always_check_network=0
Jan 30 13:22:46.831073 kernel: SELinux: policy capability cgroup_seclabel=1
Jan 30 13:22:46.831082 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jan 30 13:22:46.831091 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jan 30 13:22:46.831100 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jan 30 13:22:46.831113 kernel: audit: type=1403 audit(1738243366.112:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jan 30 13:22:46.831124 systemd[1]: Successfully loaded SELinux policy in 40.214ms.
Jan 30 13:22:46.831146 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 10.556ms.
Jan 30 13:22:46.831157 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 30 13:22:46.831167 systemd[1]: Detected virtualization kvm.
Jan 30 13:22:46.831177 systemd[1]: Detected architecture arm64.
Jan 30 13:22:46.831187 systemd[1]: Detected first boot.
Jan 30 13:22:46.831197 systemd[1]: Hostname set to .
Jan 30 13:22:46.831219 systemd[1]: Initializing machine ID from VM UUID.
Jan 30 13:22:46.831230 zram_generator::config[1053]: No configuration found.
Jan 30 13:22:46.831243 systemd[1]: Populated /etc with preset unit settings.
Jan 30 13:22:46.831253 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jan 30 13:22:46.831263 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jan 30 13:22:46.831273 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jan 30 13:22:46.831284 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jan 30 13:22:46.831294 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jan 30 13:22:46.831304 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jan 30 13:22:46.831318 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jan 30 13:22:46.831329 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jan 30 13:22:46.831342 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jan 30 13:22:46.831352 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jan 30 13:22:46.831362 systemd[1]: Created slice user.slice - User and Session Slice.
Jan 30 13:22:46.831372 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 30 13:22:46.831382 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 30 13:22:46.831392 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jan 30 13:22:46.831402 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jan 30 13:22:46.831465 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jan 30 13:22:46.831483 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 30 13:22:46.831494 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Jan 30 13:22:46.831503 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 30 13:22:46.831513 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jan 30 13:22:46.831524 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jan 30 13:22:46.831534 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jan 30 13:22:46.831544 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jan 30 13:22:46.831556 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 30 13:22:46.831566 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 30 13:22:46.831576 systemd[1]: Reached target slices.target - Slice Units.
Jan 30 13:22:46.831586 systemd[1]: Reached target swap.target - Swaps.
Jan 30 13:22:46.831596 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jan 30 13:22:46.831606 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jan 30 13:22:46.831616 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 30 13:22:46.831626 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 30 13:22:46.831636 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 30 13:22:46.831648 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jan 30 13:22:46.831657 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jan 30 13:22:46.831667 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jan 30 13:22:46.831677 systemd[1]: Mounting media.mount - External Media Directory...
Jan 30 13:22:46.831687 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jan 30 13:22:46.831700 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jan 30 13:22:46.831714 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jan 30 13:22:46.831724 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jan 30 13:22:46.831734 systemd[1]: Reached target machines.target - Containers.
Jan 30 13:22:46.831744 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jan 30 13:22:46.831754 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 30 13:22:46.831765 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 30 13:22:46.831775 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jan 30 13:22:46.831785 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 30 13:22:46.831797 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 30 13:22:46.831807 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 30 13:22:46.831817 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jan 30 13:22:46.831828 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 30 13:22:46.831839 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jan 30 13:22:46.831849 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jan 30 13:22:46.831859 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jan 30 13:22:46.831868 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jan 30 13:22:46.831880 systemd[1]: Stopped systemd-fsck-usr.service.
Jan 30 13:22:46.831890 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 30 13:22:46.831900 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 30 13:22:46.831911 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jan 30 13:22:46.831921 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jan 30 13:22:46.831931 kernel: fuse: init (API version 7.39)
Jan 30 13:22:46.831941 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 30 13:22:46.831951 systemd[1]: verity-setup.service: Deactivated successfully.
Jan 30 13:22:46.831960 kernel: loop: module loaded
Jan 30 13:22:46.831970 systemd[1]: Stopped verity-setup.service.
Jan 30 13:22:46.831981 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jan 30 13:22:46.831991 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jan 30 13:22:46.832001 systemd[1]: Mounted media.mount - External Media Directory.
Jan 30 13:22:46.832016 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jan 30 13:22:46.832028 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jan 30 13:22:46.832038 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jan 30 13:22:46.832048 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 30 13:22:46.832058 kernel: ACPI: bus type drm_connector registered
Jan 30 13:22:46.832067 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 30 13:22:46.832077 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jan 30 13:22:46.832089 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 30 13:22:46.832099 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 30 13:22:46.832141 systemd-journald[1117]: Collecting audit messages is disabled.
Jan 30 13:22:46.832172 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 30 13:22:46.832185 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 30 13:22:46.832196 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 30 13:22:46.832206 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 30 13:22:46.832219 systemd-journald[1117]: Journal started
Jan 30 13:22:46.832247 systemd-journald[1117]: Runtime Journal (/run/log/journal/f2d4a9f946d04bc4af9d40aa80caa792) is 8.0M, max 76.6M, 68.6M free.
Jan 30 13:22:46.571392 systemd[1]: Queued start job for default target multi-user.target.
Jan 30 13:22:46.834609 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 30 13:22:46.597149 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Jan 30 13:22:46.598018 systemd[1]: systemd-journald.service: Deactivated successfully.
Jan 30 13:22:46.834708 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jan 30 13:22:46.835401 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jan 30 13:22:46.836697 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 30 13:22:46.836834 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 30 13:22:46.838818 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 30 13:22:46.839755 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jan 30 13:22:46.840814 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jan 30 13:22:46.858718 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jan 30 13:22:46.865621 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jan 30 13:22:46.877596 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jan 30 13:22:46.878215 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jan 30 13:22:46.878255 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 30 13:22:46.881677 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Jan 30 13:22:46.885672 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jan 30 13:22:46.891604 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jan 30 13:22:46.892297 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 30 13:22:46.897809 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jan 30 13:22:46.908590 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jan 30 13:22:46.909609 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 30 13:22:46.911496 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jan 30 13:22:46.913492 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 30 13:22:46.917826 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 30 13:22:46.922001 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jan 30 13:22:46.926542 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jan 30 13:22:46.928585 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jan 30 13:22:46.929676 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 30 13:22:46.931265 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jan 30 13:22:46.933700 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jan 30 13:22:46.941144 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jan 30 13:22:46.948364 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jan 30 13:22:46.954924 kernel: loop0: detected capacity change from 0 to 189592
Jan 30 13:22:46.955882 systemd-journald[1117]: Time spent on flushing to /var/log/journal/f2d4a9f946d04bc4af9d40aa80caa792 is 86.906ms for 1135 entries.
Jan 30 13:22:46.955882 systemd-journald[1117]: System Journal (/var/log/journal/f2d4a9f946d04bc4af9d40aa80caa792) is 8.0M, max 584.8M, 576.8M free.
Jan 30 13:22:47.053967 systemd-journald[1117]: Received client request to flush runtime journal.
Jan 30 13:22:47.054021 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jan 30 13:22:47.054041 kernel: loop1: detected capacity change from 0 to 113552
Jan 30 13:22:46.964669 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Jan 30 13:22:46.973936 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jan 30 13:22:46.982678 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Jan 30 13:22:46.988485 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 30 13:22:47.032754 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jan 30 13:22:47.039532 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Jan 30 13:22:47.043473 udevadm[1177]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Jan 30 13:22:47.060799 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jan 30 13:22:47.073675 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jan 30 13:22:47.083292 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 30 13:22:47.088479 kernel: loop2: detected capacity change from 0 to 116784
Jan 30 13:22:47.130266 systemd-tmpfiles[1187]: ACLs are not supported, ignoring.
Jan 30 13:22:47.130282 systemd-tmpfiles[1187]: ACLs are not supported, ignoring.
Jan 30 13:22:47.136493 kernel: loop3: detected capacity change from 0 to 8
Jan 30 13:22:47.142854 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 30 13:22:47.158449 kernel: loop4: detected capacity change from 0 to 189592
Jan 30 13:22:47.183467 kernel: loop5: detected capacity change from 0 to 113552
Jan 30 13:22:47.211524 kernel: loop6: detected capacity change from 0 to 116784
Jan 30 13:22:47.238861 kernel: loop7: detected capacity change from 0 to 8
Jan 30 13:22:47.239713 (sd-merge)[1193]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-hetzner'.
Jan 30 13:22:47.241213 (sd-merge)[1193]: Merged extensions into '/usr'.
Jan 30 13:22:47.252730 systemd[1]: Reloading requested from client PID 1166 ('systemd-sysext') (unit systemd-sysext.service)...
Jan 30 13:22:47.252747 systemd[1]: Reloading...
Jan 30 13:22:47.378501 zram_generator::config[1220]: No configuration found.
Jan 30 13:22:47.418500 ldconfig[1161]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jan 30 13:22:47.507108 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 30 13:22:47.553641 systemd[1]: Reloading finished in 300 ms.
Jan 30 13:22:47.576698 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jan 30 13:22:47.580805 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jan 30 13:22:47.595525 systemd[1]: Starting ensure-sysext.service...
Jan 30 13:22:47.598256 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 30 13:22:47.606906 systemd[1]: Reloading requested from client PID 1257 ('systemctl') (unit ensure-sysext.service)...
Jan 30 13:22:47.606922 systemd[1]: Reloading...
Jan 30 13:22:47.639809 systemd-tmpfiles[1258]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jan 30 13:22:47.640025 systemd-tmpfiles[1258]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jan 30 13:22:47.640732 systemd-tmpfiles[1258]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jan 30 13:22:47.640928 systemd-tmpfiles[1258]: ACLs are not supported, ignoring.
Jan 30 13:22:47.640976 systemd-tmpfiles[1258]: ACLs are not supported, ignoring.
Jan 30 13:22:47.647115 systemd-tmpfiles[1258]: Detected autofs mount point /boot during canonicalization of boot.
Jan 30 13:22:47.647982 systemd-tmpfiles[1258]: Skipping /boot
Jan 30 13:22:47.662084 systemd-tmpfiles[1258]: Detected autofs mount point /boot during canonicalization of boot.
Jan 30 13:22:47.662226 systemd-tmpfiles[1258]: Skipping /boot
Jan 30 13:22:47.700451 zram_generator::config[1287]: No configuration found.
Jan 30 13:22:47.799208 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 30 13:22:47.845122 systemd[1]: Reloading finished in 237 ms.
Jan 30 13:22:47.863575 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jan 30 13:22:47.870391 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 30 13:22:47.884000 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jan 30 13:22:47.886638 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jan 30 13:22:47.892654 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jan 30 13:22:47.897754 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 30 13:22:47.901983 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 30 13:22:47.905623 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jan 30 13:22:47.918692 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jan 30 13:22:47.921189 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 30 13:22:47.930925 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 30 13:22:47.934751 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 30 13:22:47.937723 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 30 13:22:47.939634 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 30 13:22:47.944102 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 30 13:22:47.944887 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 30 13:22:47.949955 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 30 13:22:47.955724 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 30 13:22:47.956384 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 30 13:22:47.958203 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jan 30 13:22:47.971714 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jan 30 13:22:47.974991 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 30 13:22:47.977899 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 30 13:22:47.979126 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 30 13:22:47.982775 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jan 30 13:22:47.991151 systemd[1]: Finished ensure-sysext.service.
Jan 30 13:22:47.992933 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 30 13:22:47.993797 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 30 13:22:47.995820 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 30 13:22:47.996185 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 30 13:22:47.998890 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 30 13:22:47.999533 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 30 13:22:48.005807 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 30 13:22:48.005911 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 30 13:22:48.009645 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Jan 30 13:22:48.015833 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 30 13:22:48.016011 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 30 13:22:48.019353 systemd-udevd[1333]: Using default interface naming scheme 'v255'.
Jan 30 13:22:48.032904 augenrules[1364]: No rules
Jan 30 13:22:48.034474 systemd[1]: audit-rules.service: Deactivated successfully.
Jan 30 13:22:48.035123 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jan 30 13:22:48.036154 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jan 30 13:22:48.055342 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 30 13:22:48.065777 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 30 13:22:48.066707 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jan 30 13:22:48.074303 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jan 30 13:22:48.076780 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jan 30 13:22:48.201797 systemd-networkd[1378]: lo: Link UP
Jan 30 13:22:48.201808 systemd-networkd[1378]: lo: Gained carrier
Jan 30 13:22:48.203299 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Jan 30 13:22:48.204703 systemd[1]: Reached target time-set.target - System Time Set.
Jan 30 13:22:48.213597 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped.
Jan 30 13:22:48.255525 systemd-resolved[1327]: Positive Trust Anchors:
Jan 30 13:22:48.255848 systemd-resolved[1327]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 30 13:22:48.255882 systemd-resolved[1327]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 30 13:22:48.261999 systemd-networkd[1378]: Enumeration completed
Jan 30 13:22:48.262115 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 30 13:22:48.262301 systemd-resolved[1327]: Using system hostname 'ci-4186-1-0-7-1c3f91851a'.
Jan 30 13:22:48.264735 systemd-networkd[1378]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 30 13:22:48.264746 systemd-networkd[1378]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 30 13:22:48.265818 systemd-networkd[1378]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 30 13:22:48.265821 systemd-networkd[1378]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 30 13:22:48.266499 systemd-networkd[1378]: eth0: Link UP
Jan 30 13:22:48.266508 systemd-networkd[1378]: eth0: Gained carrier
Jan 30 13:22:48.266524 systemd-networkd[1378]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 30 13:22:48.272766 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jan 30 13:22:48.273488 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 30 13:22:48.274175 systemd[1]: Reached target network.target - Network.
Jan 30 13:22:48.275049 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 30 13:22:48.276787 systemd-networkd[1378]: eth1: Link UP
Jan 30 13:22:48.276796 systemd-networkd[1378]: eth1: Gained carrier
Jan 30 13:22:48.276822 systemd-networkd[1378]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 30 13:22:48.309366 systemd-networkd[1378]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 30 13:22:48.315548 systemd-networkd[1378]: eth1: DHCPv4 address 10.0.0.3/32, gateway 10.0.0.1 acquired from 10.0.0.1
Jan 30 13:22:48.316260 systemd-timesyncd[1360]: Network configuration changed, trying to establish connection.
Jan 30 13:22:48.323376 systemd-networkd[1378]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 30 13:22:48.339551 systemd-networkd[1378]: eth0: DHCPv4 address 5.75.240.180/32, gateway 172.31.1.1 acquired from 172.31.1.1
Jan 30 13:22:48.339895 systemd-timesyncd[1360]: Network configuration changed, trying to establish connection.
Jan 30 13:22:48.340093 systemd-timesyncd[1360]: Network configuration changed, trying to establish connection.
Jan 30 13:22:48.350508 kernel: mousedev: PS/2 mouse device common for all mice
Jan 30 13:22:48.352015 systemd[1]: Condition check resulted in dev-virtio\x2dports-org.qemu.guest_agent.0.device - /dev/virtio-ports/org.qemu.guest_agent.0 being skipped.
Jan 30 13:22:48.352155 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 30 13:22:48.360201 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 30 13:22:48.368632 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 30 13:22:48.377458 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 30 13:22:48.378730 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 30 13:22:48.378774 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jan 30 13:22:48.380824 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 30 13:22:48.381007 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 30 13:22:48.395819 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 30 13:22:48.397676 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 30 13:22:48.401863 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 30 13:22:48.405661 kernel: [drm] pci: virtio-gpu-pci detected at 0000:00:01.0
Jan 30 13:22:48.405722 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Jan 30 13:22:48.405761 kernel: [drm] features: -context_init
Jan 30 13:22:48.406300 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 30 13:22:48.406514 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 30 13:22:48.407667 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 30 13:22:48.408588 kernel: [drm] number of scanouts: 1
Jan 30 13:22:48.410604 kernel: [drm] number of cap sets: 0
Jan 30 13:22:48.415372 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:01.0 on minor 0
Jan 30 13:22:48.426786 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 30 13:22:48.433456 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 39 scanned by (udev-worker) (1385)
Jan 30 13:22:48.434995 kernel: Console: switching to colour frame buffer device 160x50
Jan 30 13:22:48.458446 kernel: virtio-pci 0000:00:01.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Jan 30 13:22:48.471215 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 30 13:22:48.471830 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 30 13:22:48.487713 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 30 13:22:48.491064 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
Jan 30 13:22:48.494689 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jan 30 13:22:48.511550 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jan 30 13:22:48.560246 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 30 13:22:48.580289 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Jan 30 13:22:48.599046 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Jan 30 13:22:48.611440 lvm[1446]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 30 13:22:48.640730 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Jan 30 13:22:48.642651 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 30 13:22:48.643594 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 30 13:22:48.644342 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jan 30 13:22:48.645319 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jan 30 13:22:48.646247 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jan 30 13:22:48.646977 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jan 30 13:22:48.647706 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jan 30 13:22:48.648335 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jan 30 13:22:48.648366 systemd[1]: Reached target paths.target - Path Units.
Jan 30 13:22:48.648930 systemd[1]: Reached target timers.target - Timer Units.
Jan 30 13:22:48.650961 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jan 30 13:22:48.653384 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jan 30 13:22:48.658725 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jan 30 13:22:48.661151 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Jan 30 13:22:48.662604 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jan 30 13:22:48.663373 systemd[1]: Reached target sockets.target - Socket Units.
Jan 30 13:22:48.664161 systemd[1]: Reached target basic.target - Basic System.
Jan 30 13:22:48.664739 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jan 30 13:22:48.664766 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jan 30 13:22:48.666578 systemd[1]: Starting containerd.service - containerd container runtime...
Jan 30 13:22:48.671595 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Jan 30 13:22:48.675508 lvm[1450]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 30 13:22:48.675678 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jan 30 13:22:48.681440 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jan 30 13:22:48.685775 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jan 30 13:22:48.688506 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jan 30 13:22:48.691626 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jan 30 13:22:48.695704 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Jan 30 13:22:48.697707 systemd[1]: Started qemu-guest-agent.service - QEMU Guest Agent.
Jan 30 13:22:48.699497 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jan 30 13:22:48.703607 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jan 30 13:22:48.709608 systemd[1]: Starting systemd-logind.service - User Login Management...
Jan 30 13:22:48.710895 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jan 30 13:22:48.711972 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jan 30 13:22:48.713324 systemd[1]: Starting update-engine.service - Update Engine...
Jan 30 13:22:48.719795 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Jan 30 13:22:48.724175 jq[1454]: false
Jan 30 13:22:48.735078 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jan 30 13:22:48.737053 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Jan 30 13:22:48.754115 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Jan 30 13:22:48.771642 dbus-daemon[1453]: [system] SELinux support is enabled
Jan 30 13:22:48.771813 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Jan 30 13:22:48.776864 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jan 30 13:22:48.776931 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Jan 30 13:22:48.778938 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Jan 30 13:22:48.778964 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Jan 30 13:22:48.782826 coreos-metadata[1452]: Jan 30 13:22:48.780 INFO Fetching http://169.254.169.254/hetzner/v1/metadata: Attempt #1
Jan 30 13:22:48.783099 jq[1465]: true
Jan 30 13:22:48.780982 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jan 30 13:22:48.782534 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Jan 30 13:22:48.789444 extend-filesystems[1455]: Found loop4 Jan 30 13:22:48.789444 extend-filesystems[1455]: Found loop5 Jan 30 13:22:48.789444 extend-filesystems[1455]: Found loop6 Jan 30 13:22:48.789444 extend-filesystems[1455]: Found loop7 Jan 30 13:22:48.789444 extend-filesystems[1455]: Found sda Jan 30 13:22:48.789444 extend-filesystems[1455]: Found sda1 Jan 30 13:22:48.789444 extend-filesystems[1455]: Found sda2 Jan 30 13:22:48.789444 extend-filesystems[1455]: Found sda3 Jan 30 13:22:48.789444 extend-filesystems[1455]: Found usr Jan 30 13:22:48.789444 extend-filesystems[1455]: Found sda4 Jan 30 13:22:48.789444 extend-filesystems[1455]: Found sda6 Jan 30 13:22:48.789444 extend-filesystems[1455]: Found sda7 Jan 30 13:22:48.789444 extend-filesystems[1455]: Found sda9 Jan 30 13:22:48.789444 extend-filesystems[1455]: Checking size of /dev/sda9 Jan 30 13:22:48.812716 (ntainerd)[1480]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 30 13:22:48.836894 coreos-metadata[1452]: Jan 30 13:22:48.790 INFO Fetch successful Jan 30 13:22:48.836894 coreos-metadata[1452]: Jan 30 13:22:48.790 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/private-networks: Attempt #1 Jan 30 13:22:48.836894 coreos-metadata[1452]: Jan 30 13:22:48.793 INFO Fetch successful Jan 30 13:22:48.836986 tar[1470]: linux-arm64/helm Jan 30 13:22:48.837247 jq[1487]: true Jan 30 13:22:48.852449 extend-filesystems[1455]: Resized partition /dev/sda9 Jan 30 13:22:48.853071 update_engine[1464]: I20250130 13:22:48.852608 1464 main.cc:92] Flatcar Update Engine starting Jan 30 13:22:48.857555 extend-filesystems[1498]: resize2fs 1.47.1 (20-May-2024) Jan 30 13:22:48.859863 systemd[1]: motdgen.service: Deactivated successfully. Jan 30 13:22:48.860086 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. 
Jan 30 13:22:48.875461 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 9393147 blocks Jan 30 13:22:48.875546 update_engine[1464]: I20250130 13:22:48.866868 1464 update_check_scheduler.cc:74] Next update check in 11m37s Jan 30 13:22:48.876820 systemd[1]: Started update-engine.service - Update Engine. Jan 30 13:22:48.880907 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 30 13:22:48.929487 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 39 scanned by (udev-worker) (1399) Jan 30 13:22:49.006686 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 30 13:22:49.008101 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 30 13:22:49.010930 systemd-logind[1463]: New seat seat0. Jan 30 13:22:49.017004 systemd-logind[1463]: Watching system buttons on /dev/input/event0 (Power Button) Jan 30 13:22:49.017074 systemd-logind[1463]: Watching system buttons on /dev/input/event2 (QEMU QEMU USB Keyboard) Jan 30 13:22:49.017600 systemd[1]: Started systemd-logind.service - User Login Management. Jan 30 13:22:49.062828 bash[1526]: Updated "/home/core/.ssh/authorized_keys" Jan 30 13:22:49.062012 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 30 13:22:49.083303 systemd[1]: Starting sshkeys.service... Jan 30 13:22:49.099948 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jan 30 13:22:49.112255 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... 
Jan 30 13:22:49.137626 containerd[1480]: time="2025-01-30T13:22:49.137199960Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Jan 30 13:22:49.145496 kernel: EXT4-fs (sda9): resized filesystem to 9393147 Jan 30 13:22:49.167535 coreos-metadata[1533]: Jan 30 13:22:49.167 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/public-keys: Attempt #1 Jan 30 13:22:49.169482 extend-filesystems[1498]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Jan 30 13:22:49.169482 extend-filesystems[1498]: old_desc_blocks = 1, new_desc_blocks = 5 Jan 30 13:22:49.169482 extend-filesystems[1498]: The filesystem on /dev/sda9 is now 9393147 (4k) blocks long. Jan 30 13:22:49.180552 coreos-metadata[1533]: Jan 30 13:22:49.168 INFO Fetch successful Jan 30 13:22:49.170440 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 30 13:22:49.180665 extend-filesystems[1455]: Resized filesystem in /dev/sda9 Jan 30 13:22:49.180665 extend-filesystems[1455]: Found sr0 Jan 30 13:22:49.170658 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 30 13:22:49.174045 unknown[1533]: wrote ssh authorized keys file for user: core Jan 30 13:22:49.178601 locksmithd[1503]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 30 13:22:49.195596 containerd[1480]: time="2025-01-30T13:22:49.194518280Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 30 13:22:49.196931 containerd[1480]: time="2025-01-30T13:22:49.196891520Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 30 13:22:49.197165 containerd[1480]: time="2025-01-30T13:22:49.196978520Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 30 13:22:49.197165 containerd[1480]: time="2025-01-30T13:22:49.196999880Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 30 13:22:49.197428 update-ssh-keys[1543]: Updated "/home/core/.ssh/authorized_keys" Jan 30 13:22:49.197722 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jan 30 13:22:49.198552 containerd[1480]: time="2025-01-30T13:22:49.197748120Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 30 13:22:49.198552 containerd[1480]: time="2025-01-30T13:22:49.197775000Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 30 13:22:49.198552 containerd[1480]: time="2025-01-30T13:22:49.197845880Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 13:22:49.198552 containerd[1480]: time="2025-01-30T13:22:49.197858080Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 30 13:22:49.198552 containerd[1480]: time="2025-01-30T13:22:49.198017880Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 13:22:49.198552 containerd[1480]: time="2025-01-30T13:22:49.198031640Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 30 13:22:49.198552 containerd[1480]: time="2025-01-30T13:22:49.198043800Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 13:22:49.198552 containerd[1480]: time="2025-01-30T13:22:49.198052920Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 30 13:22:49.198552 containerd[1480]: time="2025-01-30T13:22:49.198120720Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 30 13:22:49.198552 containerd[1480]: time="2025-01-30T13:22:49.198307960Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 30 13:22:49.198783 containerd[1480]: time="2025-01-30T13:22:49.198764680Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 13:22:49.198858 containerd[1480]: time="2025-01-30T13:22:49.198845400Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 30 13:22:49.198992 containerd[1480]: time="2025-01-30T13:22:49.198975240Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." 
type=io.containerd.metadata.v1 Jan 30 13:22:49.199091 containerd[1480]: time="2025-01-30T13:22:49.199077520Z" level=info msg="metadata content store policy set" policy=shared Jan 30 13:22:49.204371 systemd[1]: Finished sshkeys.service. Jan 30 13:22:49.210046 containerd[1480]: time="2025-01-30T13:22:49.209991920Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 30 13:22:49.210143 containerd[1480]: time="2025-01-30T13:22:49.210070280Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 30 13:22:49.210143 containerd[1480]: time="2025-01-30T13:22:49.210088680Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 30 13:22:49.210143 containerd[1480]: time="2025-01-30T13:22:49.210105960Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 30 13:22:49.210143 containerd[1480]: time="2025-01-30T13:22:49.210123040Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 30 13:22:49.210844 containerd[1480]: time="2025-01-30T13:22:49.210295040Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 30 13:22:49.213439 containerd[1480]: time="2025-01-30T13:22:49.211657800Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 30 13:22:49.213439 containerd[1480]: time="2025-01-30T13:22:49.211823240Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 30 13:22:49.213439 containerd[1480]: time="2025-01-30T13:22:49.211839800Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." 
type=io.containerd.sandbox.store.v1 Jan 30 13:22:49.213439 containerd[1480]: time="2025-01-30T13:22:49.211855600Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 30 13:22:49.213439 containerd[1480]: time="2025-01-30T13:22:49.211869880Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 30 13:22:49.213439 containerd[1480]: time="2025-01-30T13:22:49.211887400Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 30 13:22:49.213439 containerd[1480]: time="2025-01-30T13:22:49.211900000Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 30 13:22:49.213439 containerd[1480]: time="2025-01-30T13:22:49.211913560Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 30 13:22:49.213439 containerd[1480]: time="2025-01-30T13:22:49.211928040Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 30 13:22:49.213439 containerd[1480]: time="2025-01-30T13:22:49.211941600Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 30 13:22:49.213439 containerd[1480]: time="2025-01-30T13:22:49.211953880Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 30 13:22:49.213439 containerd[1480]: time="2025-01-30T13:22:49.211965400Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 30 13:22:49.213439 containerd[1480]: time="2025-01-30T13:22:49.211986040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." 
type=io.containerd.grpc.v1 Jan 30 13:22:49.213439 containerd[1480]: time="2025-01-30T13:22:49.212000360Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 30 13:22:49.213729 containerd[1480]: time="2025-01-30T13:22:49.212014000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 30 13:22:49.213729 containerd[1480]: time="2025-01-30T13:22:49.212027360Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 30 13:22:49.213729 containerd[1480]: time="2025-01-30T13:22:49.212039480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 30 13:22:49.213729 containerd[1480]: time="2025-01-30T13:22:49.212053920Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 30 13:22:49.213729 containerd[1480]: time="2025-01-30T13:22:49.212066240Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 30 13:22:49.213729 containerd[1480]: time="2025-01-30T13:22:49.212078520Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 30 13:22:49.213729 containerd[1480]: time="2025-01-30T13:22:49.212091240Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 30 13:22:49.213729 containerd[1480]: time="2025-01-30T13:22:49.212105600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 30 13:22:49.213729 containerd[1480]: time="2025-01-30T13:22:49.212117400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 30 13:22:49.213729 containerd[1480]: time="2025-01-30T13:22:49.212128960Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." 
type=io.containerd.grpc.v1 Jan 30 13:22:49.213729 containerd[1480]: time="2025-01-30T13:22:49.212141640Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 30 13:22:49.213729 containerd[1480]: time="2025-01-30T13:22:49.212155360Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 30 13:22:49.213729 containerd[1480]: time="2025-01-30T13:22:49.212175560Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 30 13:22:49.213729 containerd[1480]: time="2025-01-30T13:22:49.212192080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 30 13:22:49.213729 containerd[1480]: time="2025-01-30T13:22:49.212203480Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 30 13:22:49.213965 containerd[1480]: time="2025-01-30T13:22:49.212441600Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 30 13:22:49.213965 containerd[1480]: time="2025-01-30T13:22:49.212467040Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 30 13:22:49.213965 containerd[1480]: time="2025-01-30T13:22:49.212478800Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 30 13:22:49.213965 containerd[1480]: time="2025-01-30T13:22:49.212491720Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 30 13:22:49.213965 containerd[1480]: time="2025-01-30T13:22:49.212500880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." 
type=io.containerd.grpc.v1 Jan 30 13:22:49.213965 containerd[1480]: time="2025-01-30T13:22:49.212514080Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 30 13:22:49.213965 containerd[1480]: time="2025-01-30T13:22:49.212524640Z" level=info msg="NRI interface is disabled by configuration." Jan 30 13:22:49.213965 containerd[1480]: time="2025-01-30T13:22:49.212534600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Jan 30 13:22:49.214116 containerd[1480]: time="2025-01-30T13:22:49.212902320Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] 
Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 30 13:22:49.214116 containerd[1480]: time="2025-01-30T13:22:49.212953480Z" level=info msg="Connect containerd service" Jan 30 13:22:49.214116 containerd[1480]: time="2025-01-30T13:22:49.212980440Z" level=info msg="using legacy CRI server" Jan 30 13:22:49.214116 containerd[1480]: time="2025-01-30T13:22:49.212986960Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 30 13:22:49.214116 containerd[1480]: time="2025-01-30T13:22:49.213246520Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 30 13:22:49.215048 containerd[1480]: time="2025-01-30T13:22:49.214980440Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: 
failed to load cni config" Jan 30 13:22:49.215879 containerd[1480]: time="2025-01-30T13:22:49.215859160Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 30 13:22:49.216059 containerd[1480]: time="2025-01-30T13:22:49.216044280Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 30 13:22:49.216329 containerd[1480]: time="2025-01-30T13:22:49.216264600Z" level=info msg="Start subscribing containerd event" Jan 30 13:22:49.216447 containerd[1480]: time="2025-01-30T13:22:49.216432240Z" level=info msg="Start recovering state" Jan 30 13:22:49.216563 containerd[1480]: time="2025-01-30T13:22:49.216550560Z" level=info msg="Start event monitor" Jan 30 13:22:49.216615 containerd[1480]: time="2025-01-30T13:22:49.216604000Z" level=info msg="Start snapshots syncer" Jan 30 13:22:49.216664 containerd[1480]: time="2025-01-30T13:22:49.216653880Z" level=info msg="Start cni network conf syncer for default" Jan 30 13:22:49.216709 containerd[1480]: time="2025-01-30T13:22:49.216697960Z" level=info msg="Start streaming server" Jan 30 13:22:49.216883 containerd[1480]: time="2025-01-30T13:22:49.216870120Z" level=info msg="containerd successfully booted in 0.081036s" Jan 30 13:22:49.216959 systemd[1]: Started containerd.service - containerd container runtime. Jan 30 13:22:49.498472 tar[1470]: linux-arm64/LICENSE Jan 30 13:22:49.498649 tar[1470]: linux-arm64/README.md Jan 30 13:22:49.511463 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 30 13:22:49.781332 sshd_keygen[1496]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 30 13:22:49.793652 systemd-networkd[1378]: eth0: Gained IPv6LL Jan 30 13:22:49.794266 systemd-timesyncd[1360]: Network configuration changed, trying to establish connection. Jan 30 13:22:49.798624 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 30 13:22:49.800473 systemd[1]: Reached target network-online.target - Network is Online. 
Jan 30 13:22:49.812933 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:22:49.816879 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 30 13:22:49.820687 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 30 13:22:49.843746 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 30 13:22:49.853939 systemd[1]: issuegen.service: Deactivated successfully. Jan 30 13:22:49.854177 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 30 13:22:49.864083 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 30 13:22:49.867904 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 30 13:22:49.875768 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 30 13:22:49.886600 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 30 13:22:49.891155 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Jan 30 13:22:49.892106 systemd[1]: Reached target getty.target - Login Prompts. Jan 30 13:22:49.985934 systemd-networkd[1378]: eth1: Gained IPv6LL Jan 30 13:22:49.988823 systemd-timesyncd[1360]: Network configuration changed, trying to establish connection. Jan 30 13:22:50.484592 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:22:50.486342 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 30 13:22:50.491520 systemd[1]: Startup finished in 772ms (kernel) + 6.418s (initrd) + 4.419s (userspace) = 11.610s. 
Jan 30 13:22:50.497048 (kubelet)[1584]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 13:22:50.510357 agetty[1578]: failed to open credentials directory Jan 30 13:22:50.511071 agetty[1577]: failed to open credentials directory Jan 30 13:22:51.004734 kubelet[1584]: E0130 13:22:51.004681 1584 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 13:22:51.009002 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 13:22:51.009164 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 13:23:01.041930 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 30 13:23:01.048725 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:23:01.170067 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:23:01.182153 (kubelet)[1603]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 13:23:01.231547 kubelet[1603]: E0130 13:23:01.231482 1603 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 13:23:01.235460 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 13:23:01.235780 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Jan 30 13:23:11.293888 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 30 13:23:11.299730 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:23:11.466916 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:23:11.466946 (kubelet)[1617]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 13:23:11.518585 kubelet[1617]: E0130 13:23:11.518535 1617 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 13:23:11.522061 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 13:23:11.522641 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 13:23:20.086348 systemd-timesyncd[1360]: Contacted time server 212.95.49.48:123 (2.flatcar.pool.ntp.org). Jan 30 13:23:20.086477 systemd-timesyncd[1360]: Initial clock synchronization to Thu 2025-01-30 13:23:19.771126 UTC. Jan 30 13:23:21.542518 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jan 30 13:23:21.554756 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:23:21.661902 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 30 13:23:21.671869 (kubelet)[1632]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 13:23:21.712095 kubelet[1632]: E0130 13:23:21.712047 1632 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 13:23:21.714644 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 13:23:21.714780 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 13:23:31.792229 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Jan 30 13:23:31.805916 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:23:31.951977 (kubelet)[1647]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 13:23:31.951978 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:23:32.006252 kubelet[1647]: E0130 13:23:32.006196 1647 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 13:23:32.008963 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 13:23:32.009127 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 13:23:33.893371 update_engine[1464]: I20250130 13:23:33.893213 1464 update_attempter.cc:509] Updating boot flags... 
Jan 30 13:23:33.949442 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 39 scanned by (udev-worker) (1663) Jan 30 13:23:34.004819 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 39 scanned by (udev-worker) (1665) Jan 30 13:23:34.065437 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 39 scanned by (udev-worker) (1665) Jan 30 13:23:42.041919 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Jan 30 13:23:42.050736 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:23:42.168978 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:23:42.181453 (kubelet)[1683]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 13:23:42.221850 kubelet[1683]: E0130 13:23:42.221726 1683 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 13:23:42.224229 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 13:23:42.224393 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 13:23:52.292263 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6. Jan 30 13:23:52.301837 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:23:52.422128 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 30 13:23:52.427746 (kubelet)[1698]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 13:23:52.476528 kubelet[1698]: E0130 13:23:52.476466 1698 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 13:23:52.479760 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 13:23:52.480155 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 13:24:02.542266 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7. Jan 30 13:24:02.555748 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:24:02.683736 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:24:02.685691 (kubelet)[1714]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 13:24:02.729062 kubelet[1714]: E0130 13:24:02.728998 1714 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 13:24:02.731064 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 13:24:02.731189 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 13:24:12.792531 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 8. Jan 30 13:24:12.805747 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Jan 30 13:24:12.917781 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:24:12.931387 (kubelet)[1729]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 13:24:12.975643 kubelet[1729]: E0130 13:24:12.975571 1729 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 13:24:12.978139 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 13:24:12.978320 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 13:24:23.041867 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 9. Jan 30 13:24:23.049729 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:24:23.183997 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:24:23.189032 (kubelet)[1744]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 13:24:23.231978 kubelet[1744]: E0130 13:24:23.231886 1744 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 13:24:23.234890 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 13:24:23.235104 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 13:24:33.292006 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 10. 
Jan 30 13:24:33.300855 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:24:33.416989 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:24:33.429222 (kubelet)[1759]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 13:24:33.474427 kubelet[1759]: E0130 13:24:33.474300 1759 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 13:24:33.476610 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 13:24:33.476760 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 13:24:43.542538 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 11. Jan 30 13:24:43.559790 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:24:43.668855 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:24:43.682018 (kubelet)[1774]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 13:24:43.729086 kubelet[1774]: E0130 13:24:43.729011 1774 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 13:24:43.733481 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 13:24:43.733824 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Jan 30 13:24:46.687747 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 30 13:24:46.699892 systemd[1]: Started sshd@0-5.75.240.180:22-139.178.68.195:41524.service - OpenSSH per-connection server daemon (139.178.68.195:41524). Jan 30 13:24:47.699899 sshd[1782]: Accepted publickey for core from 139.178.68.195 port 41524 ssh2: RSA SHA256:RAqiXcD7auv4NtIWZl6x8O0m1t6BnLWhbotdWAXUAIk Jan 30 13:24:47.702736 sshd-session[1782]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:24:47.715945 systemd-logind[1463]: New session 1 of user core. Jan 30 13:24:47.718668 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 30 13:24:47.725569 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 30 13:24:47.741813 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 30 13:24:47.749011 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 30 13:24:47.753736 (systemd)[1786]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 30 13:24:47.868215 systemd[1786]: Queued start job for default target default.target. Jan 30 13:24:47.876458 systemd[1786]: Created slice app.slice - User Application Slice. Jan 30 13:24:47.876636 systemd[1786]: Reached target paths.target - Paths. Jan 30 13:24:47.876652 systemd[1786]: Reached target timers.target - Timers. Jan 30 13:24:47.878452 systemd[1786]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 30 13:24:47.894923 systemd[1786]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 30 13:24:47.895125 systemd[1786]: Reached target sockets.target - Sockets. Jan 30 13:24:47.895151 systemd[1786]: Reached target basic.target - Basic System. Jan 30 13:24:47.895251 systemd[1786]: Reached target default.target - Main User Target. Jan 30 13:24:47.895303 systemd[1786]: Startup finished in 134ms. 
Jan 30 13:24:47.895805 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 30 13:24:47.904737 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 30 13:24:48.609967 systemd[1]: Started sshd@1-5.75.240.180:22-139.178.68.195:41530.service - OpenSSH per-connection server daemon (139.178.68.195:41530). Jan 30 13:24:49.604641 sshd[1797]: Accepted publickey for core from 139.178.68.195 port 41530 ssh2: RSA SHA256:RAqiXcD7auv4NtIWZl6x8O0m1t6BnLWhbotdWAXUAIk Jan 30 13:24:49.606925 sshd-session[1797]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:24:49.613728 systemd-logind[1463]: New session 2 of user core. Jan 30 13:24:49.619527 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 30 13:24:50.292474 sshd[1799]: Connection closed by 139.178.68.195 port 41530 Jan 30 13:24:50.293703 sshd-session[1797]: pam_unix(sshd:session): session closed for user core Jan 30 13:24:50.297730 systemd[1]: sshd@1-5.75.240.180:22-139.178.68.195:41530.service: Deactivated successfully. Jan 30 13:24:50.299487 systemd[1]: session-2.scope: Deactivated successfully. Jan 30 13:24:50.301226 systemd-logind[1463]: Session 2 logged out. Waiting for processes to exit. Jan 30 13:24:50.303037 systemd-logind[1463]: Removed session 2. Jan 30 13:24:50.464065 systemd[1]: Started sshd@2-5.75.240.180:22-139.178.68.195:41542.service - OpenSSH per-connection server daemon (139.178.68.195:41542). Jan 30 13:24:51.464894 sshd[1804]: Accepted publickey for core from 139.178.68.195 port 41542 ssh2: RSA SHA256:RAqiXcD7auv4NtIWZl6x8O0m1t6BnLWhbotdWAXUAIk Jan 30 13:24:51.466931 sshd-session[1804]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:24:51.473441 systemd-logind[1463]: New session 3 of user core. Jan 30 13:24:51.479790 systemd[1]: Started session-3.scope - Session 3 of User core. 
Jan 30 13:24:52.146067 sshd[1806]: Connection closed by 139.178.68.195 port 41542 Jan 30 13:24:52.146837 sshd-session[1804]: pam_unix(sshd:session): session closed for user core Jan 30 13:24:52.151409 systemd[1]: sshd@2-5.75.240.180:22-139.178.68.195:41542.service: Deactivated successfully. Jan 30 13:24:52.153351 systemd[1]: session-3.scope: Deactivated successfully. Jan 30 13:24:52.154295 systemd-logind[1463]: Session 3 logged out. Waiting for processes to exit. Jan 30 13:24:52.156043 systemd-logind[1463]: Removed session 3. Jan 30 13:24:52.324933 systemd[1]: Started sshd@3-5.75.240.180:22-139.178.68.195:41554.service - OpenSSH per-connection server daemon (139.178.68.195:41554). Jan 30 13:24:53.315675 sshd[1811]: Accepted publickey for core from 139.178.68.195 port 41554 ssh2: RSA SHA256:RAqiXcD7auv4NtIWZl6x8O0m1t6BnLWhbotdWAXUAIk Jan 30 13:24:53.317494 sshd-session[1811]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:24:53.322222 systemd-logind[1463]: New session 4 of user core. Jan 30 13:24:53.332785 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 30 13:24:53.792337 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 12. Jan 30 13:24:53.803842 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:24:53.940123 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 30 13:24:53.951914 (kubelet)[1823]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 13:24:53.994896 kubelet[1823]: E0130 13:24:53.994763 1823 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 13:24:53.997270 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 13:24:53.997441 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 13:24:54.005149 sshd[1813]: Connection closed by 139.178.68.195 port 41554 Jan 30 13:24:54.006402 sshd-session[1811]: pam_unix(sshd:session): session closed for user core Jan 30 13:24:54.011778 systemd[1]: sshd@3-5.75.240.180:22-139.178.68.195:41554.service: Deactivated successfully. Jan 30 13:24:54.014875 systemd[1]: session-4.scope: Deactivated successfully. Jan 30 13:24:54.016012 systemd-logind[1463]: Session 4 logged out. Waiting for processes to exit. Jan 30 13:24:54.017299 systemd-logind[1463]: Removed session 4. Jan 30 13:24:54.176672 systemd[1]: Started sshd@4-5.75.240.180:22-139.178.68.195:41566.service - OpenSSH per-connection server daemon (139.178.68.195:41566). Jan 30 13:24:55.163814 sshd[1833]: Accepted publickey for core from 139.178.68.195 port 41566 ssh2: RSA SHA256:RAqiXcD7auv4NtIWZl6x8O0m1t6BnLWhbotdWAXUAIk Jan 30 13:24:55.166119 sshd-session[1833]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:24:55.172384 systemd-logind[1463]: New session 5 of user core. Jan 30 13:24:55.180773 systemd[1]: Started session-5.scope - Session 5 of User core. 
Jan 30 13:24:55.691759 sudo[1836]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 30 13:24:55.692031 sudo[1836]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 13:24:55.706781 sudo[1836]: pam_unix(sudo:session): session closed for user root Jan 30 13:24:55.865999 sshd[1835]: Connection closed by 139.178.68.195 port 41566 Jan 30 13:24:55.867365 sshd-session[1833]: pam_unix(sshd:session): session closed for user core Jan 30 13:24:55.872373 systemd[1]: sshd@4-5.75.240.180:22-139.178.68.195:41566.service: Deactivated successfully. Jan 30 13:24:55.874346 systemd[1]: session-5.scope: Deactivated successfully. Jan 30 13:24:55.875189 systemd-logind[1463]: Session 5 logged out. Waiting for processes to exit. Jan 30 13:24:55.876379 systemd-logind[1463]: Removed session 5. Jan 30 13:24:56.044263 systemd[1]: Started sshd@5-5.75.240.180:22-139.178.68.195:50716.service - OpenSSH per-connection server daemon (139.178.68.195:50716). Jan 30 13:24:57.015729 sshd[1841]: Accepted publickey for core from 139.178.68.195 port 50716 ssh2: RSA SHA256:RAqiXcD7auv4NtIWZl6x8O0m1t6BnLWhbotdWAXUAIk Jan 30 13:24:57.018190 sshd-session[1841]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:24:57.023479 systemd-logind[1463]: New session 6 of user core. Jan 30 13:24:57.031834 systemd[1]: Started session-6.scope - Session 6 of User core. 
Jan 30 13:24:57.533150 sudo[1845]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 30 13:24:57.533498 sudo[1845]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 13:24:57.537753 sudo[1845]: pam_unix(sudo:session): session closed for user root Jan 30 13:24:57.543595 sudo[1844]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jan 30 13:24:57.543901 sudo[1844]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 13:24:57.560581 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 30 13:24:57.593468 augenrules[1867]: No rules Jan 30 13:24:57.595151 systemd[1]: audit-rules.service: Deactivated successfully. Jan 30 13:24:57.595331 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 30 13:24:57.596577 sudo[1844]: pam_unix(sudo:session): session closed for user root Jan 30 13:24:57.753786 sshd[1843]: Connection closed by 139.178.68.195 port 50716 Jan 30 13:24:57.754354 sshd-session[1841]: pam_unix(sshd:session): session closed for user core Jan 30 13:24:57.760300 systemd-logind[1463]: Session 6 logged out. Waiting for processes to exit. Jan 30 13:24:57.761241 systemd[1]: sshd@5-5.75.240.180:22-139.178.68.195:50716.service: Deactivated successfully. Jan 30 13:24:57.763696 systemd[1]: session-6.scope: Deactivated successfully. Jan 30 13:24:57.765577 systemd-logind[1463]: Removed session 6. Jan 30 13:24:57.936236 systemd[1]: Started sshd@6-5.75.240.180:22-139.178.68.195:50730.service - OpenSSH per-connection server daemon (139.178.68.195:50730). 
Jan 30 13:24:58.923872 sshd[1875]: Accepted publickey for core from 139.178.68.195 port 50730 ssh2: RSA SHA256:RAqiXcD7auv4NtIWZl6x8O0m1t6BnLWhbotdWAXUAIk Jan 30 13:24:58.925747 sshd-session[1875]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:24:58.932865 systemd-logind[1463]: New session 7 of user core. Jan 30 13:24:58.943040 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 30 13:24:59.446129 sudo[1878]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 30 13:24:59.446845 sudo[1878]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 13:24:59.766980 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 30 13:24:59.767654 (dockerd)[1897]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 30 13:25:00.002295 dockerd[1897]: time="2025-01-30T13:25:00.001950092Z" level=info msg="Starting up" Jan 30 13:25:00.101214 dockerd[1897]: time="2025-01-30T13:25:00.101160504Z" level=info msg="Loading containers: start." Jan 30 13:25:00.270451 kernel: Initializing XFRM netlink socket Jan 30 13:25:00.368603 systemd-networkd[1378]: docker0: Link UP Jan 30 13:25:00.409502 dockerd[1897]: time="2025-01-30T13:25:00.409266607Z" level=info msg="Loading containers: done." 
Jan 30 13:25:00.426098 dockerd[1897]: time="2025-01-30T13:25:00.426032102Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 30 13:25:00.426269 dockerd[1897]: time="2025-01-30T13:25:00.426142188Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1 Jan 30 13:25:00.426445 dockerd[1897]: time="2025-01-30T13:25:00.426323038Z" level=info msg="Daemon has completed initialization" Jan 30 13:25:00.426758 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3916520671-merged.mount: Deactivated successfully. Jan 30 13:25:00.470473 dockerd[1897]: time="2025-01-30T13:25:00.469683776Z" level=info msg="API listen on /run/docker.sock" Jan 30 13:25:00.469993 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 30 13:25:01.529483 containerd[1480]: time="2025-01-30T13:25:01.529173085Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.5\"" Jan 30 13:25:02.220344 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4131631463.mount: Deactivated successfully. 
Jan 30 13:25:03.570694 containerd[1480]: time="2025-01-30T13:25:03.570491480Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:25:03.573088 containerd[1480]: time="2025-01-30T13:25:03.573035056Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.5: active requests=0, bytes read=25618162" Jan 30 13:25:03.574908 containerd[1480]: time="2025-01-30T13:25:03.574816232Z" level=info msg="ImageCreate event name:\"sha256:c33b6b5a9aa5348a4f3ab96e0977e49acb8ca86c4ec3973023e12c0083423692\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:25:03.579098 containerd[1480]: time="2025-01-30T13:25:03.578965615Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:fc4b366c0036b90d147f3b58244cf7d5f1f42b0db539f0fe83a8fc6e25a434ab\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:25:03.581399 containerd[1480]: time="2025-01-30T13:25:03.581353303Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.5\" with image id \"sha256:c33b6b5a9aa5348a4f3ab96e0977e49acb8ca86c4ec3973023e12c0083423692\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.5\", repo digest \"registry.k8s.io/kube-apiserver@sha256:fc4b366c0036b90d147f3b58244cf7d5f1f42b0db539f0fe83a8fc6e25a434ab\", size \"25614870\" in 2.052132255s" Jan 30 13:25:03.582104 containerd[1480]: time="2025-01-30T13:25:03.581564114Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.5\" returns image reference \"sha256:c33b6b5a9aa5348a4f3ab96e0977e49acb8ca86c4ec3973023e12c0083423692\"" Jan 30 13:25:03.582241 containerd[1480]: time="2025-01-30T13:25:03.582211589Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.5\"" Jan 30 13:25:04.042492 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 13. 
Jan 30 13:25:04.049767 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:25:04.191451 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:25:04.206397 (kubelet)[2144]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 13:25:04.262731 kubelet[2144]: E0130 13:25:04.262342 2144 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 13:25:04.266199 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 13:25:04.266559 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 13:25:05.139036 containerd[1480]: time="2025-01-30T13:25:05.138884224Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:25:05.140387 containerd[1480]: time="2025-01-30T13:25:05.140327260Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.5: active requests=0, bytes read=22469487" Jan 30 13:25:05.141095 containerd[1480]: time="2025-01-30T13:25:05.141022256Z" level=info msg="ImageCreate event name:\"sha256:678a3aee724f5d7904c30cda32c06f842784d67e7bd0cece4225fa7c1dcd0c73\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:25:05.144229 containerd[1480]: time="2025-01-30T13:25:05.144140980Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:848cf42bf6c3c5ccac232b76c901c309edb3ebeac4d856885af0fc718798207e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:25:05.145548 containerd[1480]: time="2025-01-30T13:25:05.145388565Z" 
level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.5\" with image id \"sha256:678a3aee724f5d7904c30cda32c06f842784d67e7bd0cece4225fa7c1dcd0c73\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.5\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:848cf42bf6c3c5ccac232b76c901c309edb3ebeac4d856885af0fc718798207e\", size \"23873257\" in 1.563138814s" Jan 30 13:25:05.145548 containerd[1480]: time="2025-01-30T13:25:05.145441128Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.5\" returns image reference \"sha256:678a3aee724f5d7904c30cda32c06f842784d67e7bd0cece4225fa7c1dcd0c73\"" Jan 30 13:25:05.146398 containerd[1480]: time="2025-01-30T13:25:05.146283172Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.5\"" Jan 30 13:25:06.456002 containerd[1480]: time="2025-01-30T13:25:06.455885897Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:25:06.458388 containerd[1480]: time="2025-01-30T13:25:06.458325344Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.5: active requests=0, bytes read=17024237" Jan 30 13:25:06.459882 containerd[1480]: time="2025-01-30T13:25:06.459751577Z" level=info msg="ImageCreate event name:\"sha256:066a1dc527aec5b7c19bcf4b81f92b15816afc78e9713266d355333b7eb81050\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:25:06.462748 containerd[1480]: time="2025-01-30T13:25:06.462673369Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:0e01fd956ba32a7fa08f6b6da24e8c49015905c8e2cf752978d495e44cd4a8a9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:25:06.464169 containerd[1480]: time="2025-01-30T13:25:06.464027919Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.5\" with image id 
\"sha256:066a1dc527aec5b7c19bcf4b81f92b15816afc78e9713266d355333b7eb81050\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.5\", repo digest \"registry.k8s.io/kube-scheduler@sha256:0e01fd956ba32a7fa08f6b6da24e8c49015905c8e2cf752978d495e44cd4a8a9\", size \"18428025\" in 1.317703584s" Jan 30 13:25:06.464169 containerd[1480]: time="2025-01-30T13:25:06.464071201Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.5\" returns image reference \"sha256:066a1dc527aec5b7c19bcf4b81f92b15816afc78e9713266d355333b7eb81050\"" Jan 30 13:25:06.464837 containerd[1480]: time="2025-01-30T13:25:06.464704194Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.5\"" Jan 30 13:25:07.547381 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2671976089.mount: Deactivated successfully. Jan 30 13:25:07.835753 containerd[1480]: time="2025-01-30T13:25:07.834829722Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:25:07.837317 containerd[1480]: time="2025-01-30T13:25:07.837266527Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.5: active requests=0, bytes read=26772143" Jan 30 13:25:07.838742 containerd[1480]: time="2025-01-30T13:25:07.838696240Z" level=info msg="ImageCreate event name:\"sha256:571bb7ded0ff97311ed313f069becb58480cd66da04175981cfee2f3affe3e95\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:25:07.843151 containerd[1480]: time="2025-01-30T13:25:07.843088745Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:c00685cc45c1fb539c5bbd8d24d2577f96e9399efac1670f688f654b30f8c64c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:25:07.843926 containerd[1480]: time="2025-01-30T13:25:07.843886986Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.5\" with image id \"sha256:571bb7ded0ff97311ed313f069becb58480cd66da04175981cfee2f3affe3e95\", repo tag 
\"registry.k8s.io/kube-proxy:v1.31.5\", repo digest \"registry.k8s.io/kube-proxy@sha256:c00685cc45c1fb539c5bbd8d24d2577f96e9399efac1670f688f654b30f8c64c\", size \"26771136\" in 1.378950939s" Jan 30 13:25:07.843926 containerd[1480]: time="2025-01-30T13:25:07.843919467Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.5\" returns image reference \"sha256:571bb7ded0ff97311ed313f069becb58480cd66da04175981cfee2f3affe3e95\"" Jan 30 13:25:07.844570 containerd[1480]: time="2025-01-30T13:25:07.844531539Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Jan 30 13:25:08.483731 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2952455453.mount: Deactivated successfully. Jan 30 13:25:09.348605 containerd[1480]: time="2025-01-30T13:25:09.348537769Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:25:09.350393 containerd[1480]: time="2025-01-30T13:25:09.350170771Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485461" Jan 30 13:25:09.351220 containerd[1480]: time="2025-01-30T13:25:09.351149420Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:25:09.354954 containerd[1480]: time="2025-01-30T13:25:09.354889007Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:25:09.356525 containerd[1480]: time="2025-01-30T13:25:09.356346600Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest 
\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.5117767s" Jan 30 13:25:09.356525 containerd[1480]: time="2025-01-30T13:25:09.356391322Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\"" Jan 30 13:25:09.359793 containerd[1480]: time="2025-01-30T13:25:09.359434195Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jan 30 13:25:09.959215 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2242581430.mount: Deactivated successfully. Jan 30 13:25:09.968664 containerd[1480]: time="2025-01-30T13:25:09.967769106Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:25:09.969747 containerd[1480]: time="2025-01-30T13:25:09.969705243Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268723" Jan 30 13:25:09.970999 containerd[1480]: time="2025-01-30T13:25:09.970971586Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:25:09.973757 containerd[1480]: time="2025-01-30T13:25:09.973730725Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:25:09.975561 containerd[1480]: time="2025-01-30T13:25:09.975532455Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 616.059018ms" Jan 30 
13:25:09.975673 containerd[1480]: time="2025-01-30T13:25:09.975657501Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Jan 30 13:25:09.976177 containerd[1480]: time="2025-01-30T13:25:09.976153646Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Jan 30 13:25:10.553552 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4042802519.mount: Deactivated successfully. Jan 30 13:25:11.892595 containerd[1480]: time="2025-01-30T13:25:11.892446755Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:25:11.894308 containerd[1480]: time="2025-01-30T13:25:11.894258804Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=66406487" Jan 30 13:25:11.895048 containerd[1480]: time="2025-01-30T13:25:11.894551578Z" level=info msg="ImageCreate event name:\"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:25:11.899764 containerd[1480]: time="2025-01-30T13:25:11.899661749Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:25:11.901460 containerd[1480]: time="2025-01-30T13:25:11.901304989Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"66535646\" in 1.925024097s" Jan 30 13:25:11.901460 containerd[1480]: time="2025-01-30T13:25:11.901343671Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image 
reference \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\"" Jan 30 13:25:14.292478 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 14. Jan 30 13:25:14.301576 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:25:14.423666 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:25:14.430304 (kubelet)[2297]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 13:25:14.470437 kubelet[2297]: E0130 13:25:14.470057 2297 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 13:25:14.471809 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 13:25:14.471936 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 13:25:17.302459 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:25:17.310927 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:25:17.351972 systemd[1]: Reloading requested from client PID 2312 ('systemctl') (unit session-7.scope)... Jan 30 13:25:17.351991 systemd[1]: Reloading... Jan 30 13:25:17.471472 zram_generator::config[2355]: No configuration found. Jan 30 13:25:17.568407 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 30 13:25:17.636068 systemd[1]: Reloading finished in 283 ms. 
Jan 30 13:25:17.696517 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 30 13:25:17.696899 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 30 13:25:17.697553 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:25:17.704937 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:25:17.821647 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:25:17.830931 (kubelet)[2401]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 30 13:25:17.879910 kubelet[2401]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 30 13:25:17.879910 kubelet[2401]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 30 13:25:17.879910 kubelet[2401]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 30 13:25:17.880258 kubelet[2401]: I0130 13:25:17.880148 2401 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 30 13:25:18.253925 kubelet[2401]: I0130 13:25:18.252991 2401 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Jan 30 13:25:18.253925 kubelet[2401]: I0130 13:25:18.253027 2401 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 30 13:25:18.253925 kubelet[2401]: I0130 13:25:18.253261 2401 server.go:929] "Client rotation is on, will bootstrap in background" Jan 30 13:25:18.281897 kubelet[2401]: E0130 13:25:18.281525 2401 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://5.75.240.180:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 5.75.240.180:6443: connect: connection refused" logger="UnhandledError" Jan 30 13:25:18.283013 kubelet[2401]: I0130 13:25:18.282701 2401 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 30 13:25:18.292094 kubelet[2401]: E0130 13:25:18.292007 2401 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 30 13:25:18.292094 kubelet[2401]: I0130 13:25:18.292046 2401 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 30 13:25:18.297360 kubelet[2401]: I0130 13:25:18.297324 2401 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 30 13:25:18.298712 kubelet[2401]: I0130 13:25:18.298654 2401 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jan 30 13:25:18.298914 kubelet[2401]: I0130 13:25:18.298853 2401 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 30 13:25:18.299096 kubelet[2401]: I0130 13:25:18.298890 2401 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4186-1-0-7-1c3f91851a","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","Topolog
yManagerPolicyOptions":null,"CgroupVersion":2} Jan 30 13:25:18.299285 kubelet[2401]: I0130 13:25:18.299200 2401 topology_manager.go:138] "Creating topology manager with none policy" Jan 30 13:25:18.299285 kubelet[2401]: I0130 13:25:18.299209 2401 container_manager_linux.go:300] "Creating device plugin manager" Jan 30 13:25:18.299432 kubelet[2401]: I0130 13:25:18.299407 2401 state_mem.go:36] "Initialized new in-memory state store" Jan 30 13:25:18.301838 kubelet[2401]: I0130 13:25:18.301589 2401 kubelet.go:408] "Attempting to sync node with API server" Jan 30 13:25:18.301838 kubelet[2401]: I0130 13:25:18.301619 2401 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 30 13:25:18.301838 kubelet[2401]: I0130 13:25:18.301718 2401 kubelet.go:314] "Adding apiserver pod source" Jan 30 13:25:18.301838 kubelet[2401]: I0130 13:25:18.301728 2401 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 30 13:25:18.305024 kubelet[2401]: W0130 13:25:18.304951 2401 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://5.75.240.180:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4186-1-0-7-1c3f91851a&limit=500&resourceVersion=0": dial tcp 5.75.240.180:6443: connect: connection refused Jan 30 13:25:18.305214 kubelet[2401]: E0130 13:25:18.305188 2401 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://5.75.240.180:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4186-1-0-7-1c3f91851a&limit=500&resourceVersion=0\": dial tcp 5.75.240.180:6443: connect: connection refused" logger="UnhandledError" Jan 30 13:25:18.305658 kubelet[2401]: I0130 13:25:18.305435 2401 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Jan 30 13:25:18.307563 kubelet[2401]: I0130 13:25:18.307539 2401 kubelet.go:837] "Not starting ClusterTrustBundle 
informer because we are in static kubelet mode" Jan 30 13:25:18.308652 kubelet[2401]: W0130 13:25:18.308631 2401 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 30 13:25:18.310552 kubelet[2401]: I0130 13:25:18.310527 2401 server.go:1269] "Started kubelet" Jan 30 13:25:18.311256 kubelet[2401]: W0130 13:25:18.311181 2401 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://5.75.240.180:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 5.75.240.180:6443: connect: connection refused Jan 30 13:25:18.311256 kubelet[2401]: E0130 13:25:18.311244 2401 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://5.75.240.180:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 5.75.240.180:6443: connect: connection refused" logger="UnhandledError" Jan 30 13:25:18.311925 kubelet[2401]: I0130 13:25:18.311347 2401 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 30 13:25:18.315591 kubelet[2401]: I0130 13:25:18.315491 2401 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 30 13:25:18.316273 kubelet[2401]: I0130 13:25:18.316237 2401 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 30 13:25:18.318764 kubelet[2401]: I0130 13:25:18.318733 2401 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 30 13:25:18.321642 kubelet[2401]: E0130 13:25:18.319211 2401 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://5.75.240.180:6443/api/v1/namespaces/default/events\": dial tcp 5.75.240.180:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4186-1-0-7-1c3f91851a.181f7b453205c13f 
default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4186-1-0-7-1c3f91851a,UID:ci-4186-1-0-7-1c3f91851a,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4186-1-0-7-1c3f91851a,},FirstTimestamp:2025-01-30 13:25:18.310498623 +0000 UTC m=+0.475016276,LastTimestamp:2025-01-30 13:25:18.310498623 +0000 UTC m=+0.475016276,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4186-1-0-7-1c3f91851a,}" Jan 30 13:25:18.325028 kubelet[2401]: I0130 13:25:18.324959 2401 server.go:460] "Adding debug handlers to kubelet server" Jan 30 13:25:18.326154 kubelet[2401]: I0130 13:25:18.325316 2401 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 30 13:25:18.326154 kubelet[2401]: I0130 13:25:18.325331 2401 volume_manager.go:289] "Starting Kubelet Volume Manager" Jan 30 13:25:18.326154 kubelet[2401]: E0130 13:25:18.325724 2401 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4186-1-0-7-1c3f91851a\" not found" Jan 30 13:25:18.328017 kubelet[2401]: E0130 13:25:18.327983 2401 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://5.75.240.180:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4186-1-0-7-1c3f91851a?timeout=10s\": dial tcp 5.75.240.180:6443: connect: connection refused" interval="200ms" Jan 30 13:25:18.329342 kubelet[2401]: E0130 13:25:18.328171 2401 kubelet.go:1478] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 30 13:25:18.329342 kubelet[2401]: I0130 13:25:18.328500 2401 reconciler.go:26] "Reconciler: start to sync state" Jan 30 13:25:18.329342 kubelet[2401]: I0130 13:25:18.328545 2401 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 30 13:25:18.329342 kubelet[2401]: W0130 13:25:18.328866 2401 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://5.75.240.180:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 5.75.240.180:6443: connect: connection refused Jan 30 13:25:18.329342 kubelet[2401]: E0130 13:25:18.328910 2401 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://5.75.240.180:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 5.75.240.180:6443: connect: connection refused" logger="UnhandledError" Jan 30 13:25:18.329537 kubelet[2401]: I0130 13:25:18.329470 2401 factory.go:221] Registration of the systemd container factory successfully Jan 30 13:25:18.329581 kubelet[2401]: I0130 13:25:18.329561 2401 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 30 13:25:18.332187 kubelet[2401]: I0130 13:25:18.332161 2401 factory.go:221] Registration of the containerd container factory successfully Jan 30 13:25:18.341236 kubelet[2401]: I0130 13:25:18.341190 2401 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 30 13:25:18.342495 kubelet[2401]: I0130 13:25:18.342469 2401 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 30 13:25:18.342610 kubelet[2401]: I0130 13:25:18.342599 2401 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 30 13:25:18.342672 kubelet[2401]: I0130 13:25:18.342663 2401 kubelet.go:2321] "Starting kubelet main sync loop" Jan 30 13:25:18.343024 kubelet[2401]: E0130 13:25:18.342763 2401 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 30 13:25:18.349898 kubelet[2401]: W0130 13:25:18.349843 2401 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://5.75.240.180:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 5.75.240.180:6443: connect: connection refused Jan 30 13:25:18.350088 kubelet[2401]: E0130 13:25:18.350065 2401 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://5.75.240.180:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 5.75.240.180:6443: connect: connection refused" logger="UnhandledError" Jan 30 13:25:18.368487 kubelet[2401]: I0130 13:25:18.368332 2401 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 30 13:25:18.368487 kubelet[2401]: I0130 13:25:18.368352 2401 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 30 13:25:18.368487 kubelet[2401]: I0130 13:25:18.368371 2401 state_mem.go:36] "Initialized new in-memory state store" Jan 30 13:25:18.370805 kubelet[2401]: I0130 13:25:18.370771 2401 policy_none.go:49] "None policy: Start" Jan 30 13:25:18.371617 kubelet[2401]: I0130 13:25:18.371595 2401 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 30 13:25:18.371817 kubelet[2401]: I0130 13:25:18.371805 2401 state_mem.go:35] "Initializing new in-memory state store" Jan 30 13:25:18.379450 systemd[1]: Created slice kubepods.slice - 
libcontainer container kubepods.slice. Jan 30 13:25:18.391067 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 30 13:25:18.396564 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 30 13:25:18.405984 kubelet[2401]: I0130 13:25:18.405000 2401 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 30 13:25:18.405984 kubelet[2401]: I0130 13:25:18.405312 2401 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 30 13:25:18.405984 kubelet[2401]: I0130 13:25:18.405331 2401 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 30 13:25:18.405984 kubelet[2401]: I0130 13:25:18.405724 2401 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 30 13:25:18.409245 kubelet[2401]: E0130 13:25:18.409164 2401 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4186-1-0-7-1c3f91851a\" not found" Jan 30 13:25:18.456056 systemd[1]: Created slice kubepods-burstable-podb6d55f625e59a1a53b13ebe868fe7070.slice - libcontainer container kubepods-burstable-podb6d55f625e59a1a53b13ebe868fe7070.slice. Jan 30 13:25:18.475185 systemd[1]: Created slice kubepods-burstable-pod3f9428054fca67d8e7533eae37edcff6.slice - libcontainer container kubepods-burstable-pod3f9428054fca67d8e7533eae37edcff6.slice. Jan 30 13:25:18.479636 systemd[1]: Created slice kubepods-burstable-pod91cdcc4982a1538f03f6f54cd7fac606.slice - libcontainer container kubepods-burstable-pod91cdcc4982a1538f03f6f54cd7fac606.slice. 
Jan 30 13:25:18.509508 kubelet[2401]: I0130 13:25:18.507952 2401 kubelet_node_status.go:72] "Attempting to register node" node="ci-4186-1-0-7-1c3f91851a" Jan 30 13:25:18.509508 kubelet[2401]: E0130 13:25:18.508559 2401 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://5.75.240.180:6443/api/v1/nodes\": dial tcp 5.75.240.180:6443: connect: connection refused" node="ci-4186-1-0-7-1c3f91851a" Jan 30 13:25:18.529275 kubelet[2401]: I0130 13:25:18.529238 2401 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/91cdcc4982a1538f03f6f54cd7fac606-flexvolume-dir\") pod \"kube-controller-manager-ci-4186-1-0-7-1c3f91851a\" (UID: \"91cdcc4982a1538f03f6f54cd7fac606\") " pod="kube-system/kube-controller-manager-ci-4186-1-0-7-1c3f91851a" Jan 30 13:25:18.529516 kubelet[2401]: I0130 13:25:18.529498 2401 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/91cdcc4982a1538f03f6f54cd7fac606-kubeconfig\") pod \"kube-controller-manager-ci-4186-1-0-7-1c3f91851a\" (UID: \"91cdcc4982a1538f03f6f54cd7fac606\") " pod="kube-system/kube-controller-manager-ci-4186-1-0-7-1c3f91851a" Jan 30 13:25:18.529606 kubelet[2401]: I0130 13:25:18.529592 2401 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3f9428054fca67d8e7533eae37edcff6-kubeconfig\") pod \"kube-scheduler-ci-4186-1-0-7-1c3f91851a\" (UID: \"3f9428054fca67d8e7533eae37edcff6\") " pod="kube-system/kube-scheduler-ci-4186-1-0-7-1c3f91851a" Jan 30 13:25:18.529690 kubelet[2401]: I0130 13:25:18.529675 2401 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b6d55f625e59a1a53b13ebe868fe7070-ca-certs\") pod 
\"kube-apiserver-ci-4186-1-0-7-1c3f91851a\" (UID: \"b6d55f625e59a1a53b13ebe868fe7070\") " pod="kube-system/kube-apiserver-ci-4186-1-0-7-1c3f91851a" Jan 30 13:25:18.529773 kubelet[2401]: E0130 13:25:18.529301 2401 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://5.75.240.180:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4186-1-0-7-1c3f91851a?timeout=10s\": dial tcp 5.75.240.180:6443: connect: connection refused" interval="400ms" Jan 30 13:25:18.529808 kubelet[2401]: I0130 13:25:18.529746 2401 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b6d55f625e59a1a53b13ebe868fe7070-k8s-certs\") pod \"kube-apiserver-ci-4186-1-0-7-1c3f91851a\" (UID: \"b6d55f625e59a1a53b13ebe868fe7070\") " pod="kube-system/kube-apiserver-ci-4186-1-0-7-1c3f91851a" Jan 30 13:25:18.529896 kubelet[2401]: I0130 13:25:18.529867 2401 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b6d55f625e59a1a53b13ebe868fe7070-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4186-1-0-7-1c3f91851a\" (UID: \"b6d55f625e59a1a53b13ebe868fe7070\") " pod="kube-system/kube-apiserver-ci-4186-1-0-7-1c3f91851a" Jan 30 13:25:18.529959 kubelet[2401]: I0130 13:25:18.529934 2401 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/91cdcc4982a1538f03f6f54cd7fac606-ca-certs\") pod \"kube-controller-manager-ci-4186-1-0-7-1c3f91851a\" (UID: \"91cdcc4982a1538f03f6f54cd7fac606\") " pod="kube-system/kube-controller-manager-ci-4186-1-0-7-1c3f91851a" Jan 30 13:25:18.530009 kubelet[2401]: I0130 13:25:18.529982 2401 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/91cdcc4982a1538f03f6f54cd7fac606-k8s-certs\") pod \"kube-controller-manager-ci-4186-1-0-7-1c3f91851a\" (UID: \"91cdcc4982a1538f03f6f54cd7fac606\") " pod="kube-system/kube-controller-manager-ci-4186-1-0-7-1c3f91851a" Jan 30 13:25:18.530029 kubelet[2401]: I0130 13:25:18.530007 2401 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/91cdcc4982a1538f03f6f54cd7fac606-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4186-1-0-7-1c3f91851a\" (UID: \"91cdcc4982a1538f03f6f54cd7fac606\") " pod="kube-system/kube-controller-manager-ci-4186-1-0-7-1c3f91851a" Jan 30 13:25:18.711607 kubelet[2401]: I0130 13:25:18.711555 2401 kubelet_node_status.go:72] "Attempting to register node" node="ci-4186-1-0-7-1c3f91851a" Jan 30 13:25:18.712157 kubelet[2401]: E0130 13:25:18.712037 2401 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://5.75.240.180:6443/api/v1/nodes\": dial tcp 5.75.240.180:6443: connect: connection refused" node="ci-4186-1-0-7-1c3f91851a" Jan 30 13:25:18.773316 containerd[1480]: time="2025-01-30T13:25:18.772713083Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4186-1-0-7-1c3f91851a,Uid:b6d55f625e59a1a53b13ebe868fe7070,Namespace:kube-system,Attempt:0,}" Jan 30 13:25:18.779364 containerd[1480]: time="2025-01-30T13:25:18.779288424Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4186-1-0-7-1c3f91851a,Uid:3f9428054fca67d8e7533eae37edcff6,Namespace:kube-system,Attempt:0,}" Jan 30 13:25:18.783829 containerd[1480]: time="2025-01-30T13:25:18.783277687Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4186-1-0-7-1c3f91851a,Uid:91cdcc4982a1538f03f6f54cd7fac606,Namespace:kube-system,Attempt:0,}" Jan 30 13:25:18.930747 kubelet[2401]: E0130 13:25:18.930680 2401 controller.go:145] "Failed to ensure 
lease exists, will retry" err="Get \"https://5.75.240.180:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4186-1-0-7-1c3f91851a?timeout=10s\": dial tcp 5.75.240.180:6443: connect: connection refused" interval="800ms" Jan 30 13:25:19.115547 kubelet[2401]: I0130 13:25:19.115143 2401 kubelet_node_status.go:72] "Attempting to register node" node="ci-4186-1-0-7-1c3f91851a" Jan 30 13:25:19.115736 kubelet[2401]: E0130 13:25:19.115583 2401 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://5.75.240.180:6443/api/v1/nodes\": dial tcp 5.75.240.180:6443: connect: connection refused" node="ci-4186-1-0-7-1c3f91851a" Jan 30 13:25:19.173303 kubelet[2401]: W0130 13:25:19.173244 2401 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://5.75.240.180:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 5.75.240.180:6443: connect: connection refused Jan 30 13:25:19.173677 kubelet[2401]: E0130 13:25:19.173607 2401 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://5.75.240.180:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 5.75.240.180:6443: connect: connection refused" logger="UnhandledError" Jan 30 13:25:19.342104 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1393904352.mount: Deactivated successfully. 
Jan 30 13:25:19.349503 containerd[1480]: time="2025-01-30T13:25:19.349366607Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 13:25:19.350896 containerd[1480]: time="2025-01-30T13:25:19.350837314Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269193" Jan 30 13:25:19.353940 containerd[1480]: time="2025-01-30T13:25:19.353821089Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 13:25:19.355700 containerd[1480]: time="2025-01-30T13:25:19.355603090Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 13:25:19.358228 containerd[1480]: time="2025-01-30T13:25:19.358036481Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 30 13:25:19.364305 containerd[1480]: time="2025-01-30T13:25:19.364131678Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 13:25:19.366689 containerd[1480]: time="2025-01-30T13:25:19.366391060Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 13:25:19.369598 containerd[1480]: time="2025-01-30T13:25:19.368885574Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id 
\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 589.510425ms" Jan 30 13:25:19.369598 containerd[1480]: time="2025-01-30T13:25:19.369299112Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 30 13:25:19.372268 containerd[1480]: time="2025-01-30T13:25:19.372011876Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 599.180707ms" Jan 30 13:25:19.394646 containerd[1480]: time="2025-01-30T13:25:19.394293248Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 610.905516ms" Jan 30 13:25:19.504711 containerd[1480]: time="2025-01-30T13:25:19.504318325Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:25:19.504711 containerd[1480]: time="2025-01-30T13:25:19.504384208Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:25:19.504711 containerd[1480]: time="2025-01-30T13:25:19.504400689Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:25:19.504711 containerd[1480]: time="2025-01-30T13:25:19.504238201Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:25:19.504711 containerd[1480]: time="2025-01-30T13:25:19.504327005Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:25:19.504711 containerd[1480]: time="2025-01-30T13:25:19.504338646Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:25:19.504711 containerd[1480]: time="2025-01-30T13:25:19.504475932Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:25:19.506786 containerd[1480]: time="2025-01-30T13:25:19.505797752Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:25:19.506786 containerd[1480]: time="2025-01-30T13:25:19.505878516Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:25:19.506786 containerd[1480]: time="2025-01-30T13:25:19.505890036Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:25:19.506786 containerd[1480]: time="2025-01-30T13:25:19.505969200Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:25:19.507774 containerd[1480]: time="2025-01-30T13:25:19.506762356Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:25:19.531669 systemd[1]: Started cri-containerd-5ec35efcbd298b5e2afe1a493503c0380cfc1466b8f36a732d8b01cf9987242f.scope - libcontainer container 5ec35efcbd298b5e2afe1a493503c0380cfc1466b8f36a732d8b01cf9987242f. Jan 30 13:25:19.541157 systemd[1]: Started cri-containerd-df179ed28bef7b72488520ab155bdca624523b4d6973f03a7c5b55cc1a50aafd.scope - libcontainer container df179ed28bef7b72488520ab155bdca624523b4d6973f03a7c5b55cc1a50aafd. Jan 30 13:25:19.546244 systemd[1]: Started cri-containerd-63f028996750f794732c15a1e08fd138310db41de68d73a2843aab1021457115.scope - libcontainer container 63f028996750f794732c15a1e08fd138310db41de68d73a2843aab1021457115. Jan 30 13:25:19.598527 containerd[1480]: time="2025-01-30T13:25:19.598377957Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4186-1-0-7-1c3f91851a,Uid:b6d55f625e59a1a53b13ebe868fe7070,Namespace:kube-system,Attempt:0,} returns sandbox id \"df179ed28bef7b72488520ab155bdca624523b4d6973f03a7c5b55cc1a50aafd\"" Jan 30 13:25:19.601350 containerd[1480]: time="2025-01-30T13:25:19.601211406Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4186-1-0-7-1c3f91851a,Uid:3f9428054fca67d8e7533eae37edcff6,Namespace:kube-system,Attempt:0,} returns sandbox id \"5ec35efcbd298b5e2afe1a493503c0380cfc1466b8f36a732d8b01cf9987242f\"" Jan 30 13:25:19.603481 kubelet[2401]: W0130 13:25:19.603366 2401 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://5.75.240.180:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 5.75.240.180:6443: connect: connection refused Jan 30 13:25:19.603593 kubelet[2401]: E0130 13:25:19.603505 2401 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get 
\"https://5.75.240.180:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 5.75.240.180:6443: connect: connection refused" logger="UnhandledError" Jan 30 13:25:19.606191 containerd[1480]: time="2025-01-30T13:25:19.606145550Z" level=info msg="CreateContainer within sandbox \"df179ed28bef7b72488520ab155bdca624523b4d6973f03a7c5b55cc1a50aafd\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 30 13:25:19.607790 containerd[1480]: time="2025-01-30T13:25:19.607748543Z" level=info msg="CreateContainer within sandbox \"5ec35efcbd298b5e2afe1a493503c0380cfc1466b8f36a732d8b01cf9987242f\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 30 13:25:19.610932 containerd[1480]: time="2025-01-30T13:25:19.610697877Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4186-1-0-7-1c3f91851a,Uid:91cdcc4982a1538f03f6f54cd7fac606,Namespace:kube-system,Attempt:0,} returns sandbox id \"63f028996750f794732c15a1e08fd138310db41de68d73a2843aab1021457115\"" Jan 30 13:25:19.614750 containerd[1480]: time="2025-01-30T13:25:19.614710499Z" level=info msg="CreateContainer within sandbox \"63f028996750f794732c15a1e08fd138310db41de68d73a2843aab1021457115\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 30 13:25:19.631776 kubelet[2401]: W0130 13:25:19.631639 2401 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://5.75.240.180:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4186-1-0-7-1c3f91851a&limit=500&resourceVersion=0": dial tcp 5.75.240.180:6443: connect: connection refused Jan 30 13:25:19.632050 kubelet[2401]: E0130 13:25:19.631930 2401 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://5.75.240.180:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4186-1-0-7-1c3f91851a&limit=500&resourceVersion=0\": dial tcp 
5.75.240.180:6443: connect: connection refused" logger="UnhandledError" Jan 30 13:25:19.633204 containerd[1480]: time="2025-01-30T13:25:19.633155337Z" level=info msg="CreateContainer within sandbox \"5ec35efcbd298b5e2afe1a493503c0380cfc1466b8f36a732d8b01cf9987242f\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"7de705a672116d5a54119d799139df313e7a49c05b1ac041200d69b23dde2621\"" Jan 30 13:25:19.634077 containerd[1480]: time="2025-01-30T13:25:19.633867809Z" level=info msg="StartContainer for \"7de705a672116d5a54119d799139df313e7a49c05b1ac041200d69b23dde2621\"" Jan 30 13:25:19.638627 containerd[1480]: time="2025-01-30T13:25:19.638577983Z" level=info msg="CreateContainer within sandbox \"df179ed28bef7b72488520ab155bdca624523b4d6973f03a7c5b55cc1a50aafd\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"a62f3bce2d6008db7b257376390f00895f8b6ac710ed66270ce9a0ce504f678d\"" Jan 30 13:25:19.640167 containerd[1480]: time="2025-01-30T13:25:19.639083006Z" level=info msg="StartContainer for \"a62f3bce2d6008db7b257376390f00895f8b6ac710ed66270ce9a0ce504f678d\"" Jan 30 13:25:19.648237 containerd[1480]: time="2025-01-30T13:25:19.648163778Z" level=info msg="CreateContainer within sandbox \"63f028996750f794732c15a1e08fd138310db41de68d73a2843aab1021457115\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"ed69a0b700ff10548ff456234830187a1e8bcb35fd025c5bad363a56442c5db4\"" Jan 30 13:25:19.648824 containerd[1480]: time="2025-01-30T13:25:19.648787247Z" level=info msg="StartContainer for \"ed69a0b700ff10548ff456234830187a1e8bcb35fd025c5bad363a56442c5db4\"" Jan 30 13:25:19.670627 systemd[1]: Started cri-containerd-7de705a672116d5a54119d799139df313e7a49c05b1ac041200d69b23dde2621.scope - libcontainer container 7de705a672116d5a54119d799139df313e7a49c05b1ac041200d69b23dde2621. 
Jan 30 13:25:19.688650 systemd[1]: Started cri-containerd-a62f3bce2d6008db7b257376390f00895f8b6ac710ed66270ce9a0ce504f678d.scope - libcontainer container a62f3bce2d6008db7b257376390f00895f8b6ac710ed66270ce9a0ce504f678d. Jan 30 13:25:19.706321 systemd[1]: Started cri-containerd-ed69a0b700ff10548ff456234830187a1e8bcb35fd025c5bad363a56442c5db4.scope - libcontainer container ed69a0b700ff10548ff456234830187a1e8bcb35fd025c5bad363a56442c5db4. Jan 30 13:25:19.729809 containerd[1480]: time="2025-01-30T13:25:19.729760604Z" level=info msg="StartContainer for \"7de705a672116d5a54119d799139df313e7a49c05b1ac041200d69b23dde2621\" returns successfully" Jan 30 13:25:19.731929 kubelet[2401]: E0130 13:25:19.731855 2401 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://5.75.240.180:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4186-1-0-7-1c3f91851a?timeout=10s\": dial tcp 5.75.240.180:6443: connect: connection refused" interval="1.6s" Jan 30 13:25:19.744854 containerd[1480]: time="2025-01-30T13:25:19.744805048Z" level=info msg="StartContainer for \"a62f3bce2d6008db7b257376390f00895f8b6ac710ed66270ce9a0ce504f678d\" returns successfully" Jan 30 13:25:19.774321 containerd[1480]: time="2025-01-30T13:25:19.774249265Z" level=info msg="StartContainer for \"ed69a0b700ff10548ff456234830187a1e8bcb35fd025c5bad363a56442c5db4\" returns successfully" Jan 30 13:25:19.919539 kubelet[2401]: I0130 13:25:19.919132 2401 kubelet_node_status.go:72] "Attempting to register node" node="ci-4186-1-0-7-1c3f91851a" Jan 30 13:25:22.738641 kubelet[2401]: E0130 13:25:22.738599 2401 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4186-1-0-7-1c3f91851a\" not found" node="ci-4186-1-0-7-1c3f91851a" Jan 30 13:25:22.808260 kubelet[2401]: I0130 13:25:22.807189 2401 kubelet_node_status.go:75] "Successfully registered node" node="ci-4186-1-0-7-1c3f91851a" Jan 30 13:25:22.869172 kubelet[2401]: E0130 13:25:22.868757 
2401 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-4186-1-0-7-1c3f91851a.181f7b453205c13f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4186-1-0-7-1c3f91851a,UID:ci-4186-1-0-7-1c3f91851a,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4186-1-0-7-1c3f91851a,},FirstTimestamp:2025-01-30 13:25:18.310498623 +0000 UTC m=+0.475016276,LastTimestamp:2025-01-30 13:25:18.310498623 +0000 UTC m=+0.475016276,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4186-1-0-7-1c3f91851a,}" Jan 30 13:25:22.927952 kubelet[2401]: E0130 13:25:22.927692 2401 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-4186-1-0-7-1c3f91851a.181f7b4533134010 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4186-1-0-7-1c3f91851a,UID:ci-4186-1-0-7-1c3f91851a,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:ci-4186-1-0-7-1c3f91851a,},FirstTimestamp:2025-01-30 13:25:18.328160272 +0000 UTC m=+0.492677925,LastTimestamp:2025-01-30 13:25:18.328160272 +0000 UTC m=+0.492677925,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4186-1-0-7-1c3f91851a,}" Jan 30 13:25:23.313227 kubelet[2401]: I0130 13:25:23.312905 2401 apiserver.go:52] "Watching apiserver" Jan 30 13:25:23.328840 kubelet[2401]: I0130 13:25:23.328749 2401 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 30 13:25:24.790457 systemd[1]: Reloading 
requested from client PID 2679 ('systemctl') (unit session-7.scope)... Jan 30 13:25:24.790478 systemd[1]: Reloading... Jan 30 13:25:24.891452 zram_generator::config[2725]: No configuration found. Jan 30 13:25:24.991942 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 30 13:25:25.079060 systemd[1]: Reloading finished in 288 ms. Jan 30 13:25:25.115151 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:25:25.132307 systemd[1]: kubelet.service: Deactivated successfully. Jan 30 13:25:25.132864 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:25:25.141908 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:25:25.261564 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:25:25.273146 (kubelet)[2764]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 30 13:25:25.317300 kubelet[2764]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 30 13:25:25.317300 kubelet[2764]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 30 13:25:25.317300 kubelet[2764]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 30 13:25:25.317300 kubelet[2764]: I0130 13:25:25.317279 2764 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 30 13:25:25.328209 kubelet[2764]: I0130 13:25:25.327681 2764 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Jan 30 13:25:25.328209 kubelet[2764]: I0130 13:25:25.327713 2764 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 30 13:25:25.328209 kubelet[2764]: I0130 13:25:25.327943 2764 server.go:929] "Client rotation is on, will bootstrap in background" Jan 30 13:25:25.330700 kubelet[2764]: I0130 13:25:25.329450 2764 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 30 13:25:25.333384 kubelet[2764]: I0130 13:25:25.332841 2764 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 30 13:25:25.336038 kubelet[2764]: E0130 13:25:25.336002 2764 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 30 13:25:25.336038 kubelet[2764]: I0130 13:25:25.336034 2764 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 30 13:25:25.338442 kubelet[2764]: I0130 13:25:25.338381 2764 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 30 13:25:25.338568 kubelet[2764]: I0130 13:25:25.338515 2764 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jan 30 13:25:25.338784 kubelet[2764]: I0130 13:25:25.338637 2764 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 30 13:25:25.338848 kubelet[2764]: I0130 13:25:25.338667 2764 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4186-1-0-7-1c3f91851a","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","Topolog
yManagerPolicyOptions":null,"CgroupVersion":2} Jan 30 13:25:25.338926 kubelet[2764]: I0130 13:25:25.338852 2764 topology_manager.go:138] "Creating topology manager with none policy" Jan 30 13:25:25.338926 kubelet[2764]: I0130 13:25:25.338862 2764 container_manager_linux.go:300] "Creating device plugin manager" Jan 30 13:25:25.338926 kubelet[2764]: I0130 13:25:25.338893 2764 state_mem.go:36] "Initialized new in-memory state store" Jan 30 13:25:25.339337 kubelet[2764]: I0130 13:25:25.339016 2764 kubelet.go:408] "Attempting to sync node with API server" Jan 30 13:25:25.339337 kubelet[2764]: I0130 13:25:25.339040 2764 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 30 13:25:25.339337 kubelet[2764]: I0130 13:25:25.339066 2764 kubelet.go:314] "Adding apiserver pod source" Jan 30 13:25:25.339337 kubelet[2764]: I0130 13:25:25.339082 2764 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 30 13:25:25.341505 kubelet[2764]: I0130 13:25:25.340908 2764 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Jan 30 13:25:25.341505 kubelet[2764]: I0130 13:25:25.341377 2764 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 30 13:25:25.341914 kubelet[2764]: I0130 13:25:25.341891 2764 server.go:1269] "Started kubelet" Jan 30 13:25:25.348375 kubelet[2764]: I0130 13:25:25.345826 2764 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 30 13:25:25.351583 kubelet[2764]: I0130 13:25:25.351503 2764 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 30 13:25:25.352080 kubelet[2764]: I0130 13:25:25.352065 2764 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 30 13:25:25.353518 kubelet[2764]: I0130 13:25:25.353487 2764 server.go:460] "Adding debug handlers to kubelet server" Jan 30 
13:25:25.355849 kubelet[2764]: I0130 13:25:25.355834 2764 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 30 13:25:25.359463 kubelet[2764]: I0130 13:25:25.359435 2764 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 30 13:25:25.362796 kubelet[2764]: I0130 13:25:25.362769 2764 volume_manager.go:289] "Starting Kubelet Volume Manager" Jan 30 13:25:25.363162 kubelet[2764]: E0130 13:25:25.363139 2764 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4186-1-0-7-1c3f91851a\" not found" Jan 30 13:25:25.376740 kubelet[2764]: I0130 13:25:25.376710 2764 factory.go:221] Registration of the systemd container factory successfully Jan 30 13:25:25.376966 kubelet[2764]: I0130 13:25:25.376945 2764 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 30 13:25:25.377836 kubelet[2764]: I0130 13:25:25.377816 2764 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 30 13:25:25.378062 kubelet[2764]: I0130 13:25:25.378051 2764 reconciler.go:26] "Reconciler: start to sync state" Jan 30 13:25:25.388187 kubelet[2764]: I0130 13:25:25.388155 2764 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 30 13:25:25.389751 kubelet[2764]: I0130 13:25:25.389583 2764 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 30 13:25:25.389898 kubelet[2764]: I0130 13:25:25.389883 2764 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 30 13:25:25.389978 kubelet[2764]: I0130 13:25:25.389969 2764 kubelet.go:2321] "Starting kubelet main sync loop" Jan 30 13:25:25.390116 kubelet[2764]: E0130 13:25:25.390086 2764 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 30 13:25:25.393683 kubelet[2764]: I0130 13:25:25.393652 2764 factory.go:221] Registration of the containerd container factory successfully Jan 30 13:25:25.466897 kubelet[2764]: I0130 13:25:25.465295 2764 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 30 13:25:25.466897 kubelet[2764]: I0130 13:25:25.465339 2764 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 30 13:25:25.466897 kubelet[2764]: I0130 13:25:25.465362 2764 state_mem.go:36] "Initialized new in-memory state store" Jan 30 13:25:25.466897 kubelet[2764]: I0130 13:25:25.465948 2764 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 30 13:25:25.466897 kubelet[2764]: I0130 13:25:25.465980 2764 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 30 13:25:25.466897 kubelet[2764]: I0130 13:25:25.466009 2764 policy_none.go:49] "None policy: Start" Jan 30 13:25:25.466897 kubelet[2764]: I0130 13:25:25.466751 2764 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 30 13:25:25.466897 kubelet[2764]: I0130 13:25:25.466772 2764 state_mem.go:35] "Initializing new in-memory state store" Jan 30 13:25:25.467248 kubelet[2764]: I0130 13:25:25.467001 2764 state_mem.go:75] "Updated machine memory state" Jan 30 13:25:25.473572 kubelet[2764]: I0130 13:25:25.473535 2764 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 30 13:25:25.474083 kubelet[2764]: I0130 13:25:25.473774 2764 eviction_manager.go:189] 
"Eviction manager: starting control loop" Jan 30 13:25:25.474083 kubelet[2764]: I0130 13:25:25.473793 2764 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 30 13:25:25.474703 kubelet[2764]: I0130 13:25:25.474466 2764 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 30 13:25:25.580213 kubelet[2764]: I0130 13:25:25.579916 2764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b6d55f625e59a1a53b13ebe868fe7070-ca-certs\") pod \"kube-apiserver-ci-4186-1-0-7-1c3f91851a\" (UID: \"b6d55f625e59a1a53b13ebe868fe7070\") " pod="kube-system/kube-apiserver-ci-4186-1-0-7-1c3f91851a" Jan 30 13:25:25.580213 kubelet[2764]: I0130 13:25:25.579959 2764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b6d55f625e59a1a53b13ebe868fe7070-k8s-certs\") pod \"kube-apiserver-ci-4186-1-0-7-1c3f91851a\" (UID: \"b6d55f625e59a1a53b13ebe868fe7070\") " pod="kube-system/kube-apiserver-ci-4186-1-0-7-1c3f91851a" Jan 30 13:25:25.580213 kubelet[2764]: I0130 13:25:25.579980 2764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/91cdcc4982a1538f03f6f54cd7fac606-ca-certs\") pod \"kube-controller-manager-ci-4186-1-0-7-1c3f91851a\" (UID: \"91cdcc4982a1538f03f6f54cd7fac606\") " pod="kube-system/kube-controller-manager-ci-4186-1-0-7-1c3f91851a" Jan 30 13:25:25.580213 kubelet[2764]: I0130 13:25:25.580006 2764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/91cdcc4982a1538f03f6f54cd7fac606-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4186-1-0-7-1c3f91851a\" (UID: \"91cdcc4982a1538f03f6f54cd7fac606\") " 
pod="kube-system/kube-controller-manager-ci-4186-1-0-7-1c3f91851a" Jan 30 13:25:25.580213 kubelet[2764]: I0130 13:25:25.580028 2764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3f9428054fca67d8e7533eae37edcff6-kubeconfig\") pod \"kube-scheduler-ci-4186-1-0-7-1c3f91851a\" (UID: \"3f9428054fca67d8e7533eae37edcff6\") " pod="kube-system/kube-scheduler-ci-4186-1-0-7-1c3f91851a" Jan 30 13:25:25.580958 kubelet[2764]: I0130 13:25:25.580050 2764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b6d55f625e59a1a53b13ebe868fe7070-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4186-1-0-7-1c3f91851a\" (UID: \"b6d55f625e59a1a53b13ebe868fe7070\") " pod="kube-system/kube-apiserver-ci-4186-1-0-7-1c3f91851a" Jan 30 13:25:25.580958 kubelet[2764]: I0130 13:25:25.580070 2764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/91cdcc4982a1538f03f6f54cd7fac606-flexvolume-dir\") pod \"kube-controller-manager-ci-4186-1-0-7-1c3f91851a\" (UID: \"91cdcc4982a1538f03f6f54cd7fac606\") " pod="kube-system/kube-controller-manager-ci-4186-1-0-7-1c3f91851a" Jan 30 13:25:25.580958 kubelet[2764]: I0130 13:25:25.580089 2764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/91cdcc4982a1538f03f6f54cd7fac606-k8s-certs\") pod \"kube-controller-manager-ci-4186-1-0-7-1c3f91851a\" (UID: \"91cdcc4982a1538f03f6f54cd7fac606\") " pod="kube-system/kube-controller-manager-ci-4186-1-0-7-1c3f91851a" Jan 30 13:25:25.580958 kubelet[2764]: I0130 13:25:25.580108 2764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/91cdcc4982a1538f03f6f54cd7fac606-kubeconfig\") pod \"kube-controller-manager-ci-4186-1-0-7-1c3f91851a\" (UID: \"91cdcc4982a1538f03f6f54cd7fac606\") " pod="kube-system/kube-controller-manager-ci-4186-1-0-7-1c3f91851a" Jan 30 13:25:25.584598 kubelet[2764]: I0130 13:25:25.584577 2764 kubelet_node_status.go:72] "Attempting to register node" node="ci-4186-1-0-7-1c3f91851a" Jan 30 13:25:25.596165 kubelet[2764]: I0130 13:25:25.596120 2764 kubelet_node_status.go:111] "Node was previously registered" node="ci-4186-1-0-7-1c3f91851a" Jan 30 13:25:25.596821 kubelet[2764]: I0130 13:25:25.596253 2764 kubelet_node_status.go:75] "Successfully registered node" node="ci-4186-1-0-7-1c3f91851a" Jan 30 13:25:25.793635 sudo[2797]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jan 30 13:25:25.794326 sudo[2797]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Jan 30 13:25:26.264897 sudo[2797]: pam_unix(sudo:session): session closed for user root Jan 30 13:25:26.351344 kubelet[2764]: I0130 13:25:26.351289 2764 apiserver.go:52] "Watching apiserver" Jan 30 13:25:26.378559 kubelet[2764]: I0130 13:25:26.378522 2764 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 30 13:25:26.436058 kubelet[2764]: I0130 13:25:26.435361 2764 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4186-1-0-7-1c3f91851a" podStartSLOduration=1.43534126 podStartE2EDuration="1.43534126s" podCreationTimestamp="2025-01-30 13:25:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 13:25:26.416176157 +0000 UTC m=+1.139243261" watchObservedRunningTime="2025-01-30 13:25:26.43534126 +0000 UTC m=+1.158408364" Jan 30 13:25:26.450836 kubelet[2764]: I0130 13:25:26.450759 2764 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="kube-system/kube-apiserver-ci-4186-1-0-7-1c3f91851a" podStartSLOduration=1.450737801 podStartE2EDuration="1.450737801s" podCreationTimestamp="2025-01-30 13:25:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 13:25:26.436810483 +0000 UTC m=+1.159877587" watchObservedRunningTime="2025-01-30 13:25:26.450737801 +0000 UTC m=+1.173804905" Jan 30 13:25:26.474410 kubelet[2764]: I0130 13:25:26.473804 2764 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4186-1-0-7-1c3f91851a" podStartSLOduration=1.47378495 podStartE2EDuration="1.47378495s" podCreationTimestamp="2025-01-30 13:25:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 13:25:26.452055377 +0000 UTC m=+1.175122481" watchObservedRunningTime="2025-01-30 13:25:26.47378495 +0000 UTC m=+1.196852014" Jan 30 13:25:28.348284 sudo[1878]: pam_unix(sudo:session): session closed for user root Jan 30 13:25:28.507091 sshd[1877]: Connection closed by 139.178.68.195 port 50730 Jan 30 13:25:28.507924 sshd-session[1875]: pam_unix(sshd:session): session closed for user core Jan 30 13:25:28.513403 systemd[1]: sshd@6-5.75.240.180:22-139.178.68.195:50730.service: Deactivated successfully. Jan 30 13:25:28.517161 systemd[1]: session-7.scope: Deactivated successfully. Jan 30 13:25:28.517975 systemd[1]: session-7.scope: Consumed 7.592s CPU time, 154.4M memory peak, 0B memory swap peak. Jan 30 13:25:28.520089 systemd-logind[1463]: Session 7 logged out. Waiting for processes to exit. Jan 30 13:25:28.522863 systemd-logind[1463]: Removed session 7. 
Jan 30 13:25:29.999707 kubelet[2764]: I0130 13:25:29.999630 2764 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 30 13:25:30.000875 containerd[1480]: time="2025-01-30T13:25:30.000633998Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 30 13:25:30.001259 kubelet[2764]: I0130 13:25:30.001073 2764 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 30 13:25:31.017209 kubelet[2764]: I0130 13:25:31.017045 2764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/dfef465b-2078-4232-b85b-435315000cde-kube-proxy\") pod \"kube-proxy-4dm5h\" (UID: \"dfef465b-2078-4232-b85b-435315000cde\") " pod="kube-system/kube-proxy-4dm5h" Jan 30 13:25:31.017209 kubelet[2764]: I0130 13:25:31.017087 2764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/dfef465b-2078-4232-b85b-435315000cde-xtables-lock\") pod \"kube-proxy-4dm5h\" (UID: \"dfef465b-2078-4232-b85b-435315000cde\") " pod="kube-system/kube-proxy-4dm5h" Jan 30 13:25:31.017209 kubelet[2764]: I0130 13:25:31.017107 2764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/dfef465b-2078-4232-b85b-435315000cde-lib-modules\") pod \"kube-proxy-4dm5h\" (UID: \"dfef465b-2078-4232-b85b-435315000cde\") " pod="kube-system/kube-proxy-4dm5h" Jan 30 13:25:31.017209 kubelet[2764]: I0130 13:25:31.017125 2764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-djjcb\" (UniqueName: \"kubernetes.io/projected/dfef465b-2078-4232-b85b-435315000cde-kube-api-access-djjcb\") pod \"kube-proxy-4dm5h\" (UID: \"dfef465b-2078-4232-b85b-435315000cde\") 
" pod="kube-system/kube-proxy-4dm5h" Jan 30 13:25:31.017487 systemd[1]: Created slice kubepods-besteffort-poddfef465b_2078_4232_b85b_435315000cde.slice - libcontainer container kubepods-besteffort-poddfef465b_2078_4232_b85b_435315000cde.slice. Jan 30 13:25:31.042344 systemd[1]: Created slice kubepods-burstable-pod2234b10f_b80c_458e_afef_276177edd3c4.slice - libcontainer container kubepods-burstable-pod2234b10f_b80c_458e_afef_276177edd3c4.slice. Jan 30 13:25:31.119986 kubelet[2764]: I0130 13:25:31.119218 2764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5kzrb\" (UniqueName: \"kubernetes.io/projected/2234b10f-b80c-458e-afef-276177edd3c4-kube-api-access-5kzrb\") pod \"cilium-5lhkb\" (UID: \"2234b10f-b80c-458e-afef-276177edd3c4\") " pod="kube-system/cilium-5lhkb" Jan 30 13:25:31.119986 kubelet[2764]: I0130 13:25:31.119271 2764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/2234b10f-b80c-458e-afef-276177edd3c4-bpf-maps\") pod \"cilium-5lhkb\" (UID: \"2234b10f-b80c-458e-afef-276177edd3c4\") " pod="kube-system/cilium-5lhkb" Jan 30 13:25:31.119986 kubelet[2764]: I0130 13:25:31.119299 2764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2234b10f-b80c-458e-afef-276177edd3c4-cilium-config-path\") pod \"cilium-5lhkb\" (UID: \"2234b10f-b80c-458e-afef-276177edd3c4\") " pod="kube-system/cilium-5lhkb" Jan 30 13:25:31.119986 kubelet[2764]: I0130 13:25:31.119324 2764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/2234b10f-b80c-458e-afef-276177edd3c4-cilium-cgroup\") pod \"cilium-5lhkb\" (UID: \"2234b10f-b80c-458e-afef-276177edd3c4\") " pod="kube-system/cilium-5lhkb" Jan 30 13:25:31.119986 kubelet[2764]: 
I0130 13:25:31.119338 2764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/2234b10f-b80c-458e-afef-276177edd3c4-host-proc-sys-kernel\") pod \"cilium-5lhkb\" (UID: \"2234b10f-b80c-458e-afef-276177edd3c4\") " pod="kube-system/cilium-5lhkb" Jan 30 13:25:31.119986 kubelet[2764]: I0130 13:25:31.119351 2764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/2234b10f-b80c-458e-afef-276177edd3c4-cni-path\") pod \"cilium-5lhkb\" (UID: \"2234b10f-b80c-458e-afef-276177edd3c4\") " pod="kube-system/cilium-5lhkb" Jan 30 13:25:31.120239 kubelet[2764]: I0130 13:25:31.119365 2764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/2234b10f-b80c-458e-afef-276177edd3c4-clustermesh-secrets\") pod \"cilium-5lhkb\" (UID: \"2234b10f-b80c-458e-afef-276177edd3c4\") " pod="kube-system/cilium-5lhkb" Jan 30 13:25:31.120239 kubelet[2764]: I0130 13:25:31.119389 2764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/2234b10f-b80c-458e-afef-276177edd3c4-hostproc\") pod \"cilium-5lhkb\" (UID: \"2234b10f-b80c-458e-afef-276177edd3c4\") " pod="kube-system/cilium-5lhkb" Jan 30 13:25:31.120239 kubelet[2764]: I0130 13:25:31.119425 2764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2234b10f-b80c-458e-afef-276177edd3c4-lib-modules\") pod \"cilium-5lhkb\" (UID: \"2234b10f-b80c-458e-afef-276177edd3c4\") " pod="kube-system/cilium-5lhkb" Jan 30 13:25:31.120239 kubelet[2764]: I0130 13:25:31.119443 2764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"cilium-run\" (UniqueName: \"kubernetes.io/host-path/2234b10f-b80c-458e-afef-276177edd3c4-cilium-run\") pod \"cilium-5lhkb\" (UID: \"2234b10f-b80c-458e-afef-276177edd3c4\") " pod="kube-system/cilium-5lhkb" Jan 30 13:25:31.120239 kubelet[2764]: I0130 13:25:31.119458 2764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2234b10f-b80c-458e-afef-276177edd3c4-xtables-lock\") pod \"cilium-5lhkb\" (UID: \"2234b10f-b80c-458e-afef-276177edd3c4\") " pod="kube-system/cilium-5lhkb" Jan 30 13:25:31.120239 kubelet[2764]: I0130 13:25:31.119471 2764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2234b10f-b80c-458e-afef-276177edd3c4-etc-cni-netd\") pod \"cilium-5lhkb\" (UID: \"2234b10f-b80c-458e-afef-276177edd3c4\") " pod="kube-system/cilium-5lhkb" Jan 30 13:25:31.120361 kubelet[2764]: I0130 13:25:31.119486 2764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/2234b10f-b80c-458e-afef-276177edd3c4-host-proc-sys-net\") pod \"cilium-5lhkb\" (UID: \"2234b10f-b80c-458e-afef-276177edd3c4\") " pod="kube-system/cilium-5lhkb" Jan 30 13:25:31.120361 kubelet[2764]: I0130 13:25:31.119502 2764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/2234b10f-b80c-458e-afef-276177edd3c4-hubble-tls\") pod \"cilium-5lhkb\" (UID: \"2234b10f-b80c-458e-afef-276177edd3c4\") " pod="kube-system/cilium-5lhkb" Jan 30 13:25:31.283273 systemd[1]: Created slice kubepods-besteffort-pod4a95f356_dc60_4731_bce8_a3cb503f68e3.slice - libcontainer container kubepods-besteffort-pod4a95f356_dc60_4731_bce8_a3cb503f68e3.slice. 
Jan 30 13:25:31.320656 kubelet[2764]: I0130 13:25:31.320576 2764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4a95f356-dc60-4731-bce8-a3cb503f68e3-cilium-config-path\") pod \"cilium-operator-5d85765b45-swtxx\" (UID: \"4a95f356-dc60-4731-bce8-a3cb503f68e3\") " pod="kube-system/cilium-operator-5d85765b45-swtxx" Jan 30 13:25:31.320656 kubelet[2764]: I0130 13:25:31.320666 2764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r2hll\" (UniqueName: \"kubernetes.io/projected/4a95f356-dc60-4731-bce8-a3cb503f68e3-kube-api-access-r2hll\") pod \"cilium-operator-5d85765b45-swtxx\" (UID: \"4a95f356-dc60-4731-bce8-a3cb503f68e3\") " pod="kube-system/cilium-operator-5d85765b45-swtxx" Jan 30 13:25:31.328220 containerd[1480]: time="2025-01-30T13:25:31.327684774Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-4dm5h,Uid:dfef465b-2078-4232-b85b-435315000cde,Namespace:kube-system,Attempt:0,}" Jan 30 13:25:31.350476 containerd[1480]: time="2025-01-30T13:25:31.350140985Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-5lhkb,Uid:2234b10f-b80c-458e-afef-276177edd3c4,Namespace:kube-system,Attempt:0,}" Jan 30 13:25:31.358216 containerd[1480]: time="2025-01-30T13:25:31.358051473Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:25:31.358216 containerd[1480]: time="2025-01-30T13:25:31.358144077Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:25:31.358888 containerd[1480]: time="2025-01-30T13:25:31.358633377Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:25:31.360118 containerd[1480]: time="2025-01-30T13:25:31.359892149Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:25:31.383302 containerd[1480]: time="2025-01-30T13:25:31.383194276Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:25:31.383584 containerd[1480]: time="2025-01-30T13:25:31.383307000Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:25:31.383584 containerd[1480]: time="2025-01-30T13:25:31.383339922Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:25:31.383584 containerd[1480]: time="2025-01-30T13:25:31.383541690Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:25:31.385539 systemd[1]: Started cri-containerd-b71d9fa51d504fd6bc1412b517b8469e2fbf386b8daba42d1a59139f6d258a85.scope - libcontainer container b71d9fa51d504fd6bc1412b517b8469e2fbf386b8daba42d1a59139f6d258a85. Jan 30 13:25:31.411006 systemd[1]: Started cri-containerd-e65a68fcfe7e579d8f938022aed729e52c1fbb3dc01ccb190e521918416caf78.scope - libcontainer container e65a68fcfe7e579d8f938022aed729e52c1fbb3dc01ccb190e521918416caf78. 
Jan 30 13:25:31.429587 containerd[1480]: time="2025-01-30T13:25:31.428514075Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-4dm5h,Uid:dfef465b-2078-4232-b85b-435315000cde,Namespace:kube-system,Attempt:0,} returns sandbox id \"b71d9fa51d504fd6bc1412b517b8469e2fbf386b8daba42d1a59139f6d258a85\"" Jan 30 13:25:31.436263 containerd[1480]: time="2025-01-30T13:25:31.436214674Z" level=info msg="CreateContainer within sandbox \"b71d9fa51d504fd6bc1412b517b8469e2fbf386b8daba42d1a59139f6d258a85\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 30 13:25:31.463852 containerd[1480]: time="2025-01-30T13:25:31.463651251Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-5lhkb,Uid:2234b10f-b80c-458e-afef-276177edd3c4,Namespace:kube-system,Attempt:0,} returns sandbox id \"e65a68fcfe7e579d8f938022aed729e52c1fbb3dc01ccb190e521918416caf78\"" Jan 30 13:25:31.475059 containerd[1480]: time="2025-01-30T13:25:31.474912078Z" level=info msg="CreateContainer within sandbox \"b71d9fa51d504fd6bc1412b517b8469e2fbf386b8daba42d1a59139f6d258a85\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"db85ee372e74eadb4227d15cedb9c07bdc1c50b9bc58883c6601eb501f735bb4\"" Jan 30 13:25:31.475367 containerd[1480]: time="2025-01-30T13:25:31.475236532Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jan 30 13:25:31.476248 containerd[1480]: time="2025-01-30T13:25:31.476201452Z" level=info msg="StartContainer for \"db85ee372e74eadb4227d15cedb9c07bdc1c50b9bc58883c6601eb501f735bb4\"" Jan 30 13:25:31.507654 systemd[1]: Started cri-containerd-db85ee372e74eadb4227d15cedb9c07bdc1c50b9bc58883c6601eb501f735bb4.scope - libcontainer container db85ee372e74eadb4227d15cedb9c07bdc1c50b9bc58883c6601eb501f735bb4. 
Jan 30 13:25:31.545494 containerd[1480]: time="2025-01-30T13:25:31.544345637Z" level=info msg="StartContainer for \"db85ee372e74eadb4227d15cedb9c07bdc1c50b9bc58883c6601eb501f735bb4\" returns successfully" Jan 30 13:25:31.589156 containerd[1480]: time="2025-01-30T13:25:31.588663554Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-swtxx,Uid:4a95f356-dc60-4731-bce8-a3cb503f68e3,Namespace:kube-system,Attempt:0,}" Jan 30 13:25:31.622039 containerd[1480]: time="2025-01-30T13:25:31.621678723Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:25:31.622039 containerd[1480]: time="2025-01-30T13:25:31.621748406Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:25:31.622039 containerd[1480]: time="2025-01-30T13:25:31.621763487Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:25:31.622887 containerd[1480]: time="2025-01-30T13:25:31.622718406Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:25:31.649837 systemd[1]: Started cri-containerd-2e8c8d3c69081040fb49b921727c36c205ce8645de1b522575e794156646b0bf.scope - libcontainer container 2e8c8d3c69081040fb49b921727c36c205ce8645de1b522575e794156646b0bf. 
Jan 30 13:25:31.697098 containerd[1480]: time="2025-01-30T13:25:31.696302217Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-swtxx,Uid:4a95f356-dc60-4731-bce8-a3cb503f68e3,Namespace:kube-system,Attempt:0,} returns sandbox id \"2e8c8d3c69081040fb49b921727c36c205ce8645de1b522575e794156646b0bf\"" Jan 30 13:25:33.322216 kubelet[2764]: I0130 13:25:33.321814 2764 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-4dm5h" podStartSLOduration=3.321793936 podStartE2EDuration="3.321793936s" podCreationTimestamp="2025-01-30 13:25:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 13:25:32.466976285 +0000 UTC m=+7.190043389" watchObservedRunningTime="2025-01-30 13:25:33.321793936 +0000 UTC m=+8.044861040" Jan 30 13:25:36.440461 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1478593094.mount: Deactivated successfully. 
Jan 30 13:25:37.815761 containerd[1480]: time="2025-01-30T13:25:37.815675524Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:25:37.817833 containerd[1480]: time="2025-01-30T13:25:37.817539319Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710" Jan 30 13:25:37.818920 containerd[1480]: time="2025-01-30T13:25:37.818879772Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:25:37.821359 containerd[1480]: time="2025-01-30T13:25:37.821235226Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 6.345692202s" Jan 30 13:25:37.821359 containerd[1480]: time="2025-01-30T13:25:37.821286668Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Jan 30 13:25:37.823679 containerd[1480]: time="2025-01-30T13:25:37.823444635Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jan 30 13:25:37.827114 containerd[1480]: time="2025-01-30T13:25:37.826912933Z" level=info msg="CreateContainer within sandbox \"e65a68fcfe7e579d8f938022aed729e52c1fbb3dc01ccb190e521918416caf78\" for container 
&ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 30 13:25:37.845810 containerd[1480]: time="2025-01-30T13:25:37.845709925Z" level=info msg="CreateContainer within sandbox \"e65a68fcfe7e579d8f938022aed729e52c1fbb3dc01ccb190e521918416caf78\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"db5e2b8bf3c726bda9b2e094080907cd52c311bb479ecc46a0e205f51acf0f40\"" Jan 30 13:25:37.846433 containerd[1480]: time="2025-01-30T13:25:37.846253227Z" level=info msg="StartContainer for \"db5e2b8bf3c726bda9b2e094080907cd52c311bb479ecc46a0e205f51acf0f40\"" Jan 30 13:25:37.875688 systemd[1]: Started cri-containerd-db5e2b8bf3c726bda9b2e094080907cd52c311bb479ecc46a0e205f51acf0f40.scope - libcontainer container db5e2b8bf3c726bda9b2e094080907cd52c311bb479ecc46a0e205f51acf0f40. Jan 30 13:25:37.904903 containerd[1480]: time="2025-01-30T13:25:37.904755085Z" level=info msg="StartContainer for \"db5e2b8bf3c726bda9b2e094080907cd52c311bb479ecc46a0e205f51acf0f40\" returns successfully" Jan 30 13:25:37.923028 systemd[1]: cri-containerd-db5e2b8bf3c726bda9b2e094080907cd52c311bb479ecc46a0e205f51acf0f40.scope: Deactivated successfully. 
Jan 30 13:25:38.106010 containerd[1480]: time="2025-01-30T13:25:38.105690176Z" level=info msg="shim disconnected" id=db5e2b8bf3c726bda9b2e094080907cd52c311bb479ecc46a0e205f51acf0f40 namespace=k8s.io Jan 30 13:25:38.106010 containerd[1480]: time="2025-01-30T13:25:38.105754458Z" level=warning msg="cleaning up after shim disconnected" id=db5e2b8bf3c726bda9b2e094080907cd52c311bb479ecc46a0e205f51acf0f40 namespace=k8s.io Jan 30 13:25:38.106010 containerd[1480]: time="2025-01-30T13:25:38.105767179Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 13:25:38.478589 containerd[1480]: time="2025-01-30T13:25:38.478435276Z" level=info msg="CreateContainer within sandbox \"e65a68fcfe7e579d8f938022aed729e52c1fbb3dc01ccb190e521918416caf78\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 30 13:25:38.492769 containerd[1480]: time="2025-01-30T13:25:38.492636360Z" level=info msg="CreateContainer within sandbox \"e65a68fcfe7e579d8f938022aed729e52c1fbb3dc01ccb190e521918416caf78\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"21a970151eda46005c0777aeec0e5cf52522aa702691a5c2817bae9107c7b7f0\"" Jan 30 13:25:38.494132 containerd[1480]: time="2025-01-30T13:25:38.493232904Z" level=info msg="StartContainer for \"21a970151eda46005c0777aeec0e5cf52522aa702691a5c2817bae9107c7b7f0\"" Jan 30 13:25:38.524731 systemd[1]: Started cri-containerd-21a970151eda46005c0777aeec0e5cf52522aa702691a5c2817bae9107c7b7f0.scope - libcontainer container 21a970151eda46005c0777aeec0e5cf52522aa702691a5c2817bae9107c7b7f0. Jan 30 13:25:38.553489 containerd[1480]: time="2025-01-30T13:25:38.553407536Z" level=info msg="StartContainer for \"21a970151eda46005c0777aeec0e5cf52522aa702691a5c2817bae9107c7b7f0\" returns successfully" Jan 30 13:25:38.567525 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 30 13:25:38.568086 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. 
Jan 30 13:25:38.568196 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jan 30 13:25:38.576913 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 30 13:25:38.577181 systemd[1]: cri-containerd-21a970151eda46005c0777aeec0e5cf52522aa702691a5c2817bae9107c7b7f0.scope: Deactivated successfully. Jan 30 13:25:38.599197 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 30 13:25:38.608695 containerd[1480]: time="2025-01-30T13:25:38.608574850Z" level=info msg="shim disconnected" id=21a970151eda46005c0777aeec0e5cf52522aa702691a5c2817bae9107c7b7f0 namespace=k8s.io Jan 30 13:25:38.608695 containerd[1480]: time="2025-01-30T13:25:38.608645213Z" level=warning msg="cleaning up after shim disconnected" id=21a970151eda46005c0777aeec0e5cf52522aa702691a5c2817bae9107c7b7f0 namespace=k8s.io Jan 30 13:25:38.608695 containerd[1480]: time="2025-01-30T13:25:38.608653533Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 13:25:38.841286 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-db5e2b8bf3c726bda9b2e094080907cd52c311bb479ecc46a0e205f51acf0f40-rootfs.mount: Deactivated successfully. 
Jan 30 13:25:39.486903 containerd[1480]: time="2025-01-30T13:25:39.486837625Z" level=info msg="CreateContainer within sandbox \"e65a68fcfe7e579d8f938022aed729e52c1fbb3dc01ccb190e521918416caf78\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jan 30 13:25:39.510855 containerd[1480]: time="2025-01-30T13:25:39.510774172Z" level=info msg="CreateContainer within sandbox \"e65a68fcfe7e579d8f938022aed729e52c1fbb3dc01ccb190e521918416caf78\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"5060c9fc96c6139374fd30dfd4edacdcd068040c76c34af85fc7d492a4af3360\"" Jan 30 13:25:39.511874 containerd[1480]: time="2025-01-30T13:25:39.511671407Z" level=info msg="StartContainer for \"5060c9fc96c6139374fd30dfd4edacdcd068040c76c34af85fc7d492a4af3360\"" Jan 30 13:25:39.558758 systemd[1]: Started cri-containerd-5060c9fc96c6139374fd30dfd4edacdcd068040c76c34af85fc7d492a4af3360.scope - libcontainer container 5060c9fc96c6139374fd30dfd4edacdcd068040c76c34af85fc7d492a4af3360. Jan 30 13:25:39.596238 containerd[1480]: time="2025-01-30T13:25:39.596097706Z" level=info msg="StartContainer for \"5060c9fc96c6139374fd30dfd4edacdcd068040c76c34af85fc7d492a4af3360\" returns successfully" Jan 30 13:25:39.596854 systemd[1]: cri-containerd-5060c9fc96c6139374fd30dfd4edacdcd068040c76c34af85fc7d492a4af3360.scope: Deactivated successfully. 
Jan 30 13:25:39.627149 containerd[1480]: time="2025-01-30T13:25:39.626533549Z" level=info msg="shim disconnected" id=5060c9fc96c6139374fd30dfd4edacdcd068040c76c34af85fc7d492a4af3360 namespace=k8s.io Jan 30 13:25:39.627149 containerd[1480]: time="2025-01-30T13:25:39.626608432Z" level=warning msg="cleaning up after shim disconnected" id=5060c9fc96c6139374fd30dfd4edacdcd068040c76c34af85fc7d492a4af3360 namespace=k8s.io Jan 30 13:25:39.627149 containerd[1480]: time="2025-01-30T13:25:39.626619553Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 13:25:39.838461 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5060c9fc96c6139374fd30dfd4edacdcd068040c76c34af85fc7d492a4af3360-rootfs.mount: Deactivated successfully. Jan 30 13:25:40.491910 containerd[1480]: time="2025-01-30T13:25:40.491843507Z" level=info msg="CreateContainer within sandbox \"e65a68fcfe7e579d8f938022aed729e52c1fbb3dc01ccb190e521918416caf78\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jan 30 13:25:40.516251 containerd[1480]: time="2025-01-30T13:25:40.516182545Z" level=info msg="CreateContainer within sandbox \"e65a68fcfe7e579d8f938022aed729e52c1fbb3dc01ccb190e521918416caf78\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"8510a7a99f7097b3f87610072c703121260aaa12d35a8e63371c58c5277f9069\"" Jan 30 13:25:40.517520 containerd[1480]: time="2025-01-30T13:25:40.517017858Z" level=info msg="StartContainer for \"8510a7a99f7097b3f87610072c703121260aaa12d35a8e63371c58c5277f9069\"" Jan 30 13:25:40.547611 systemd[1]: Started cri-containerd-8510a7a99f7097b3f87610072c703121260aaa12d35a8e63371c58c5277f9069.scope - libcontainer container 8510a7a99f7097b3f87610072c703121260aaa12d35a8e63371c58c5277f9069. Jan 30 13:25:40.573787 systemd[1]: cri-containerd-8510a7a99f7097b3f87610072c703121260aaa12d35a8e63371c58c5277f9069.scope: Deactivated successfully. 
Jan 30 13:25:40.575515 containerd[1480]: time="2025-01-30T13:25:40.575378593Z" level=info msg="StartContainer for \"8510a7a99f7097b3f87610072c703121260aaa12d35a8e63371c58c5277f9069\" returns successfully" Jan 30 13:25:40.604005 containerd[1480]: time="2025-01-30T13:25:40.603627345Z" level=info msg="shim disconnected" id=8510a7a99f7097b3f87610072c703121260aaa12d35a8e63371c58c5277f9069 namespace=k8s.io Jan 30 13:25:40.604005 containerd[1480]: time="2025-01-30T13:25:40.603727189Z" level=warning msg="cleaning up after shim disconnected" id=8510a7a99f7097b3f87610072c703121260aaa12d35a8e63371c58c5277f9069 namespace=k8s.io Jan 30 13:25:40.604005 containerd[1480]: time="2025-01-30T13:25:40.603737709Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 13:25:40.838212 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8510a7a99f7097b3f87610072c703121260aaa12d35a8e63371c58c5277f9069-rootfs.mount: Deactivated successfully. Jan 30 13:25:40.921451 containerd[1480]: time="2025-01-30T13:25:40.920955908Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:25:40.922670 containerd[1480]: time="2025-01-30T13:25:40.922614373Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306" Jan 30 13:25:40.924451 containerd[1480]: time="2025-01-30T13:25:40.924166634Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:25:40.927213 containerd[1480]: time="2025-01-30T13:25:40.927026507Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id 
\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 3.10354271s" Jan 30 13:25:40.927213 containerd[1480]: time="2025-01-30T13:25:40.927099550Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Jan 30 13:25:40.931446 containerd[1480]: time="2025-01-30T13:25:40.931259593Z" level=info msg="CreateContainer within sandbox \"2e8c8d3c69081040fb49b921727c36c205ce8645de1b522575e794156646b0bf\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jan 30 13:25:40.953692 containerd[1480]: time="2025-01-30T13:25:40.953646874Z" level=info msg="CreateContainer within sandbox \"2e8c8d3c69081040fb49b921727c36c205ce8645de1b522575e794156646b0bf\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"d1bb1a4794d4d0f14b8db6b5f67e2dd2789a23eba44097d8c3fed4c91b1f1b23\"" Jan 30 13:25:40.954832 containerd[1480]: time="2025-01-30T13:25:40.954624112Z" level=info msg="StartContainer for \"d1bb1a4794d4d0f14b8db6b5f67e2dd2789a23eba44097d8c3fed4c91b1f1b23\"" Jan 30 13:25:40.987645 systemd[1]: Started cri-containerd-d1bb1a4794d4d0f14b8db6b5f67e2dd2789a23eba44097d8c3fed4c91b1f1b23.scope - libcontainer container d1bb1a4794d4d0f14b8db6b5f67e2dd2789a23eba44097d8c3fed4c91b1f1b23. 
Jan 30 13:25:41.017126 containerd[1480]: time="2025-01-30T13:25:41.016895319Z" level=info msg="StartContainer for \"d1bb1a4794d4d0f14b8db6b5f67e2dd2789a23eba44097d8c3fed4c91b1f1b23\" returns successfully" Jan 30 13:25:41.500484 containerd[1480]: time="2025-01-30T13:25:41.498494808Z" level=info msg="CreateContainer within sandbox \"e65a68fcfe7e579d8f938022aed729e52c1fbb3dc01ccb190e521918416caf78\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jan 30 13:25:41.523998 containerd[1480]: time="2025-01-30T13:25:41.523923523Z" level=info msg="CreateContainer within sandbox \"e65a68fcfe7e579d8f938022aed729e52c1fbb3dc01ccb190e521918416caf78\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"28f6211bedfcc99ffc6ba118428855a75ee2ae06cb237f10587ce754bd41e8cb\"" Jan 30 13:25:41.525111 containerd[1480]: time="2025-01-30T13:25:41.524974564Z" level=info msg="StartContainer for \"28f6211bedfcc99ffc6ba118428855a75ee2ae06cb237f10587ce754bd41e8cb\"" Jan 30 13:25:41.570653 systemd[1]: Started cri-containerd-28f6211bedfcc99ffc6ba118428855a75ee2ae06cb237f10587ce754bd41e8cb.scope - libcontainer container 28f6211bedfcc99ffc6ba118428855a75ee2ae06cb237f10587ce754bd41e8cb. 
Jan 30 13:25:41.649830 containerd[1480]: time="2025-01-30T13:25:41.649782329Z" level=info msg="StartContainer for \"28f6211bedfcc99ffc6ba118428855a75ee2ae06cb237f10587ce754bd41e8cb\" returns successfully" Jan 30 13:25:41.772190 kubelet[2764]: I0130 13:25:41.771786 2764 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-swtxx" podStartSLOduration=1.5443524320000002 podStartE2EDuration="10.771767423s" podCreationTimestamp="2025-01-30 13:25:31 +0000 UTC" firstStartedPulling="2025-01-30 13:25:31.700526432 +0000 UTC m=+6.423593536" lastFinishedPulling="2025-01-30 13:25:40.927941423 +0000 UTC m=+15.651008527" observedRunningTime="2025-01-30 13:25:41.623358494 +0000 UTC m=+16.346425598" watchObservedRunningTime="2025-01-30 13:25:41.771767423 +0000 UTC m=+16.494834527" Jan 30 13:25:41.821512 kubelet[2764]: I0130 13:25:41.821281 2764 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Jan 30 13:25:41.994578 systemd[1]: Created slice kubepods-burstable-pod4eed96d9_26c3_4188_99de_a7d234f431f2.slice - libcontainer container kubepods-burstable-pod4eed96d9_26c3_4188_99de_a7d234f431f2.slice. 
Jan 30 13:25:41.996768 kubelet[2764]: I0130 13:25:41.996729 2764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4eed96d9-26c3-4188-99de-a7d234f431f2-config-volume\") pod \"coredns-6f6b679f8f-gktft\" (UID: \"4eed96d9-26c3-4188-99de-a7d234f431f2\") " pod="kube-system/coredns-6f6b679f8f-gktft" Jan 30 13:25:41.996870 kubelet[2764]: I0130 13:25:41.996770 2764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ssp5x\" (UniqueName: \"kubernetes.io/projected/4eed96d9-26c3-4188-99de-a7d234f431f2-kube-api-access-ssp5x\") pod \"coredns-6f6b679f8f-gktft\" (UID: \"4eed96d9-26c3-4188-99de-a7d234f431f2\") " pod="kube-system/coredns-6f6b679f8f-gktft" Jan 30 13:25:42.005384 systemd[1]: Created slice kubepods-burstable-pod25de5240_14a4_4fcc_b8d8_891c7f9cd6e8.slice - libcontainer container kubepods-burstable-pod25de5240_14a4_4fcc_b8d8_891c7f9cd6e8.slice. 
Jan 30 13:25:42.097614 kubelet[2764]: I0130 13:25:42.097558 2764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wxk8n\" (UniqueName: \"kubernetes.io/projected/25de5240-14a4-4fcc-b8d8-891c7f9cd6e8-kube-api-access-wxk8n\") pod \"coredns-6f6b679f8f-2rlbd\" (UID: \"25de5240-14a4-4fcc-b8d8-891c7f9cd6e8\") " pod="kube-system/coredns-6f6b679f8f-2rlbd" Jan 30 13:25:42.097754 kubelet[2764]: I0130 13:25:42.097638 2764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/25de5240-14a4-4fcc-b8d8-891c7f9cd6e8-config-volume\") pod \"coredns-6f6b679f8f-2rlbd\" (UID: \"25de5240-14a4-4fcc-b8d8-891c7f9cd6e8\") " pod="kube-system/coredns-6f6b679f8f-2rlbd" Jan 30 13:25:42.303897 containerd[1480]: time="2025-01-30T13:25:42.303572778Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-gktft,Uid:4eed96d9-26c3-4188-99de-a7d234f431f2,Namespace:kube-system,Attempt:0,}" Jan 30 13:25:42.312092 containerd[1480]: time="2025-01-30T13:25:42.311405723Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-2rlbd,Uid:25de5240-14a4-4fcc-b8d8-891c7f9cd6e8,Namespace:kube-system,Attempt:0,}" Jan 30 13:25:44.902997 systemd-networkd[1378]: cilium_host: Link UP Jan 30 13:25:44.903643 systemd-networkd[1378]: cilium_net: Link UP Jan 30 13:25:44.904459 systemd-networkd[1378]: cilium_net: Gained carrier Jan 30 13:25:44.904748 systemd-networkd[1378]: cilium_host: Gained carrier Jan 30 13:25:45.013757 systemd-networkd[1378]: cilium_vxlan: Link UP Jan 30 13:25:45.014176 systemd-networkd[1378]: cilium_vxlan: Gained carrier Jan 30 13:25:45.259272 systemd-networkd[1378]: cilium_host: Gained IPv6LL Jan 30 13:25:45.294479 kernel: NET: Registered PF_ALG protocol family Jan 30 13:25:45.729712 systemd-networkd[1378]: cilium_net: Gained IPv6LL Jan 30 13:25:46.028030 systemd-networkd[1378]: lxc_health: Link 
UP Jan 30 13:25:46.041352 systemd-networkd[1378]: lxc_health: Gained carrier Jan 30 13:25:46.400636 systemd-networkd[1378]: lxcae0cfdfc74c8: Link UP Jan 30 13:25:46.419158 systemd-networkd[1378]: lxc3f99dab0400a: Link UP Jan 30 13:25:46.420442 kernel: eth0: renamed from tmp09f3a Jan 30 13:25:46.425508 kernel: eth0: renamed from tmpbfbc7 Jan 30 13:25:46.431486 systemd-networkd[1378]: lxcae0cfdfc74c8: Gained carrier Jan 30 13:25:46.435212 systemd-networkd[1378]: lxc3f99dab0400a: Gained carrier Jan 30 13:25:46.945760 systemd-networkd[1378]: cilium_vxlan: Gained IPv6LL Jan 30 13:25:47.374699 kubelet[2764]: I0130 13:25:47.374412 2764 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-5lhkb" podStartSLOduration=11.020388755 podStartE2EDuration="17.37439618s" podCreationTimestamp="2025-01-30 13:25:30 +0000 UTC" firstStartedPulling="2025-01-30 13:25:31.468646098 +0000 UTC m=+6.191713202" lastFinishedPulling="2025-01-30 13:25:37.822653523 +0000 UTC m=+12.545720627" observedRunningTime="2025-01-30 13:25:42.52447374 +0000 UTC m=+17.247540884" watchObservedRunningTime="2025-01-30 13:25:47.37439618 +0000 UTC m=+22.097463284" Jan 30 13:25:47.713694 systemd-networkd[1378]: lxcae0cfdfc74c8: Gained IPv6LL Jan 30 13:25:47.713958 systemd-networkd[1378]: lxc3f99dab0400a: Gained IPv6LL Jan 30 13:25:47.969677 systemd-networkd[1378]: lxc_health: Gained IPv6LL Jan 30 13:25:50.523385 containerd[1480]: time="2025-01-30T13:25:50.522345309Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:25:50.523385 containerd[1480]: time="2025-01-30T13:25:50.522400911Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:25:50.525664 containerd[1480]: time="2025-01-30T13:25:50.524660356Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:25:50.531445 containerd[1480]: time="2025-01-30T13:25:50.526125732Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:25:50.531445 containerd[1480]: time="2025-01-30T13:25:50.529734427Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:25:50.531445 containerd[1480]: time="2025-01-30T13:25:50.529832471Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:25:50.531445 containerd[1480]: time="2025-01-30T13:25:50.529865752Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:25:50.531445 containerd[1480]: time="2025-01-30T13:25:50.530530097Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:25:50.559848 systemd[1]: Started cri-containerd-09f3a5e6af76c6b1d5eb0e15fe98fd6c5773212555135ed2e312a0792f2f47bf.scope - libcontainer container 09f3a5e6af76c6b1d5eb0e15fe98fd6c5773212555135ed2e312a0792f2f47bf. Jan 30 13:25:50.568260 systemd[1]: run-containerd-runc-k8s.io-bfbc75be205a82dd268a775ee40dcb4deac33d0c517850586bbacd628d4a88c4-runc.rA0x3U.mount: Deactivated successfully. Jan 30 13:25:50.579871 systemd[1]: Started cri-containerd-bfbc75be205a82dd268a775ee40dcb4deac33d0c517850586bbacd628d4a88c4.scope - libcontainer container bfbc75be205a82dd268a775ee40dcb4deac33d0c517850586bbacd628d4a88c4. 
Jan 30 13:25:50.632030 containerd[1480]: time="2025-01-30T13:25:50.631992111Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-gktft,Uid:4eed96d9-26c3-4188-99de-a7d234f431f2,Namespace:kube-system,Attempt:0,} returns sandbox id \"09f3a5e6af76c6b1d5eb0e15fe98fd6c5773212555135ed2e312a0792f2f47bf\"" Jan 30 13:25:50.638000 containerd[1480]: time="2025-01-30T13:25:50.637797210Z" level=info msg="CreateContainer within sandbox \"09f3a5e6af76c6b1d5eb0e15fe98fd6c5773212555135ed2e312a0792f2f47bf\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 30 13:25:50.646898 containerd[1480]: time="2025-01-30T13:25:50.646436974Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-2rlbd,Uid:25de5240-14a4-4fcc-b8d8-891c7f9cd6e8,Namespace:kube-system,Attempt:0,} returns sandbox id \"bfbc75be205a82dd268a775ee40dcb4deac33d0c517850586bbacd628d4a88c4\"" Jan 30 13:25:50.651749 containerd[1480]: time="2025-01-30T13:25:50.651673251Z" level=info msg="CreateContainer within sandbox \"bfbc75be205a82dd268a775ee40dcb4deac33d0c517850586bbacd628d4a88c4\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 30 13:25:50.664893 containerd[1480]: time="2025-01-30T13:25:50.664693701Z" level=info msg="CreateContainer within sandbox \"09f3a5e6af76c6b1d5eb0e15fe98fd6c5773212555135ed2e312a0792f2f47bf\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"d8b85eb4ff7d9eebb08e6deb6e4cfdac87a6dd726d248e23d941d710a24b7369\"" Jan 30 13:25:50.667933 containerd[1480]: time="2025-01-30T13:25:50.667813338Z" level=info msg="StartContainer for \"d8b85eb4ff7d9eebb08e6deb6e4cfdac87a6dd726d248e23d941d710a24b7369\"" Jan 30 13:25:50.679209 containerd[1480]: time="2025-01-30T13:25:50.677329296Z" level=info msg="CreateContainer within sandbox \"bfbc75be205a82dd268a775ee40dcb4deac33d0c517850586bbacd628d4a88c4\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id 
\"dd58702b5a749e5605c65201b9d579d6b5a33689c8b0c373a6edadfa40708a02\"" Jan 30 13:25:50.680537 containerd[1480]: time="2025-01-30T13:25:50.679541819Z" level=info msg="StartContainer for \"dd58702b5a749e5605c65201b9d579d6b5a33689c8b0c373a6edadfa40708a02\"" Jan 30 13:25:50.717619 systemd[1]: Started cri-containerd-d8b85eb4ff7d9eebb08e6deb6e4cfdac87a6dd726d248e23d941d710a24b7369.scope - libcontainer container d8b85eb4ff7d9eebb08e6deb6e4cfdac87a6dd726d248e23d941d710a24b7369. Jan 30 13:25:50.723006 systemd[1]: Started cri-containerd-dd58702b5a749e5605c65201b9d579d6b5a33689c8b0c373a6edadfa40708a02.scope - libcontainer container dd58702b5a749e5605c65201b9d579d6b5a33689c8b0c373a6edadfa40708a02. Jan 30 13:25:50.762140 containerd[1480]: time="2025-01-30T13:25:50.762088842Z" level=info msg="StartContainer for \"d8b85eb4ff7d9eebb08e6deb6e4cfdac87a6dd726d248e23d941d710a24b7369\" returns successfully" Jan 30 13:25:50.770637 containerd[1480]: time="2025-01-30T13:25:50.770593322Z" level=info msg="StartContainer for \"dd58702b5a749e5605c65201b9d579d6b5a33689c8b0c373a6edadfa40708a02\" returns successfully" Jan 30 13:25:51.576385 kubelet[2764]: I0130 13:25:51.576247 2764 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-gktft" podStartSLOduration=20.576226204 podStartE2EDuration="20.576226204s" podCreationTimestamp="2025-01-30 13:25:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 13:25:51.554089015 +0000 UTC m=+26.277156199" watchObservedRunningTime="2025-01-30 13:25:51.576226204 +0000 UTC m=+26.299293308" Jan 30 13:25:51.600267 kubelet[2764]: I0130 13:25:51.600154 2764 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-2rlbd" podStartSLOduration=20.600130459 podStartE2EDuration="20.600130459s" podCreationTimestamp="2025-01-30 13:25:31 +0000 UTC" firstStartedPulling="0001-01-01 
00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 13:25:51.598230508 +0000 UTC m=+26.321297612" watchObservedRunningTime="2025-01-30 13:25:51.600130459 +0000 UTC m=+26.323197603" Jan 30 13:30:14.806139 systemd[1]: Started sshd@7-5.75.240.180:22-139.178.68.195:55686.service - OpenSSH per-connection server daemon (139.178.68.195:55686). Jan 30 13:30:15.802038 sshd[4179]: Accepted publickey for core from 139.178.68.195 port 55686 ssh2: RSA SHA256:RAqiXcD7auv4NtIWZl6x8O0m1t6BnLWhbotdWAXUAIk Jan 30 13:30:15.805156 sshd-session[4179]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:30:15.811964 systemd-logind[1463]: New session 8 of user core. Jan 30 13:30:15.825787 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 30 13:30:16.581838 sshd[4181]: Connection closed by 139.178.68.195 port 55686 Jan 30 13:30:16.582830 sshd-session[4179]: pam_unix(sshd:session): session closed for user core Jan 30 13:30:16.587056 systemd[1]: sshd@7-5.75.240.180:22-139.178.68.195:55686.service: Deactivated successfully. Jan 30 13:30:16.588771 systemd[1]: session-8.scope: Deactivated successfully. Jan 30 13:30:16.589697 systemd-logind[1463]: Session 8 logged out. Waiting for processes to exit. Jan 30 13:30:16.591006 systemd-logind[1463]: Removed session 8. Jan 30 13:30:21.761964 systemd[1]: Started sshd@8-5.75.240.180:22-139.178.68.195:42472.service - OpenSSH per-connection server daemon (139.178.68.195:42472). Jan 30 13:30:22.762141 sshd[4193]: Accepted publickey for core from 139.178.68.195 port 42472 ssh2: RSA SHA256:RAqiXcD7auv4NtIWZl6x8O0m1t6BnLWhbotdWAXUAIk Jan 30 13:30:22.764624 sshd-session[4193]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:30:22.775378 systemd-logind[1463]: New session 9 of user core. Jan 30 13:30:22.778885 systemd[1]: Started session-9.scope - Session 9 of User core. 
Jan 30 13:30:23.530700 sshd[4195]: Connection closed by 139.178.68.195 port 42472 Jan 30 13:30:23.531628 sshd-session[4193]: pam_unix(sshd:session): session closed for user core Jan 30 13:30:23.535924 systemd-logind[1463]: Session 9 logged out. Waiting for processes to exit. Jan 30 13:30:23.537647 systemd[1]: sshd@8-5.75.240.180:22-139.178.68.195:42472.service: Deactivated successfully. Jan 30 13:30:23.540656 systemd[1]: session-9.scope: Deactivated successfully. Jan 30 13:30:23.541979 systemd-logind[1463]: Removed session 9. Jan 30 13:30:28.705011 systemd[1]: Started sshd@9-5.75.240.180:22-139.178.68.195:52312.service - OpenSSH per-connection server daemon (139.178.68.195:52312). Jan 30 13:30:29.688011 sshd[4208]: Accepted publickey for core from 139.178.68.195 port 52312 ssh2: RSA SHA256:RAqiXcD7auv4NtIWZl6x8O0m1t6BnLWhbotdWAXUAIk Jan 30 13:30:29.689993 sshd-session[4208]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:30:29.696951 systemd-logind[1463]: New session 10 of user core. Jan 30 13:30:29.700628 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 30 13:30:30.451387 sshd[4210]: Connection closed by 139.178.68.195 port 52312 Jan 30 13:30:30.450683 sshd-session[4208]: pam_unix(sshd:session): session closed for user core Jan 30 13:30:30.455968 systemd[1]: sshd@9-5.75.240.180:22-139.178.68.195:52312.service: Deactivated successfully. Jan 30 13:30:30.458321 systemd[1]: session-10.scope: Deactivated successfully. Jan 30 13:30:30.460826 systemd-logind[1463]: Session 10 logged out. Waiting for processes to exit. Jan 30 13:30:30.462309 systemd-logind[1463]: Removed session 10. Jan 30 13:30:30.629882 systemd[1]: Started sshd@10-5.75.240.180:22-139.178.68.195:52324.service - OpenSSH per-connection server daemon (139.178.68.195:52324). 
Jan 30 13:30:31.614750 sshd[4222]: Accepted publickey for core from 139.178.68.195 port 52324 ssh2: RSA SHA256:RAqiXcD7auv4NtIWZl6x8O0m1t6BnLWhbotdWAXUAIk Jan 30 13:30:31.617181 sshd-session[4222]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:30:31.622164 systemd-logind[1463]: New session 11 of user core. Jan 30 13:30:31.627650 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 30 13:30:32.432779 sshd[4224]: Connection closed by 139.178.68.195 port 52324 Jan 30 13:30:32.435153 sshd-session[4222]: pam_unix(sshd:session): session closed for user core Jan 30 13:30:32.441204 systemd[1]: sshd@10-5.75.240.180:22-139.178.68.195:52324.service: Deactivated successfully. Jan 30 13:30:32.444323 systemd[1]: session-11.scope: Deactivated successfully. Jan 30 13:30:32.446390 systemd-logind[1463]: Session 11 logged out. Waiting for processes to exit. Jan 30 13:30:32.448863 systemd-logind[1463]: Removed session 11. Jan 30 13:30:32.607005 systemd[1]: Started sshd@11-5.75.240.180:22-139.178.68.195:52334.service - OpenSSH per-connection server daemon (139.178.68.195:52334). Jan 30 13:30:33.588156 sshd[4235]: Accepted publickey for core from 139.178.68.195 port 52334 ssh2: RSA SHA256:RAqiXcD7auv4NtIWZl6x8O0m1t6BnLWhbotdWAXUAIk Jan 30 13:30:33.590537 sshd-session[4235]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:30:33.597059 systemd-logind[1463]: New session 12 of user core. Jan 30 13:30:33.606985 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 30 13:30:34.344675 sshd[4237]: Connection closed by 139.178.68.195 port 52334 Jan 30 13:30:34.345506 sshd-session[4235]: pam_unix(sshd:session): session closed for user core Jan 30 13:30:34.350697 systemd-logind[1463]: Session 12 logged out. Waiting for processes to exit. Jan 30 13:30:34.351627 systemd[1]: sshd@11-5.75.240.180:22-139.178.68.195:52334.service: Deactivated successfully. 
Jan 30 13:30:34.355236 systemd[1]: session-12.scope: Deactivated successfully. Jan 30 13:30:34.356897 systemd-logind[1463]: Removed session 12. Jan 30 13:30:39.520574 systemd[1]: Started sshd@12-5.75.240.180:22-139.178.68.195:52068.service - OpenSSH per-connection server daemon (139.178.68.195:52068). Jan 30 13:30:40.513681 sshd[4248]: Accepted publickey for core from 139.178.68.195 port 52068 ssh2: RSA SHA256:RAqiXcD7auv4NtIWZl6x8O0m1t6BnLWhbotdWAXUAIk Jan 30 13:30:40.516384 sshd-session[4248]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:30:40.523507 systemd-logind[1463]: New session 13 of user core. Jan 30 13:30:40.526599 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 30 13:30:41.271209 sshd[4250]: Connection closed by 139.178.68.195 port 52068 Jan 30 13:30:41.270708 sshd-session[4248]: pam_unix(sshd:session): session closed for user core Jan 30 13:30:41.278918 systemd[1]: sshd@12-5.75.240.180:22-139.178.68.195:52068.service: Deactivated successfully. Jan 30 13:30:41.282683 systemd[1]: session-13.scope: Deactivated successfully. Jan 30 13:30:41.285307 systemd-logind[1463]: Session 13 logged out. Waiting for processes to exit. Jan 30 13:30:41.286923 systemd-logind[1463]: Removed session 13. Jan 30 13:30:41.442885 systemd[1]: Started sshd@13-5.75.240.180:22-139.178.68.195:52084.service - OpenSSH per-connection server daemon (139.178.68.195:52084). Jan 30 13:30:42.436318 sshd[4262]: Accepted publickey for core from 139.178.68.195 port 52084 ssh2: RSA SHA256:RAqiXcD7auv4NtIWZl6x8O0m1t6BnLWhbotdWAXUAIk Jan 30 13:30:42.437169 sshd-session[4262]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:30:42.443228 systemd-logind[1463]: New session 14 of user core. Jan 30 13:30:42.449832 systemd[1]: Started session-14.scope - Session 14 of User core. 
Jan 30 13:30:43.236496 sshd[4264]: Connection closed by 139.178.68.195 port 52084 Jan 30 13:30:43.237490 sshd-session[4262]: pam_unix(sshd:session): session closed for user core Jan 30 13:30:43.243685 systemd-logind[1463]: Session 14 logged out. Waiting for processes to exit. Jan 30 13:30:43.243687 systemd[1]: sshd@13-5.75.240.180:22-139.178.68.195:52084.service: Deactivated successfully. Jan 30 13:30:43.245860 systemd[1]: session-14.scope: Deactivated successfully. Jan 30 13:30:43.247105 systemd-logind[1463]: Removed session 14. Jan 30 13:30:43.405331 systemd[1]: Started sshd@14-5.75.240.180:22-139.178.68.195:52100.service - OpenSSH per-connection server daemon (139.178.68.195:52100). Jan 30 13:30:44.392290 sshd[4274]: Accepted publickey for core from 139.178.68.195 port 52100 ssh2: RSA SHA256:RAqiXcD7auv4NtIWZl6x8O0m1t6BnLWhbotdWAXUAIk Jan 30 13:30:44.395158 sshd-session[4274]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:30:44.401617 systemd-logind[1463]: New session 15 of user core. Jan 30 13:30:44.410682 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 30 13:30:46.500735 sshd[4276]: Connection closed by 139.178.68.195 port 52100 Jan 30 13:30:46.500576 sshd-session[4274]: pam_unix(sshd:session): session closed for user core Jan 30 13:30:46.505811 systemd-logind[1463]: Session 15 logged out. Waiting for processes to exit. Jan 30 13:30:46.507210 systemd[1]: sshd@14-5.75.240.180:22-139.178.68.195:52100.service: Deactivated successfully. Jan 30 13:30:46.509728 systemd[1]: session-15.scope: Deactivated successfully. Jan 30 13:30:46.511303 systemd-logind[1463]: Removed session 15. Jan 30 13:30:46.681376 systemd[1]: Started sshd@15-5.75.240.180:22-139.178.68.195:37244.service - OpenSSH per-connection server daemon (139.178.68.195:37244). 
Jan 30 13:30:47.665315 sshd[4292]: Accepted publickey for core from 139.178.68.195 port 37244 ssh2: RSA SHA256:RAqiXcD7auv4NtIWZl6x8O0m1t6BnLWhbotdWAXUAIk Jan 30 13:30:47.667187 sshd-session[4292]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:30:47.671880 systemd-logind[1463]: New session 16 of user core. Jan 30 13:30:47.681762 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 30 13:30:48.544494 sshd[4294]: Connection closed by 139.178.68.195 port 37244 Jan 30 13:30:48.545168 sshd-session[4292]: pam_unix(sshd:session): session closed for user core Jan 30 13:30:48.548984 systemd-logind[1463]: Session 16 logged out. Waiting for processes to exit. Jan 30 13:30:48.549832 systemd[1]: sshd@15-5.75.240.180:22-139.178.68.195:37244.service: Deactivated successfully. Jan 30 13:30:48.552591 systemd[1]: session-16.scope: Deactivated successfully. Jan 30 13:30:48.554361 systemd-logind[1463]: Removed session 16. Jan 30 13:30:48.716886 systemd[1]: Started sshd@16-5.75.240.180:22-139.178.68.195:37256.service - OpenSSH per-connection server daemon (139.178.68.195:37256). Jan 30 13:30:49.693575 sshd[4303]: Accepted publickey for core from 139.178.68.195 port 37256 ssh2: RSA SHA256:RAqiXcD7auv4NtIWZl6x8O0m1t6BnLWhbotdWAXUAIk Jan 30 13:30:49.695621 sshd-session[4303]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:30:49.701660 systemd-logind[1463]: New session 17 of user core. Jan 30 13:30:49.706659 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 30 13:30:50.443273 sshd[4305]: Connection closed by 139.178.68.195 port 37256 Jan 30 13:30:50.444377 sshd-session[4303]: pam_unix(sshd:session): session closed for user core Jan 30 13:30:50.448854 systemd[1]: sshd@16-5.75.240.180:22-139.178.68.195:37256.service: Deactivated successfully. Jan 30 13:30:50.451101 systemd[1]: session-17.scope: Deactivated successfully. 
Jan 30 13:30:50.454159 systemd-logind[1463]: Session 17 logged out. Waiting for processes to exit. Jan 30 13:30:50.455682 systemd-logind[1463]: Removed session 17. Jan 30 13:30:55.615749 systemd[1]: Started sshd@17-5.75.240.180:22-139.178.68.195:45256.service - OpenSSH per-connection server daemon (139.178.68.195:45256). Jan 30 13:30:56.615177 sshd[4319]: Accepted publickey for core from 139.178.68.195 port 45256 ssh2: RSA SHA256:RAqiXcD7auv4NtIWZl6x8O0m1t6BnLWhbotdWAXUAIk Jan 30 13:30:56.617397 sshd-session[4319]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:30:56.624676 systemd-logind[1463]: New session 18 of user core. Jan 30 13:30:56.635726 systemd[1]: Started session-18.scope - Session 18 of User core. Jan 30 13:30:57.370913 sshd[4321]: Connection closed by 139.178.68.195 port 45256 Jan 30 13:30:57.370775 sshd-session[4319]: pam_unix(sshd:session): session closed for user core Jan 30 13:30:57.376520 systemd[1]: sshd@17-5.75.240.180:22-139.178.68.195:45256.service: Deactivated successfully. Jan 30 13:30:57.379369 systemd[1]: session-18.scope: Deactivated successfully. Jan 30 13:30:57.381755 systemd-logind[1463]: Session 18 logged out. Waiting for processes to exit. Jan 30 13:30:57.383481 systemd-logind[1463]: Removed session 18. Jan 30 13:31:02.555864 systemd[1]: Started sshd@18-5.75.240.180:22-139.178.68.195:45266.service - OpenSSH per-connection server daemon (139.178.68.195:45266). Jan 30 13:31:03.549590 sshd[4333]: Accepted publickey for core from 139.178.68.195 port 45266 ssh2: RSA SHA256:RAqiXcD7auv4NtIWZl6x8O0m1t6BnLWhbotdWAXUAIk Jan 30 13:31:03.552738 sshd-session[4333]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:31:03.563178 systemd-logind[1463]: New session 19 of user core. Jan 30 13:31:03.567102 systemd[1]: Started session-19.scope - Session 19 of User core. 
Jan 30 13:31:04.292568 sshd[4335]: Connection closed by 139.178.68.195 port 45266 Jan 30 13:31:04.293662 sshd-session[4333]: pam_unix(sshd:session): session closed for user core Jan 30 13:31:04.299354 systemd[1]: sshd@18-5.75.240.180:22-139.178.68.195:45266.service: Deactivated successfully. Jan 30 13:31:04.301936 systemd[1]: session-19.scope: Deactivated successfully. Jan 30 13:31:04.303394 systemd-logind[1463]: Session 19 logged out. Waiting for processes to exit. Jan 30 13:31:04.304376 systemd-logind[1463]: Removed session 19. Jan 30 13:31:04.476682 systemd[1]: Started sshd@19-5.75.240.180:22-139.178.68.195:45282.service - OpenSSH per-connection server daemon (139.178.68.195:45282). Jan 30 13:31:05.460094 sshd[4346]: Accepted publickey for core from 139.178.68.195 port 45282 ssh2: RSA SHA256:RAqiXcD7auv4NtIWZl6x8O0m1t6BnLWhbotdWAXUAIk Jan 30 13:31:05.462397 sshd-session[4346]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:31:05.469902 systemd-logind[1463]: New session 20 of user core. Jan 30 13:31:05.479755 systemd[1]: Started session-20.scope - Session 20 of User core. 
Jan 30 13:31:08.170083 containerd[1480]: time="2025-01-30T13:31:08.170011488Z" level=info msg="StopContainer for \"d1bb1a4794d4d0f14b8db6b5f67e2dd2789a23eba44097d8c3fed4c91b1f1b23\" with timeout 30 (s)" Jan 30 13:31:08.171228 containerd[1480]: time="2025-01-30T13:31:08.170879189Z" level=info msg="Stop container \"d1bb1a4794d4d0f14b8db6b5f67e2dd2789a23eba44097d8c3fed4c91b1f1b23\" with signal terminated" Jan 30 13:31:08.172562 containerd[1480]: time="2025-01-30T13:31:08.172458987Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 30 13:31:08.183664 containerd[1480]: time="2025-01-30T13:31:08.183358288Z" level=info msg="StopContainer for \"28f6211bedfcc99ffc6ba118428855a75ee2ae06cb237f10587ce754bd41e8cb\" with timeout 2 (s)" Jan 30 13:31:08.185629 containerd[1480]: time="2025-01-30T13:31:08.185458618Z" level=info msg="Stop container \"28f6211bedfcc99ffc6ba118428855a75ee2ae06cb237f10587ce754bd41e8cb\" with signal terminated" Jan 30 13:31:08.190994 systemd[1]: cri-containerd-d1bb1a4794d4d0f14b8db6b5f67e2dd2789a23eba44097d8c3fed4c91b1f1b23.scope: Deactivated successfully. Jan 30 13:31:08.195069 systemd-networkd[1378]: lxc_health: Link DOWN Jan 30 13:31:08.195079 systemd-networkd[1378]: lxc_health: Lost carrier Jan 30 13:31:08.221605 systemd[1]: cri-containerd-28f6211bedfcc99ffc6ba118428855a75ee2ae06cb237f10587ce754bd41e8cb.scope: Deactivated successfully. Jan 30 13:31:08.222279 systemd[1]: cri-containerd-28f6211bedfcc99ffc6ba118428855a75ee2ae06cb237f10587ce754bd41e8cb.scope: Consumed 7.962s CPU time. Jan 30 13:31:08.242013 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d1bb1a4794d4d0f14b8db6b5f67e2dd2789a23eba44097d8c3fed4c91b1f1b23-rootfs.mount: Deactivated successfully. 
Jan 30 13:31:08.254525 containerd[1480]: time="2025-01-30T13:31:08.254455509Z" level=info msg="shim disconnected" id=d1bb1a4794d4d0f14b8db6b5f67e2dd2789a23eba44097d8c3fed4c91b1f1b23 namespace=k8s.io Jan 30 13:31:08.254525 containerd[1480]: time="2025-01-30T13:31:08.254515551Z" level=warning msg="cleaning up after shim disconnected" id=d1bb1a4794d4d0f14b8db6b5f67e2dd2789a23eba44097d8c3fed4c91b1f1b23 namespace=k8s.io Jan 30 13:31:08.254525 containerd[1480]: time="2025-01-30T13:31:08.254524831Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 13:31:08.269856 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-28f6211bedfcc99ffc6ba118428855a75ee2ae06cb237f10587ce754bd41e8cb-rootfs.mount: Deactivated successfully. Jan 30 13:31:08.283279 containerd[1480]: time="2025-01-30T13:31:08.283206838Z" level=info msg="StopContainer for \"d1bb1a4794d4d0f14b8db6b5f67e2dd2789a23eba44097d8c3fed4c91b1f1b23\" returns successfully" Jan 30 13:31:08.284224 containerd[1480]: time="2025-01-30T13:31:08.284196461Z" level=info msg="StopPodSandbox for \"2e8c8d3c69081040fb49b921727c36c205ce8645de1b522575e794156646b0bf\"" Jan 30 13:31:08.284507 containerd[1480]: time="2025-01-30T13:31:08.284351865Z" level=info msg="shim disconnected" id=28f6211bedfcc99ffc6ba118428855a75ee2ae06cb237f10587ce754bd41e8cb namespace=k8s.io Jan 30 13:31:08.284578 containerd[1480]: time="2025-01-30T13:31:08.284515429Z" level=warning msg="cleaning up after shim disconnected" id=28f6211bedfcc99ffc6ba118428855a75ee2ae06cb237f10587ce754bd41e8cb namespace=k8s.io Jan 30 13:31:08.284578 containerd[1480]: time="2025-01-30T13:31:08.284526629Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 13:31:08.284871 containerd[1480]: time="2025-01-30T13:31:08.284359185Z" level=info msg="Container to stop \"d1bb1a4794d4d0f14b8db6b5f67e2dd2789a23eba44097d8c3fed4c91b1f1b23\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 30 13:31:08.286846 systemd[1]: 
run-containerd-io.containerd.grpc.v1.cri-sandboxes-2e8c8d3c69081040fb49b921727c36c205ce8645de1b522575e794156646b0bf-shm.mount: Deactivated successfully. Jan 30 13:31:08.298697 systemd[1]: cri-containerd-2e8c8d3c69081040fb49b921727c36c205ce8645de1b522575e794156646b0bf.scope: Deactivated successfully. Jan 30 13:31:08.304133 containerd[1480]: time="2025-01-30T13:31:08.304088978Z" level=info msg="StopContainer for \"28f6211bedfcc99ffc6ba118428855a75ee2ae06cb237f10587ce754bd41e8cb\" returns successfully" Jan 30 13:31:08.304924 containerd[1480]: time="2025-01-30T13:31:08.304884877Z" level=info msg="StopPodSandbox for \"e65a68fcfe7e579d8f938022aed729e52c1fbb3dc01ccb190e521918416caf78\"" Jan 30 13:31:08.305070 containerd[1480]: time="2025-01-30T13:31:08.305053161Z" level=info msg="Container to stop \"8510a7a99f7097b3f87610072c703121260aaa12d35a8e63371c58c5277f9069\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 30 13:31:08.305234 containerd[1480]: time="2025-01-30T13:31:08.305216125Z" level=info msg="Container to stop \"db5e2b8bf3c726bda9b2e094080907cd52c311bb479ecc46a0e205f51acf0f40\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 30 13:31:08.305305 containerd[1480]: time="2025-01-30T13:31:08.305292086Z" level=info msg="Container to stop \"21a970151eda46005c0777aeec0e5cf52522aa702691a5c2817bae9107c7b7f0\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 30 13:31:08.305438 containerd[1480]: time="2025-01-30T13:31:08.305374728Z" level=info msg="Container to stop \"5060c9fc96c6139374fd30dfd4edacdcd068040c76c34af85fc7d492a4af3360\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 30 13:31:08.305438 containerd[1480]: time="2025-01-30T13:31:08.305389889Z" level=info msg="Container to stop \"28f6211bedfcc99ffc6ba118428855a75ee2ae06cb237f10587ce754bd41e8cb\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 30 13:31:08.311028 systemd[1]: 
cri-containerd-e65a68fcfe7e579d8f938022aed729e52c1fbb3dc01ccb190e521918416caf78.scope: Deactivated successfully. Jan 30 13:31:08.336225 containerd[1480]: time="2025-01-30T13:31:08.334991277Z" level=info msg="shim disconnected" id=2e8c8d3c69081040fb49b921727c36c205ce8645de1b522575e794156646b0bf namespace=k8s.io Jan 30 13:31:08.336225 containerd[1480]: time="2025-01-30T13:31:08.336061383Z" level=warning msg="cleaning up after shim disconnected" id=2e8c8d3c69081040fb49b921727c36c205ce8645de1b522575e794156646b0bf namespace=k8s.io Jan 30 13:31:08.336225 containerd[1480]: time="2025-01-30T13:31:08.336072143Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 13:31:08.352471 containerd[1480]: time="2025-01-30T13:31:08.352397334Z" level=info msg="shim disconnected" id=e65a68fcfe7e579d8f938022aed729e52c1fbb3dc01ccb190e521918416caf78 namespace=k8s.io Jan 30 13:31:08.352471 containerd[1480]: time="2025-01-30T13:31:08.352463695Z" level=warning msg="cleaning up after shim disconnected" id=e65a68fcfe7e579d8f938022aed729e52c1fbb3dc01ccb190e521918416caf78 namespace=k8s.io Jan 30 13:31:08.352471 containerd[1480]: time="2025-01-30T13:31:08.352473616Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 13:31:08.361155 containerd[1480]: time="2025-01-30T13:31:08.360658452Z" level=info msg="TearDown network for sandbox \"2e8c8d3c69081040fb49b921727c36c205ce8645de1b522575e794156646b0bf\" successfully" Jan 30 13:31:08.361155 containerd[1480]: time="2025-01-30T13:31:08.360697373Z" level=info msg="StopPodSandbox for \"2e8c8d3c69081040fb49b921727c36c205ce8645de1b522575e794156646b0bf\" returns successfully" Jan 30 13:31:08.372332 containerd[1480]: time="2025-01-30T13:31:08.372268449Z" level=warning msg="cleanup warnings time=\"2025-01-30T13:31:08Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jan 30 13:31:08.373922 containerd[1480]: 
time="2025-01-30T13:31:08.373848647Z" level=info msg="TearDown network for sandbox \"e65a68fcfe7e579d8f938022aed729e52c1fbb3dc01ccb190e521918416caf78\" successfully" Jan 30 13:31:08.373922 containerd[1480]: time="2025-01-30T13:31:08.373884928Z" level=info msg="StopPodSandbox for \"e65a68fcfe7e579d8f938022aed729e52c1fbb3dc01ccb190e521918416caf78\" returns successfully" Jan 30 13:31:08.523048 kubelet[2764]: I0130 13:31:08.522866 2764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5kzrb\" (UniqueName: \"kubernetes.io/projected/2234b10f-b80c-458e-afef-276177edd3c4-kube-api-access-5kzrb\") pod \"2234b10f-b80c-458e-afef-276177edd3c4\" (UID: \"2234b10f-b80c-458e-afef-276177edd3c4\") " Jan 30 13:31:08.523048 kubelet[2764]: I0130 13:31:08.522979 2764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/2234b10f-b80c-458e-afef-276177edd3c4-hubble-tls\") pod \"2234b10f-b80c-458e-afef-276177edd3c4\" (UID: \"2234b10f-b80c-458e-afef-276177edd3c4\") " Jan 30 13:31:08.523048 kubelet[2764]: I0130 13:31:08.523022 2764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/2234b10f-b80c-458e-afef-276177edd3c4-cilium-cgroup\") pod \"2234b10f-b80c-458e-afef-276177edd3c4\" (UID: \"2234b10f-b80c-458e-afef-276177edd3c4\") " Jan 30 13:31:08.523633 kubelet[2764]: I0130 13:31:08.523081 2764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2234b10f-b80c-458e-afef-276177edd3c4-etc-cni-netd\") pod \"2234b10f-b80c-458e-afef-276177edd3c4\" (UID: \"2234b10f-b80c-458e-afef-276177edd3c4\") " Jan 30 13:31:08.523633 kubelet[2764]: I0130 13:31:08.523121 2764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r2hll\" (UniqueName: 
\"kubernetes.io/projected/4a95f356-dc60-4731-bce8-a3cb503f68e3-kube-api-access-r2hll\") pod \"4a95f356-dc60-4731-bce8-a3cb503f68e3\" (UID: \"4a95f356-dc60-4731-bce8-a3cb503f68e3\") " Jan 30 13:31:08.523633 kubelet[2764]: I0130 13:31:08.523158 2764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/2234b10f-b80c-458e-afef-276177edd3c4-cni-path\") pod \"2234b10f-b80c-458e-afef-276177edd3c4\" (UID: \"2234b10f-b80c-458e-afef-276177edd3c4\") " Jan 30 13:31:08.523633 kubelet[2764]: I0130 13:31:08.523198 2764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/2234b10f-b80c-458e-afef-276177edd3c4-clustermesh-secrets\") pod \"2234b10f-b80c-458e-afef-276177edd3c4\" (UID: \"2234b10f-b80c-458e-afef-276177edd3c4\") " Jan 30 13:31:08.523633 kubelet[2764]: I0130 13:31:08.523238 2764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4a95f356-dc60-4731-bce8-a3cb503f68e3-cilium-config-path\") pod \"4a95f356-dc60-4731-bce8-a3cb503f68e3\" (UID: \"4a95f356-dc60-4731-bce8-a3cb503f68e3\") " Jan 30 13:31:08.523633 kubelet[2764]: I0130 13:31:08.523275 2764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/2234b10f-b80c-458e-afef-276177edd3c4-host-proc-sys-kernel\") pod \"2234b10f-b80c-458e-afef-276177edd3c4\" (UID: \"2234b10f-b80c-458e-afef-276177edd3c4\") " Jan 30 13:31:08.523856 kubelet[2764]: I0130 13:31:08.523313 2764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2234b10f-b80c-458e-afef-276177edd3c4-cilium-config-path\") pod \"2234b10f-b80c-458e-afef-276177edd3c4\" (UID: \"2234b10f-b80c-458e-afef-276177edd3c4\") " Jan 30 13:31:08.523856 kubelet[2764]: 
I0130 13:31:08.523384 2764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/2234b10f-b80c-458e-afef-276177edd3c4-hostproc\") pod \"2234b10f-b80c-458e-afef-276177edd3c4\" (UID: \"2234b10f-b80c-458e-afef-276177edd3c4\") " Jan 30 13:31:08.523856 kubelet[2764]: I0130 13:31:08.523451 2764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2234b10f-b80c-458e-afef-276177edd3c4-lib-modules\") pod \"2234b10f-b80c-458e-afef-276177edd3c4\" (UID: \"2234b10f-b80c-458e-afef-276177edd3c4\") " Jan 30 13:31:08.523856 kubelet[2764]: I0130 13:31:08.523487 2764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/2234b10f-b80c-458e-afef-276177edd3c4-cilium-run\") pod \"2234b10f-b80c-458e-afef-276177edd3c4\" (UID: \"2234b10f-b80c-458e-afef-276177edd3c4\") " Jan 30 13:31:08.523856 kubelet[2764]: I0130 13:31:08.523520 2764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2234b10f-b80c-458e-afef-276177edd3c4-xtables-lock\") pod \"2234b10f-b80c-458e-afef-276177edd3c4\" (UID: \"2234b10f-b80c-458e-afef-276177edd3c4\") " Jan 30 13:31:08.523856 kubelet[2764]: I0130 13:31:08.523555 2764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/2234b10f-b80c-458e-afef-276177edd3c4-host-proc-sys-net\") pod \"2234b10f-b80c-458e-afef-276177edd3c4\" (UID: \"2234b10f-b80c-458e-afef-276177edd3c4\") " Jan 30 13:31:08.524118 kubelet[2764]: I0130 13:31:08.523591 2764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/2234b10f-b80c-458e-afef-276177edd3c4-bpf-maps\") pod \"2234b10f-b80c-458e-afef-276177edd3c4\" (UID: 
\"2234b10f-b80c-458e-afef-276177edd3c4\") " Jan 30 13:31:08.524118 kubelet[2764]: I0130 13:31:08.523703 2764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2234b10f-b80c-458e-afef-276177edd3c4-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "2234b10f-b80c-458e-afef-276177edd3c4" (UID: "2234b10f-b80c-458e-afef-276177edd3c4"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 13:31:08.530501 kubelet[2764]: I0130 13:31:08.529040 2764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2234b10f-b80c-458e-afef-276177edd3c4-kube-api-access-5kzrb" (OuterVolumeSpecName: "kube-api-access-5kzrb") pod "2234b10f-b80c-458e-afef-276177edd3c4" (UID: "2234b10f-b80c-458e-afef-276177edd3c4"). InnerVolumeSpecName "kube-api-access-5kzrb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:31:08.530501 kubelet[2764]: I0130 13:31:08.529110 2764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2234b10f-b80c-458e-afef-276177edd3c4-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "2234b10f-b80c-458e-afef-276177edd3c4" (UID: "2234b10f-b80c-458e-afef-276177edd3c4"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 13:31:08.530501 kubelet[2764]: I0130 13:31:08.529243 2764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4a95f356-dc60-4731-bce8-a3cb503f68e3-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "4a95f356-dc60-4731-bce8-a3cb503f68e3" (UID: "4a95f356-dc60-4731-bce8-a3cb503f68e3"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:31:08.531474 kubelet[2764]: I0130 13:31:08.531442 2764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2234b10f-b80c-458e-afef-276177edd3c4-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "2234b10f-b80c-458e-afef-276177edd3c4" (UID: "2234b10f-b80c-458e-afef-276177edd3c4"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:31:08.531616 kubelet[2764]: I0130 13:31:08.531602 2764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2234b10f-b80c-458e-afef-276177edd3c4-hostproc" (OuterVolumeSpecName: "hostproc") pod "2234b10f-b80c-458e-afef-276177edd3c4" (UID: "2234b10f-b80c-458e-afef-276177edd3c4"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 13:31:08.531697 kubelet[2764]: I0130 13:31:08.531685 2764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2234b10f-b80c-458e-afef-276177edd3c4-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "2234b10f-b80c-458e-afef-276177edd3c4" (UID: "2234b10f-b80c-458e-afef-276177edd3c4"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 13:31:08.531750 kubelet[2764]: I0130 13:31:08.531695 2764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2234b10f-b80c-458e-afef-276177edd3c4-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "2234b10f-b80c-458e-afef-276177edd3c4" (UID: "2234b10f-b80c-458e-afef-276177edd3c4"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:31:08.532154 kubelet[2764]: I0130 13:31:08.531718 2764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2234b10f-b80c-458e-afef-276177edd3c4-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "2234b10f-b80c-458e-afef-276177edd3c4" (UID: "2234b10f-b80c-458e-afef-276177edd3c4"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 13:31:08.532154 kubelet[2764]: I0130 13:31:08.531730 2764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2234b10f-b80c-458e-afef-276177edd3c4-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "2234b10f-b80c-458e-afef-276177edd3c4" (UID: "2234b10f-b80c-458e-afef-276177edd3c4"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 13:31:08.532154 kubelet[2764]: I0130 13:31:08.531821 2764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2234b10f-b80c-458e-afef-276177edd3c4-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "2234b10f-b80c-458e-afef-276177edd3c4" (UID: "2234b10f-b80c-458e-afef-276177edd3c4"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 13:31:08.532154 kubelet[2764]: I0130 13:31:08.531838 2764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2234b10f-b80c-458e-afef-276177edd3c4-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "2234b10f-b80c-458e-afef-276177edd3c4" (UID: "2234b10f-b80c-458e-afef-276177edd3c4"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 13:31:08.532154 kubelet[2764]: I0130 13:31:08.531853 2764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2234b10f-b80c-458e-afef-276177edd3c4-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "2234b10f-b80c-458e-afef-276177edd3c4" (UID: "2234b10f-b80c-458e-afef-276177edd3c4"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 13:31:08.532348 kubelet[2764]: I0130 13:31:08.531868 2764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2234b10f-b80c-458e-afef-276177edd3c4-cni-path" (OuterVolumeSpecName: "cni-path") pod "2234b10f-b80c-458e-afef-276177edd3c4" (UID: "2234b10f-b80c-458e-afef-276177edd3c4"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 13:31:08.534029 kubelet[2764]: I0130 13:31:08.533977 2764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4a95f356-dc60-4731-bce8-a3cb503f68e3-kube-api-access-r2hll" (OuterVolumeSpecName: "kube-api-access-r2hll") pod "4a95f356-dc60-4731-bce8-a3cb503f68e3" (UID: "4a95f356-dc60-4731-bce8-a3cb503f68e3"). InnerVolumeSpecName "kube-api-access-r2hll". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:31:08.534800 kubelet[2764]: I0130 13:31:08.534772 2764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2234b10f-b80c-458e-afef-276177edd3c4-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "2234b10f-b80c-458e-afef-276177edd3c4" (UID: "2234b10f-b80c-458e-afef-276177edd3c4"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:31:08.624393 kubelet[2764]: I0130 13:31:08.624346 2764 reconciler_common.go:288] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/2234b10f-b80c-458e-afef-276177edd3c4-cilium-run\") on node \"ci-4186-1-0-7-1c3f91851a\" DevicePath \"\"" Jan 30 13:31:08.624632 kubelet[2764]: I0130 13:31:08.624613 2764 reconciler_common.go:288] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2234b10f-b80c-458e-afef-276177edd3c4-xtables-lock\") on node \"ci-4186-1-0-7-1c3f91851a\" DevicePath \"\"" Jan 30 13:31:08.624964 kubelet[2764]: I0130 13:31:08.624710 2764 reconciler_common.go:288] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/2234b10f-b80c-458e-afef-276177edd3c4-hostproc\") on node \"ci-4186-1-0-7-1c3f91851a\" DevicePath \"\"" Jan 30 13:31:08.624964 kubelet[2764]: I0130 13:31:08.624731 2764 reconciler_common.go:288] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2234b10f-b80c-458e-afef-276177edd3c4-lib-modules\") on node \"ci-4186-1-0-7-1c3f91851a\" DevicePath \"\"" Jan 30 13:31:08.624964 kubelet[2764]: I0130 13:31:08.624746 2764 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/2234b10f-b80c-458e-afef-276177edd3c4-host-proc-sys-net\") on node \"ci-4186-1-0-7-1c3f91851a\" DevicePath \"\"" Jan 30 13:31:08.624964 kubelet[2764]: I0130 13:31:08.624762 2764 reconciler_common.go:288] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/2234b10f-b80c-458e-afef-276177edd3c4-bpf-maps\") on node \"ci-4186-1-0-7-1c3f91851a\" DevicePath \"\"" Jan 30 13:31:08.624964 kubelet[2764]: I0130 13:31:08.624777 2764 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-5kzrb\" (UniqueName: \"kubernetes.io/projected/2234b10f-b80c-458e-afef-276177edd3c4-kube-api-access-5kzrb\") on node 
\"ci-4186-1-0-7-1c3f91851a\" DevicePath \"\"" Jan 30 13:31:08.624964 kubelet[2764]: I0130 13:31:08.624796 2764 reconciler_common.go:288] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2234b10f-b80c-458e-afef-276177edd3c4-etc-cni-netd\") on node \"ci-4186-1-0-7-1c3f91851a\" DevicePath \"\"" Jan 30 13:31:08.624964 kubelet[2764]: I0130 13:31:08.624812 2764 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-r2hll\" (UniqueName: \"kubernetes.io/projected/4a95f356-dc60-4731-bce8-a3cb503f68e3-kube-api-access-r2hll\") on node \"ci-4186-1-0-7-1c3f91851a\" DevicePath \"\"" Jan 30 13:31:08.624964 kubelet[2764]: I0130 13:31:08.624825 2764 reconciler_common.go:288] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/2234b10f-b80c-458e-afef-276177edd3c4-hubble-tls\") on node \"ci-4186-1-0-7-1c3f91851a\" DevicePath \"\"" Jan 30 13:31:08.625304 kubelet[2764]: I0130 13:31:08.624839 2764 reconciler_common.go:288] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/2234b10f-b80c-458e-afef-276177edd3c4-cilium-cgroup\") on node \"ci-4186-1-0-7-1c3f91851a\" DevicePath \"\"" Jan 30 13:31:08.625304 kubelet[2764]: I0130 13:31:08.624854 2764 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4a95f356-dc60-4731-bce8-a3cb503f68e3-cilium-config-path\") on node \"ci-4186-1-0-7-1c3f91851a\" DevicePath \"\"" Jan 30 13:31:08.625304 kubelet[2764]: I0130 13:31:08.624867 2764 reconciler_common.go:288] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/2234b10f-b80c-458e-afef-276177edd3c4-cni-path\") on node \"ci-4186-1-0-7-1c3f91851a\" DevicePath \"\"" Jan 30 13:31:08.625304 kubelet[2764]: I0130 13:31:08.624883 2764 reconciler_common.go:288] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/2234b10f-b80c-458e-afef-276177edd3c4-clustermesh-secrets\") on 
node \"ci-4186-1-0-7-1c3f91851a\" DevicePath \"\"" Jan 30 13:31:08.625304 kubelet[2764]: I0130 13:31:08.624915 2764 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/2234b10f-b80c-458e-afef-276177edd3c4-host-proc-sys-kernel\") on node \"ci-4186-1-0-7-1c3f91851a\" DevicePath \"\"" Jan 30 13:31:08.625304 kubelet[2764]: I0130 13:31:08.624936 2764 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2234b10f-b80c-458e-afef-276177edd3c4-cilium-config-path\") on node \"ci-4186-1-0-7-1c3f91851a\" DevicePath \"\"" Jan 30 13:31:09.147288 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2e8c8d3c69081040fb49b921727c36c205ce8645de1b522575e794156646b0bf-rootfs.mount: Deactivated successfully. Jan 30 13:31:09.147472 systemd[1]: var-lib-kubelet-pods-4a95f356\x2ddc60\x2d4731\x2dbce8\x2da3cb503f68e3-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dr2hll.mount: Deactivated successfully. Jan 30 13:31:09.147586 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e65a68fcfe7e579d8f938022aed729e52c1fbb3dc01ccb190e521918416caf78-rootfs.mount: Deactivated successfully. Jan 30 13:31:09.147684 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e65a68fcfe7e579d8f938022aed729e52c1fbb3dc01ccb190e521918416caf78-shm.mount: Deactivated successfully. Jan 30 13:31:09.147781 systemd[1]: var-lib-kubelet-pods-2234b10f\x2db80c\x2d458e\x2dafef\x2d276177edd3c4-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d5kzrb.mount: Deactivated successfully. Jan 30 13:31:09.147875 systemd[1]: var-lib-kubelet-pods-2234b10f\x2db80c\x2d458e\x2dafef\x2d276177edd3c4-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. 
Jan 30 13:31:09.148065 systemd[1]: var-lib-kubelet-pods-2234b10f\x2db80c\x2d458e\x2dafef\x2d276177edd3c4-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jan 30 13:31:09.353333 kubelet[2764]: I0130 13:31:09.353224 2764 scope.go:117] "RemoveContainer" containerID="d1bb1a4794d4d0f14b8db6b5f67e2dd2789a23eba44097d8c3fed4c91b1f1b23" Jan 30 13:31:09.355444 containerd[1480]: time="2025-01-30T13:31:09.355212345Z" level=info msg="RemoveContainer for \"d1bb1a4794d4d0f14b8db6b5f67e2dd2789a23eba44097d8c3fed4c91b1f1b23\"" Jan 30 13:31:09.362853 systemd[1]: Removed slice kubepods-besteffort-pod4a95f356_dc60_4731_bce8_a3cb503f68e3.slice - libcontainer container kubepods-besteffort-pod4a95f356_dc60_4731_bce8_a3cb503f68e3.slice. Jan 30 13:31:09.366666 kubelet[2764]: I0130 13:31:09.365243 2764 scope.go:117] "RemoveContainer" containerID="28f6211bedfcc99ffc6ba118428855a75ee2ae06cb237f10587ce754bd41e8cb" Jan 30 13:31:09.366713 containerd[1480]: time="2025-01-30T13:31:09.363491623Z" level=info msg="RemoveContainer for \"d1bb1a4794d4d0f14b8db6b5f67e2dd2789a23eba44097d8c3fed4c91b1f1b23\" returns successfully" Jan 30 13:31:09.366713 containerd[1480]: time="2025-01-30T13:31:09.366359652Z" level=info msg="RemoveContainer for \"28f6211bedfcc99ffc6ba118428855a75ee2ae06cb237f10587ce754bd41e8cb\"" Jan 30 13:31:09.373321 systemd[1]: Removed slice kubepods-burstable-pod2234b10f_b80c_458e_afef_276177edd3c4.slice - libcontainer container kubepods-burstable-pod2234b10f_b80c_458e_afef_276177edd3c4.slice. Jan 30 13:31:09.373798 systemd[1]: kubepods-burstable-pod2234b10f_b80c_458e_afef_276177edd3c4.slice: Consumed 8.051s CPU time. 
Jan 30 13:31:09.375360 containerd[1480]: time="2025-01-30T13:31:09.375318506Z" level=info msg="RemoveContainer for \"28f6211bedfcc99ffc6ba118428855a75ee2ae06cb237f10587ce754bd41e8cb\" returns successfully" Jan 30 13:31:09.375927 kubelet[2764]: I0130 13:31:09.375839 2764 scope.go:117] "RemoveContainer" containerID="8510a7a99f7097b3f87610072c703121260aaa12d35a8e63371c58c5277f9069" Jan 30 13:31:09.378185 containerd[1480]: time="2025-01-30T13:31:09.378018331Z" level=info msg="RemoveContainer for \"8510a7a99f7097b3f87610072c703121260aaa12d35a8e63371c58c5277f9069\"" Jan 30 13:31:09.382064 containerd[1480]: time="2025-01-30T13:31:09.381944505Z" level=info msg="RemoveContainer for \"8510a7a99f7097b3f87610072c703121260aaa12d35a8e63371c58c5277f9069\" returns successfully" Jan 30 13:31:09.382795 kubelet[2764]: I0130 13:31:09.382473 2764 scope.go:117] "RemoveContainer" containerID="5060c9fc96c6139374fd30dfd4edacdcd068040c76c34af85fc7d492a4af3360" Jan 30 13:31:09.384356 containerd[1480]: time="2025-01-30T13:31:09.384323322Z" level=info msg="RemoveContainer for \"5060c9fc96c6139374fd30dfd4edacdcd068040c76c34af85fc7d492a4af3360\"" Jan 30 13:31:09.389694 containerd[1480]: time="2025-01-30T13:31:09.389523487Z" level=info msg="RemoveContainer for \"5060c9fc96c6139374fd30dfd4edacdcd068040c76c34af85fc7d492a4af3360\" returns successfully" Jan 30 13:31:09.393389 kubelet[2764]: I0130 13:31:09.393349 2764 scope.go:117] "RemoveContainer" containerID="21a970151eda46005c0777aeec0e5cf52522aa702691a5c2817bae9107c7b7f0" Jan 30 13:31:09.400305 containerd[1480]: time="2025-01-30T13:31:09.400018858Z" level=info msg="RemoveContainer for \"21a970151eda46005c0777aeec0e5cf52522aa702691a5c2817bae9107c7b7f0\"" Jan 30 13:31:09.407126 containerd[1480]: time="2025-01-30T13:31:09.406963704Z" level=info msg="RemoveContainer for \"21a970151eda46005c0777aeec0e5cf52522aa702691a5c2817bae9107c7b7f0\" returns successfully" Jan 30 13:31:09.407325 kubelet[2764]: I0130 13:31:09.407280 2764 scope.go:117] 
"RemoveContainer" containerID="db5e2b8bf3c726bda9b2e094080907cd52c311bb479ecc46a0e205f51acf0f40" Jan 30 13:31:09.408804 containerd[1480]: time="2025-01-30T13:31:09.408656025Z" level=info msg="RemoveContainer for \"db5e2b8bf3c726bda9b2e094080907cd52c311bb479ecc46a0e205f51acf0f40\"" Jan 30 13:31:09.414446 containerd[1480]: time="2025-01-30T13:31:09.412779764Z" level=info msg="RemoveContainer for \"db5e2b8bf3c726bda9b2e094080907cd52c311bb479ecc46a0e205f51acf0f40\" returns successfully" Jan 30 13:31:10.215045 sshd[4348]: Connection closed by 139.178.68.195 port 45282 Jan 30 13:31:10.216111 sshd-session[4346]: pam_unix(sshd:session): session closed for user core Jan 30 13:31:10.221889 systemd[1]: sshd@19-5.75.240.180:22-139.178.68.195:45282.service: Deactivated successfully. Jan 30 13:31:10.221977 systemd-logind[1463]: Session 20 logged out. Waiting for processes to exit. Jan 30 13:31:10.224289 systemd[1]: session-20.scope: Deactivated successfully. Jan 30 13:31:10.225799 systemd[1]: session-20.scope: Consumed 1.499s CPU time. Jan 30 13:31:10.226893 systemd-logind[1463]: Removed session 20. Jan 30 13:31:10.395715 systemd[1]: Started sshd@20-5.75.240.180:22-139.178.68.195:53418.service - OpenSSH per-connection server daemon (139.178.68.195:53418). Jan 30 13:31:10.588065 kubelet[2764]: E0130 13:31:10.587985 2764 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 30 13:31:11.384729 sshd[4512]: Accepted publickey for core from 139.178.68.195 port 53418 ssh2: RSA SHA256:RAqiXcD7auv4NtIWZl6x8O0m1t6BnLWhbotdWAXUAIk Jan 30 13:31:11.385406 sshd-session[4512]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:31:11.394679 systemd-logind[1463]: New session 21 of user core. Jan 30 13:31:11.399695 systemd[1]: Started session-21.scope - Session 21 of User core. 
Jan 30 13:31:11.399968 kubelet[2764]: I0130 13:31:11.398129 2764 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2234b10f-b80c-458e-afef-276177edd3c4" path="/var/lib/kubelet/pods/2234b10f-b80c-458e-afef-276177edd3c4/volumes" Jan 30 13:31:11.399968 kubelet[2764]: I0130 13:31:11.399235 2764 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4a95f356-dc60-4731-bce8-a3cb503f68e3" path="/var/lib/kubelet/pods/4a95f356-dc60-4731-bce8-a3cb503f68e3/volumes" Jan 30 13:31:12.931680 kubelet[2764]: I0130 13:31:12.931578 2764 setters.go:600] "Node became not ready" node="ci-4186-1-0-7-1c3f91851a" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-01-30T13:31:12Z","lastTransitionTime":"2025-01-30T13:31:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Jan 30 13:31:13.379490 kubelet[2764]: E0130 13:31:13.377464 2764 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="2234b10f-b80c-458e-afef-276177edd3c4" containerName="clean-cilium-state" Jan 30 13:31:13.379490 kubelet[2764]: E0130 13:31:13.377496 2764 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="2234b10f-b80c-458e-afef-276177edd3c4" containerName="apply-sysctl-overwrites" Jan 30 13:31:13.379490 kubelet[2764]: E0130 13:31:13.377504 2764 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="2234b10f-b80c-458e-afef-276177edd3c4" containerName="mount-bpf-fs" Jan 30 13:31:13.379490 kubelet[2764]: E0130 13:31:13.377512 2764 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="4a95f356-dc60-4731-bce8-a3cb503f68e3" containerName="cilium-operator" Jan 30 13:31:13.379490 kubelet[2764]: E0130 13:31:13.377518 2764 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="2234b10f-b80c-458e-afef-276177edd3c4" containerName="cilium-agent" Jan 30 13:31:13.379490 
kubelet[2764]: E0130 13:31:13.377525 2764 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="2234b10f-b80c-458e-afef-276177edd3c4" containerName="mount-cgroup" Jan 30 13:31:13.379490 kubelet[2764]: I0130 13:31:13.377551 2764 memory_manager.go:354] "RemoveStaleState removing state" podUID="4a95f356-dc60-4731-bce8-a3cb503f68e3" containerName="cilium-operator" Jan 30 13:31:13.379490 kubelet[2764]: I0130 13:31:13.377557 2764 memory_manager.go:354] "RemoveStaleState removing state" podUID="2234b10f-b80c-458e-afef-276177edd3c4" containerName="cilium-agent" Jan 30 13:31:13.391715 systemd[1]: Created slice kubepods-burstable-pod28bdebac_885a_42b8_a51f_6042b45b4bfc.slice - libcontainer container kubepods-burstable-pod28bdebac_885a_42b8_a51f_6042b45b4bfc.slice. Jan 30 13:31:13.456232 kubelet[2764]: I0130 13:31:13.456186 2764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/28bdebac-885a-42b8-a51f-6042b45b4bfc-cilium-config-path\") pod \"cilium-d2q42\" (UID: \"28bdebac-885a-42b8-a51f-6042b45b4bfc\") " pod="kube-system/cilium-d2q42" Jan 30 13:31:13.456232 kubelet[2764]: I0130 13:31:13.456236 2764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/28bdebac-885a-42b8-a51f-6042b45b4bfc-cni-path\") pod \"cilium-d2q42\" (UID: \"28bdebac-885a-42b8-a51f-6042b45b4bfc\") " pod="kube-system/cilium-d2q42" Jan 30 13:31:13.456232 kubelet[2764]: I0130 13:31:13.456256 2764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/28bdebac-885a-42b8-a51f-6042b45b4bfc-clustermesh-secrets\") pod \"cilium-d2q42\" (UID: \"28bdebac-885a-42b8-a51f-6042b45b4bfc\") " pod="kube-system/cilium-d2q42" Jan 30 13:31:13.456232 kubelet[2764]: I0130 13:31:13.456277 2764 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/28bdebac-885a-42b8-a51f-6042b45b4bfc-etc-cni-netd\") pod \"cilium-d2q42\" (UID: \"28bdebac-885a-42b8-a51f-6042b45b4bfc\") " pod="kube-system/cilium-d2q42" Jan 30 13:31:13.456232 kubelet[2764]: I0130 13:31:13.456297 2764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/28bdebac-885a-42b8-a51f-6042b45b4bfc-xtables-lock\") pod \"cilium-d2q42\" (UID: \"28bdebac-885a-42b8-a51f-6042b45b4bfc\") " pod="kube-system/cilium-d2q42" Jan 30 13:31:13.458756 kubelet[2764]: I0130 13:31:13.456344 2764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/28bdebac-885a-42b8-a51f-6042b45b4bfc-bpf-maps\") pod \"cilium-d2q42\" (UID: \"28bdebac-885a-42b8-a51f-6042b45b4bfc\") " pod="kube-system/cilium-d2q42" Jan 30 13:31:13.458756 kubelet[2764]: I0130 13:31:13.456362 2764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/28bdebac-885a-42b8-a51f-6042b45b4bfc-hostproc\") pod \"cilium-d2q42\" (UID: \"28bdebac-885a-42b8-a51f-6042b45b4bfc\") " pod="kube-system/cilium-d2q42" Jan 30 13:31:13.458756 kubelet[2764]: I0130 13:31:13.456377 2764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/28bdebac-885a-42b8-a51f-6042b45b4bfc-cilium-ipsec-secrets\") pod \"cilium-d2q42\" (UID: \"28bdebac-885a-42b8-a51f-6042b45b4bfc\") " pod="kube-system/cilium-d2q42" Jan 30 13:31:13.458756 kubelet[2764]: I0130 13:31:13.456438 2764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/28bdebac-885a-42b8-a51f-6042b45b4bfc-lib-modules\") pod \"cilium-d2q42\" (UID: \"28bdebac-885a-42b8-a51f-6042b45b4bfc\") " pod="kube-system/cilium-d2q42" Jan 30 13:31:13.458756 kubelet[2764]: I0130 13:31:13.456456 2764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/28bdebac-885a-42b8-a51f-6042b45b4bfc-host-proc-sys-net\") pod \"cilium-d2q42\" (UID: \"28bdebac-885a-42b8-a51f-6042b45b4bfc\") " pod="kube-system/cilium-d2q42" Jan 30 13:31:13.458756 kubelet[2764]: I0130 13:31:13.456472 2764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s98gj\" (UniqueName: \"kubernetes.io/projected/28bdebac-885a-42b8-a51f-6042b45b4bfc-kube-api-access-s98gj\") pod \"cilium-d2q42\" (UID: \"28bdebac-885a-42b8-a51f-6042b45b4bfc\") " pod="kube-system/cilium-d2q42" Jan 30 13:31:13.458977 kubelet[2764]: I0130 13:31:13.456489 2764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/28bdebac-885a-42b8-a51f-6042b45b4bfc-cilium-cgroup\") pod \"cilium-d2q42\" (UID: \"28bdebac-885a-42b8-a51f-6042b45b4bfc\") " pod="kube-system/cilium-d2q42" Jan 30 13:31:13.458977 kubelet[2764]: I0130 13:31:13.456524 2764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/28bdebac-885a-42b8-a51f-6042b45b4bfc-host-proc-sys-kernel\") pod \"cilium-d2q42\" (UID: \"28bdebac-885a-42b8-a51f-6042b45b4bfc\") " pod="kube-system/cilium-d2q42" Jan 30 13:31:13.458977 kubelet[2764]: I0130 13:31:13.456607 2764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/28bdebac-885a-42b8-a51f-6042b45b4bfc-hubble-tls\") pod \"cilium-d2q42\" 
(UID: \"28bdebac-885a-42b8-a51f-6042b45b4bfc\") " pod="kube-system/cilium-d2q42" Jan 30 13:31:13.458977 kubelet[2764]: I0130 13:31:13.456665 2764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/28bdebac-885a-42b8-a51f-6042b45b4bfc-cilium-run\") pod \"cilium-d2q42\" (UID: \"28bdebac-885a-42b8-a51f-6042b45b4bfc\") " pod="kube-system/cilium-d2q42" Jan 30 13:31:13.558112 sshd[4514]: Connection closed by 139.178.68.195 port 53418 Jan 30 13:31:13.558927 sshd-session[4512]: pam_unix(sshd:session): session closed for user core Jan 30 13:31:13.590541 systemd[1]: sshd@20-5.75.240.180:22-139.178.68.195:53418.service: Deactivated successfully. Jan 30 13:31:13.594686 systemd[1]: session-21.scope: Deactivated successfully. Jan 30 13:31:13.594852 systemd[1]: session-21.scope: Consumed 1.363s CPU time. Jan 30 13:31:13.596742 systemd-logind[1463]: Session 21 logged out. Waiting for processes to exit. Jan 30 13:31:13.603075 systemd-logind[1463]: Removed session 21. Jan 30 13:31:13.697366 containerd[1480]: time="2025-01-30T13:31:13.697242544Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-d2q42,Uid:28bdebac-885a-42b8-a51f-6042b45b4bfc,Namespace:kube-system,Attempt:0,}" Jan 30 13:31:13.724810 containerd[1480]: time="2025-01-30T13:31:13.724407317Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:31:13.724810 containerd[1480]: time="2025-01-30T13:31:13.724517480Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:31:13.724810 containerd[1480]: time="2025-01-30T13:31:13.724529640Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:31:13.727798 containerd[1480]: time="2025-01-30T13:31:13.727657275Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:31:13.736142 systemd[1]: Started sshd@21-5.75.240.180:22-139.178.68.195:53426.service - OpenSSH per-connection server daemon (139.178.68.195:53426). Jan 30 13:31:13.751635 systemd[1]: Started cri-containerd-b9f7d19b93ff90c2ee2baae17ce519b4d92916b71565e91d96286bc1d7841ca9.scope - libcontainer container b9f7d19b93ff90c2ee2baae17ce519b4d92916b71565e91d96286bc1d7841ca9. Jan 30 13:31:13.777318 containerd[1480]: time="2025-01-30T13:31:13.777277468Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-d2q42,Uid:28bdebac-885a-42b8-a51f-6042b45b4bfc,Namespace:kube-system,Attempt:0,} returns sandbox id \"b9f7d19b93ff90c2ee2baae17ce519b4d92916b71565e91d96286bc1d7841ca9\"" Jan 30 13:31:13.781784 containerd[1480]: time="2025-01-30T13:31:13.781552491Z" level=info msg="CreateContainer within sandbox \"b9f7d19b93ff90c2ee2baae17ce519b4d92916b71565e91d96286bc1d7841ca9\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 30 13:31:13.793726 containerd[1480]: time="2025-01-30T13:31:13.793680262Z" level=info msg="CreateContainer within sandbox \"b9f7d19b93ff90c2ee2baae17ce519b4d92916b71565e91d96286bc1d7841ca9\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"64316287907146f5fc5c815f69a6755ea34a9d692235cc5ef26e42b9dcdba376\"" Jan 30 13:31:13.795535 containerd[1480]: time="2025-01-30T13:31:13.794649685Z" level=info msg="StartContainer for \"64316287907146f5fc5c815f69a6755ea34a9d692235cc5ef26e42b9dcdba376\"" Jan 30 13:31:13.819137 systemd[1]: Started cri-containerd-64316287907146f5fc5c815f69a6755ea34a9d692235cc5ef26e42b9dcdba376.scope - libcontainer container 64316287907146f5fc5c815f69a6755ea34a9d692235cc5ef26e42b9dcdba376. 
Jan 30 13:31:13.847362 containerd[1480]: time="2025-01-30T13:31:13.847282470Z" level=info msg="StartContainer for \"64316287907146f5fc5c815f69a6755ea34a9d692235cc5ef26e42b9dcdba376\" returns successfully" Jan 30 13:31:13.860922 systemd[1]: cri-containerd-64316287907146f5fc5c815f69a6755ea34a9d692235cc5ef26e42b9dcdba376.scope: Deactivated successfully. Jan 30 13:31:13.892284 containerd[1480]: time="2025-01-30T13:31:13.892210390Z" level=info msg="shim disconnected" id=64316287907146f5fc5c815f69a6755ea34a9d692235cc5ef26e42b9dcdba376 namespace=k8s.io Jan 30 13:31:13.892284 containerd[1480]: time="2025-01-30T13:31:13.892276192Z" level=warning msg="cleaning up after shim disconnected" id=64316287907146f5fc5c815f69a6755ea34a9d692235cc5ef26e42b9dcdba376 namespace=k8s.io Jan 30 13:31:13.892284 containerd[1480]: time="2025-01-30T13:31:13.892286232Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 13:31:13.905472 containerd[1480]: time="2025-01-30T13:31:13.904669570Z" level=warning msg="cleanup warnings time=\"2025-01-30T13:31:13Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jan 30 13:31:14.394046 containerd[1480]: time="2025-01-30T13:31:14.393987059Z" level=info msg="CreateContainer within sandbox \"b9f7d19b93ff90c2ee2baae17ce519b4d92916b71565e91d96286bc1d7841ca9\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 30 13:31:14.408202 containerd[1480]: time="2025-01-30T13:31:14.408076158Z" level=info msg="CreateContainer within sandbox \"b9f7d19b93ff90c2ee2baae17ce519b4d92916b71565e91d96286bc1d7841ca9\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"f160478e6f750f97a38d6a8a0d842816b7b83b0c9d16eba3396297d6d0a203c4\"" Jan 30 13:31:14.409223 containerd[1480]: time="2025-01-30T13:31:14.409187145Z" level=info msg="StartContainer for 
\"f160478e6f750f97a38d6a8a0d842816b7b83b0c9d16eba3396297d6d0a203c4\"" Jan 30 13:31:14.448764 systemd[1]: Started cri-containerd-f160478e6f750f97a38d6a8a0d842816b7b83b0c9d16eba3396297d6d0a203c4.scope - libcontainer container f160478e6f750f97a38d6a8a0d842816b7b83b0c9d16eba3396297d6d0a203c4. Jan 30 13:31:14.481804 containerd[1480]: time="2025-01-30T13:31:14.481737330Z" level=info msg="StartContainer for \"f160478e6f750f97a38d6a8a0d842816b7b83b0c9d16eba3396297d6d0a203c4\" returns successfully" Jan 30 13:31:14.488149 systemd[1]: cri-containerd-f160478e6f750f97a38d6a8a0d842816b7b83b0c9d16eba3396297d6d0a203c4.scope: Deactivated successfully. Jan 30 13:31:14.512928 containerd[1480]: time="2025-01-30T13:31:14.512829798Z" level=info msg="shim disconnected" id=f160478e6f750f97a38d6a8a0d842816b7b83b0c9d16eba3396297d6d0a203c4 namespace=k8s.io Jan 30 13:31:14.512928 containerd[1480]: time="2025-01-30T13:31:14.512924360Z" level=warning msg="cleaning up after shim disconnected" id=f160478e6f750f97a38d6a8a0d842816b7b83b0c9d16eba3396297d6d0a203c4 namespace=k8s.io Jan 30 13:31:14.513185 containerd[1480]: time="2025-01-30T13:31:14.512985482Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 13:31:14.730776 sshd[4546]: Accepted publickey for core from 139.178.68.195 port 53426 ssh2: RSA SHA256:RAqiXcD7auv4NtIWZl6x8O0m1t6BnLWhbotdWAXUAIk Jan 30 13:31:14.734129 sshd-session[4546]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:31:14.741583 systemd-logind[1463]: New session 22 of user core. Jan 30 13:31:14.744634 systemd[1]: Started session-22.scope - Session 22 of User core. 
Jan 30 13:31:15.398647 containerd[1480]: time="2025-01-30T13:31:15.398532192Z" level=info msg="CreateContainer within sandbox \"b9f7d19b93ff90c2ee2baae17ce519b4d92916b71565e91d96286bc1d7841ca9\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jan 30 13:31:15.414590 sshd[4699]: Connection closed by 139.178.68.195 port 53426 Jan 30 13:31:15.418284 sshd-session[4546]: pam_unix(sshd:session): session closed for user core Jan 30 13:31:15.420608 containerd[1480]: time="2025-01-30T13:31:15.420564843Z" level=info msg="CreateContainer within sandbox \"b9f7d19b93ff90c2ee2baae17ce519b4d92916b71565e91d96286bc1d7841ca9\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"b6533711bafa94b46f4fca78a665edbdf5568f0f66ee820f8f677d14f4f1ebc9\"" Jan 30 13:31:15.422920 containerd[1480]: time="2025-01-30T13:31:15.422879098Z" level=info msg="StartContainer for \"b6533711bafa94b46f4fca78a665edbdf5568f0f66ee820f8f677d14f4f1ebc9\"" Jan 30 13:31:15.425658 systemd[1]: sshd@21-5.75.240.180:22-139.178.68.195:53426.service: Deactivated successfully. Jan 30 13:31:15.428425 systemd[1]: session-22.scope: Deactivated successfully. Jan 30 13:31:15.431207 systemd-logind[1463]: Session 22 logged out. Waiting for processes to exit. Jan 30 13:31:15.433081 systemd-logind[1463]: Removed session 22. Jan 30 13:31:15.468915 systemd[1]: Started cri-containerd-b6533711bafa94b46f4fca78a665edbdf5568f0f66ee820f8f677d14f4f1ebc9.scope - libcontainer container b6533711bafa94b46f4fca78a665edbdf5568f0f66ee820f8f677d14f4f1ebc9. Jan 30 13:31:15.514201 systemd[1]: cri-containerd-b6533711bafa94b46f4fca78a665edbdf5568f0f66ee820f8f677d14f4f1ebc9.scope: Deactivated successfully. 
Jan 30 13:31:15.515513 containerd[1480]: time="2025-01-30T13:31:15.515474088Z" level=info msg="StartContainer for \"b6533711bafa94b46f4fca78a665edbdf5568f0f66ee820f8f677d14f4f1ebc9\" returns successfully" Jan 30 13:31:15.551035 containerd[1480]: time="2025-01-30T13:31:15.550794298Z" level=info msg="shim disconnected" id=b6533711bafa94b46f4fca78a665edbdf5568f0f66ee820f8f677d14f4f1ebc9 namespace=k8s.io Jan 30 13:31:15.551035 containerd[1480]: time="2025-01-30T13:31:15.550845779Z" level=warning msg="cleaning up after shim disconnected" id=b6533711bafa94b46f4fca78a665edbdf5568f0f66ee820f8f677d14f4f1ebc9 namespace=k8s.io Jan 30 13:31:15.551035 containerd[1480]: time="2025-01-30T13:31:15.550854259Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 13:31:15.567463 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b6533711bafa94b46f4fca78a665edbdf5568f0f66ee820f8f677d14f4f1ebc9-rootfs.mount: Deactivated successfully. Jan 30 13:31:15.589729 kubelet[2764]: E0130 13:31:15.589668 2764 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 30 13:31:15.590811 systemd[1]: Started sshd@22-5.75.240.180:22-139.178.68.195:44134.service - OpenSSH per-connection server daemon (139.178.68.195:44134). 
Jan 30 13:31:16.404467 containerd[1480]: time="2025-01-30T13:31:16.404272133Z" level=info msg="CreateContainer within sandbox \"b9f7d19b93ff90c2ee2baae17ce519b4d92916b71565e91d96286bc1d7841ca9\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jan 30 13:31:16.435497 containerd[1480]: time="2025-01-30T13:31:16.434152493Z" level=info msg="CreateContainer within sandbox \"b9f7d19b93ff90c2ee2baae17ce519b4d92916b71565e91d96286bc1d7841ca9\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"bb7c5da57a927487edad6ecfb627269866b03bdc7278267dc4a51619dfde26f7\"" Jan 30 13:31:16.437485 containerd[1480]: time="2025-01-30T13:31:16.435790973Z" level=info msg="StartContainer for \"bb7c5da57a927487edad6ecfb627269866b03bdc7278267dc4a51619dfde26f7\"" Jan 30 13:31:16.469677 systemd[1]: Started cri-containerd-bb7c5da57a927487edad6ecfb627269866b03bdc7278267dc4a51619dfde26f7.scope - libcontainer container bb7c5da57a927487edad6ecfb627269866b03bdc7278267dc4a51619dfde26f7. Jan 30 13:31:16.502836 systemd[1]: cri-containerd-bb7c5da57a927487edad6ecfb627269866b03bdc7278267dc4a51619dfde26f7.scope: Deactivated successfully. 
Jan 30 13:31:16.507932 containerd[1480]: time="2025-01-30T13:31:16.507402778Z" level=info msg="StartContainer for \"bb7c5da57a927487edad6ecfb627269866b03bdc7278267dc4a51619dfde26f7\" returns successfully" Jan 30 13:31:16.530516 containerd[1480]: time="2025-01-30T13:31:16.530206807Z" level=info msg="shim disconnected" id=bb7c5da57a927487edad6ecfb627269866b03bdc7278267dc4a51619dfde26f7 namespace=k8s.io Jan 30 13:31:16.530516 containerd[1480]: time="2025-01-30T13:31:16.530300330Z" level=warning msg="cleaning up after shim disconnected" id=bb7c5da57a927487edad6ecfb627269866b03bdc7278267dc4a51619dfde26f7 namespace=k8s.io Jan 30 13:31:16.530516 containerd[1480]: time="2025-01-30T13:31:16.530313050Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 13:31:16.567901 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bb7c5da57a927487edad6ecfb627269866b03bdc7278267dc4a51619dfde26f7-rootfs.mount: Deactivated successfully. Jan 30 13:31:16.577518 sshd[4764]: Accepted publickey for core from 139.178.68.195 port 44134 ssh2: RSA SHA256:RAqiXcD7auv4NtIWZl6x8O0m1t6BnLWhbotdWAXUAIk Jan 30 13:31:16.579405 sshd-session[4764]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:31:16.585561 systemd-logind[1463]: New session 23 of user core. Jan 30 13:31:16.589629 systemd[1]: Started session-23.scope - Session 23 of User core. 
Jan 30 13:31:17.410577 containerd[1480]: time="2025-01-30T13:31:17.410507705Z" level=info msg="CreateContainer within sandbox \"b9f7d19b93ff90c2ee2baae17ce519b4d92916b71565e91d96286bc1d7841ca9\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jan 30 13:31:17.434582 containerd[1480]: time="2025-01-30T13:31:17.434521564Z" level=info msg="CreateContainer within sandbox \"b9f7d19b93ff90c2ee2baae17ce519b4d92916b71565e91d96286bc1d7841ca9\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"06b03e49533e04f431cd59af514b67b96f5e468628a88748767cdcc21a069583\"" Jan 30 13:31:17.435347 containerd[1480]: time="2025-01-30T13:31:17.435300463Z" level=info msg="StartContainer for \"06b03e49533e04f431cd59af514b67b96f5e468628a88748767cdcc21a069583\"" Jan 30 13:31:17.475369 systemd[1]: Started cri-containerd-06b03e49533e04f431cd59af514b67b96f5e468628a88748767cdcc21a069583.scope - libcontainer container 06b03e49533e04f431cd59af514b67b96f5e468628a88748767cdcc21a069583. 
Jan 30 13:31:17.513978 containerd[1480]: time="2025-01-30T13:31:17.513914478Z" level=info msg="StartContainer for \"06b03e49533e04f431cd59af514b67b96f5e468628a88748767cdcc21a069583\" returns successfully" Jan 30 13:31:17.850305 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce)) Jan 30 13:31:20.922690 systemd-networkd[1378]: lxc_health: Link UP Jan 30 13:31:20.958790 systemd-networkd[1378]: lxc_health: Gained carrier Jan 30 13:31:21.729486 kubelet[2764]: I0130 13:31:21.728391 2764 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-d2q42" podStartSLOduration=8.728372204 podStartE2EDuration="8.728372204s" podCreationTimestamp="2025-01-30 13:31:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 13:31:18.44097092 +0000 UTC m=+353.164038024" watchObservedRunningTime="2025-01-30 13:31:21.728372204 +0000 UTC m=+356.451439308" Jan 30 13:31:23.009834 systemd-networkd[1378]: lxc_health: Gained IPv6LL Jan 30 13:31:25.427368 containerd[1480]: time="2025-01-30T13:31:25.427195814Z" level=info msg="StopPodSandbox for \"e65a68fcfe7e579d8f938022aed729e52c1fbb3dc01ccb190e521918416caf78\"" Jan 30 13:31:25.427368 containerd[1480]: time="2025-01-30T13:31:25.427297176Z" level=info msg="TearDown network for sandbox \"e65a68fcfe7e579d8f938022aed729e52c1fbb3dc01ccb190e521918416caf78\" successfully" Jan 30 13:31:25.427368 containerd[1480]: time="2025-01-30T13:31:25.427308096Z" level=info msg="StopPodSandbox for \"e65a68fcfe7e579d8f938022aed729e52c1fbb3dc01ccb190e521918416caf78\" returns successfully" Jan 30 13:31:25.430627 containerd[1480]: time="2025-01-30T13:31:25.428948202Z" level=info msg="RemovePodSandbox for \"e65a68fcfe7e579d8f938022aed729e52c1fbb3dc01ccb190e521918416caf78\"" Jan 30 13:31:25.430627 containerd[1480]: time="2025-01-30T13:31:25.428998763Z" level=info msg="Forcibly stopping sandbox \"e65a68fcfe7e579d8f938022aed729e52c1fbb3dc01ccb190e521918416caf78\""
Jan 30 13:31:25.430627 containerd[1480]: time="2025-01-30T13:31:25.429109565Z" level=info msg="TearDown network for sandbox \"e65a68fcfe7e579d8f938022aed729e52c1fbb3dc01ccb190e521918416caf78\" successfully" Jan 30 13:31:25.436344 containerd[1480]: time="2025-01-30T13:31:25.436292840Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e65a68fcfe7e579d8f938022aed729e52c1fbb3dc01ccb190e521918416caf78\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 30 13:31:25.436713 containerd[1480]: time="2025-01-30T13:31:25.436567965Z" level=info msg="RemovePodSandbox \"e65a68fcfe7e579d8f938022aed729e52c1fbb3dc01ccb190e521918416caf78\" returns successfully" Jan 30 13:31:25.437391 containerd[1480]: time="2025-01-30T13:31:25.437358937Z" level=info msg="StopPodSandbox for \"2e8c8d3c69081040fb49b921727c36c205ce8645de1b522575e794156646b0bf\"" Jan 30 13:31:25.437909 containerd[1480]: time="2025-01-30T13:31:25.437707303Z" level=info msg="TearDown network for sandbox \"2e8c8d3c69081040fb49b921727c36c205ce8645de1b522575e794156646b0bf\" successfully" Jan 30 13:31:25.437909 containerd[1480]: time="2025-01-30T13:31:25.437728103Z" level=info msg="StopPodSandbox for \"2e8c8d3c69081040fb49b921727c36c205ce8645de1b522575e794156646b0bf\" returns successfully" Jan 30 13:31:25.439821 containerd[1480]: time="2025-01-30T13:31:25.438348793Z" level=info msg="RemovePodSandbox for \"2e8c8d3c69081040fb49b921727c36c205ce8645de1b522575e794156646b0bf\"" Jan 30 13:31:25.439821 containerd[1480]: time="2025-01-30T13:31:25.438377074Z" level=info msg="Forcibly stopping sandbox \"2e8c8d3c69081040fb49b921727c36c205ce8645de1b522575e794156646b0bf\"" Jan 30 13:31:25.439821 containerd[1480]: time="2025-01-30T13:31:25.438478955Z" level=info msg="TearDown network for sandbox \"2e8c8d3c69081040fb49b921727c36c205ce8645de1b522575e794156646b0bf\" successfully"
Jan 30 13:31:25.444407 containerd[1480]: time="2025-01-30T13:31:25.444189007Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2e8c8d3c69081040fb49b921727c36c205ce8645de1b522575e794156646b0bf\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 30 13:31:25.444407 containerd[1480]: time="2025-01-30T13:31:25.444290529Z" level=info msg="RemovePodSandbox \"2e8c8d3c69081040fb49b921727c36c205ce8645de1b522575e794156646b0bf\" returns successfully" Jan 30 13:31:28.196775 sshd[4821]: Connection closed by 139.178.68.195 port 44134 Jan 30 13:31:28.195737 sshd-session[4764]: pam_unix(sshd:session): session closed for user core Jan 30 13:31:28.200826 systemd[1]: sshd@22-5.75.240.180:22-139.178.68.195:44134.service: Deactivated successfully. Jan 30 13:31:28.202958 systemd[1]: session-23.scope: Deactivated successfully. Jan 30 13:31:28.205181 systemd-logind[1463]: Session 23 logged out. Waiting for processes to exit. Jan 30 13:31:28.206674 systemd-logind[1463]: Removed session 23. Jan 30 13:31:42.731229 systemd[1]: cri-containerd-ed69a0b700ff10548ff456234830187a1e8bcb35fd025c5bad363a56442c5db4.scope: Deactivated successfully. Jan 30 13:31:42.731994 systemd[1]: cri-containerd-ed69a0b700ff10548ff456234830187a1e8bcb35fd025c5bad363a56442c5db4.scope: Consumed 6.106s CPU time, 20.1M memory peak, 0B memory swap peak. Jan 30 13:31:42.755776 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ed69a0b700ff10548ff456234830187a1e8bcb35fd025c5bad363a56442c5db4-rootfs.mount: Deactivated successfully.
Jan 30 13:31:42.760755 containerd[1480]: time="2025-01-30T13:31:42.760698078Z" level=info msg="shim disconnected" id=ed69a0b700ff10548ff456234830187a1e8bcb35fd025c5bad363a56442c5db4 namespace=k8s.io Jan 30 13:31:42.761406 containerd[1480]: time="2025-01-30T13:31:42.761296489Z" level=warning msg="cleaning up after shim disconnected" id=ed69a0b700ff10548ff456234830187a1e8bcb35fd025c5bad363a56442c5db4 namespace=k8s.io Jan 30 13:31:42.761406 containerd[1480]: time="2025-01-30T13:31:42.761315249Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 13:31:43.173593 kubelet[2764]: E0130 13:31:43.173499 2764 controller.go:195] "Failed to update lease" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:41888->10.0.0.2:2379: read: connection timed out" Jan 30 13:31:43.479728 kubelet[2764]: I0130 13:31:43.479123 2764 scope.go:117] "RemoveContainer" containerID="ed69a0b700ff10548ff456234830187a1e8bcb35fd025c5bad363a56442c5db4" Jan 30 13:31:43.483225 containerd[1480]: time="2025-01-30T13:31:43.483078600Z" level=info msg="CreateContainer within sandbox \"63f028996750f794732c15a1e08fd138310db41de68d73a2843aab1021457115\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Jan 30 13:31:43.502162 containerd[1480]: time="2025-01-30T13:31:43.502084445Z" level=info msg="CreateContainer within sandbox \"63f028996750f794732c15a1e08fd138310db41de68d73a2843aab1021457115\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"458a5821bfa50334463cf4347ee5ff1634d0d17c6f2b9eacb4aa52485a15cfde\"" Jan 30 13:31:43.502922 containerd[1480]: time="2025-01-30T13:31:43.502901379Z" level=info msg="StartContainer for \"458a5821bfa50334463cf4347ee5ff1634d0d17c6f2b9eacb4aa52485a15cfde\"" Jan 30 13:31:43.530663 systemd[1]: Started cri-containerd-458a5821bfa50334463cf4347ee5ff1634d0d17c6f2b9eacb4aa52485a15cfde.scope - libcontainer container 458a5821bfa50334463cf4347ee5ff1634d0d17c6f2b9eacb4aa52485a15cfde. 
Jan 30 13:31:43.573349 containerd[1480]: time="2025-01-30T13:31:43.573210224Z" level=info msg="StartContainer for \"458a5821bfa50334463cf4347ee5ff1634d0d17c6f2b9eacb4aa52485a15cfde\" returns successfully" Jan 30 13:31:47.077170 kubelet[2764]: I0130 13:31:47.075835 2764 status_manager.go:851] "Failed to get status for pod" podUID="91cdcc4982a1538f03f6f54cd7fac606" pod="kube-system/kube-controller-manager-ci-4186-1-0-7-1c3f91851a" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:41810->10.0.0.2:2379: read: connection timed out" Jan 30 13:31:47.844729 kubelet[2764]: E0130 13:31:47.844353 2764 event.go:359] "Server rejected event (will not retry!)" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:41724->10.0.0.2:2379: read: connection timed out" event="&Event{ObjectMeta:{kube-apiserver-ci-4186-1-0-7-1c3f91851a.181f7b9d75068b71 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-ci-4186-1-0-7-1c3f91851a,UID:b6d55f625e59a1a53b13ebe868fe7070,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Readiness probe failed: HTTP probe failed with statuscode: 500,Source:EventSource{Component:kubelet,Host:ci-4186-1-0-7-1c3f91851a,},FirstTimestamp:2025-01-30 13:31:37.391745905 +0000 UTC m=+372.114813049,LastTimestamp:2025-01-30 13:31:37.391745905 +0000 UTC m=+372.114813049,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4186-1-0-7-1c3f91851a,}" Jan 30 13:31:49.323745 systemd[1]: cri-containerd-7de705a672116d5a54119d799139df313e7a49c05b1ac041200d69b23dde2621.scope: Deactivated successfully. Jan 30 13:31:49.325397 systemd[1]: cri-containerd-7de705a672116d5a54119d799139df313e7a49c05b1ac041200d69b23dde2621.scope: Consumed 3.557s CPU time, 16.1M memory peak, 0B memory swap peak. 
Jan 30 13:31:49.350533 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7de705a672116d5a54119d799139df313e7a49c05b1ac041200d69b23dde2621-rootfs.mount: Deactivated successfully. Jan 30 13:31:49.357254 containerd[1480]: time="2025-01-30T13:31:49.357187565Z" level=info msg="shim disconnected" id=7de705a672116d5a54119d799139df313e7a49c05b1ac041200d69b23dde2621 namespace=k8s.io Jan 30 13:31:49.357254 containerd[1480]: time="2025-01-30T13:31:49.357246166Z" level=warning msg="cleaning up after shim disconnected" id=7de705a672116d5a54119d799139df313e7a49c05b1ac041200d69b23dde2621 namespace=k8s.io Jan 30 13:31:49.357254 containerd[1480]: time="2025-01-30T13:31:49.357260206Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 13:31:49.496884 kubelet[2764]: I0130 13:31:49.496638 2764 scope.go:117] "RemoveContainer" containerID="7de705a672116d5a54119d799139df313e7a49c05b1ac041200d69b23dde2621" Jan 30 13:31:49.498492 containerd[1480]: time="2025-01-30T13:31:49.498443750Z" level=info msg="CreateContainer within sandbox \"5ec35efcbd298b5e2afe1a493503c0380cfc1466b8f36a732d8b01cf9987242f\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}" Jan 30 13:31:49.520703 containerd[1480]: time="2025-01-30T13:31:49.520544736Z" level=info msg="CreateContainer within sandbox \"5ec35efcbd298b5e2afe1a493503c0380cfc1466b8f36a732d8b01cf9987242f\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"f6434b2493297f0bc7a419eb3afe84cfcf0da4dbab92378899d9693d40ebbc38\"" Jan 30 13:31:49.522047 containerd[1480]: time="2025-01-30T13:31:49.521105666Z" level=info msg="StartContainer for \"f6434b2493297f0bc7a419eb3afe84cfcf0da4dbab92378899d9693d40ebbc38\"" Jan 30 13:31:49.559059 systemd[1]: Started cri-containerd-f6434b2493297f0bc7a419eb3afe84cfcf0da4dbab92378899d9693d40ebbc38.scope - libcontainer container f6434b2493297f0bc7a419eb3afe84cfcf0da4dbab92378899d9693d40ebbc38. 
Jan 30 13:31:49.600874 containerd[1480]: time="2025-01-30T13:31:49.600765217Z" level=info msg="StartContainer for \"f6434b2493297f0bc7a419eb3afe84cfcf0da4dbab92378899d9693d40ebbc38\" returns successfully"