Dec 13 14:07:44.918473 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Dec 13 14:07:44.918497 kernel: Linux version 6.6.65-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Thu Dec 12 23:24:21 -00 2024
Dec 13 14:07:44.918507 kernel: KASLR enabled
Dec 13 14:07:44.918513 kernel: efi: EFI v2.7 by EDK II
Dec 13 14:07:44.918519 kernel: efi: SMBIOS 3.0=0x135ed0000 MEMATTR=0x133d4d698 ACPI 2.0=0x132430018 RNG=0x13243e918 MEMRESERVE=0x13232ed18
Dec 13 14:07:44.918525 kernel: random: crng init done
Dec 13 14:07:44.918532 kernel: ACPI: Early table checksum verification disabled
Dec 13 14:07:44.918538 kernel: ACPI: RSDP 0x0000000132430018 000024 (v02 BOCHS )
Dec 13 14:07:44.918544 kernel: ACPI: XSDT 0x000000013243FE98 00006C (v01 BOCHS BXPC 00000001 01000013)
Dec 13 14:07:44.918550 kernel: ACPI: FACP 0x000000013243FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 14:07:44.918558 kernel: ACPI: DSDT 0x0000000132437518 001468 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 14:07:44.918564 kernel: ACPI: APIC 0x000000013243FC18 000108 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 14:07:44.918570 kernel: ACPI: PPTT 0x000000013243FD98 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 14:07:44.918576 kernel: ACPI: GTDT 0x000000013243D898 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 14:07:44.918584 kernel: ACPI: MCFG 0x000000013243FF98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 14:07:44.918592 kernel: ACPI: SPCR 0x000000013243E818 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 14:07:44.918598 kernel: ACPI: DBG2 0x000000013243E898 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 14:07:44.918604 kernel: ACPI: IORT 0x000000013243E418 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 14:07:44.918611 kernel: ACPI: BGRT 0x000000013243E798 000038 (v01 INTEL EDK2 00000002 01000013)
Dec 13 14:07:44.918617 kernel: ACPI: SPCR: console: pl011,mmio32,0x9000000,9600
Dec 13 14:07:44.918624 kernel: NUMA: Failed to initialise from firmware
Dec 13 14:07:44.918631 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x0000000139ffffff]
Dec 13 14:07:44.918637 kernel: NUMA: NODE_DATA [mem 0x13981f800-0x139824fff]
Dec 13 14:07:44.918643 kernel: Zone ranges:
Dec 13 14:07:44.918649 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff]
Dec 13 14:07:44.918656 kernel: DMA32 empty
Dec 13 14:07:44.918663 kernel: Normal [mem 0x0000000100000000-0x0000000139ffffff]
Dec 13 14:07:44.918669 kernel: Movable zone start for each node
Dec 13 14:07:44.918676 kernel: Early memory node ranges
Dec 13 14:07:44.918682 kernel: node 0: [mem 0x0000000040000000-0x000000013243ffff]
Dec 13 14:07:44.918688 kernel: node 0: [mem 0x0000000132440000-0x000000013272ffff]
Dec 13 14:07:44.918694 kernel: node 0: [mem 0x0000000132730000-0x0000000135bfffff]
Dec 13 14:07:44.918701 kernel: node 0: [mem 0x0000000135c00000-0x0000000135fdffff]
Dec 13 14:07:44.918707 kernel: node 0: [mem 0x0000000135fe0000-0x0000000139ffffff]
Dec 13 14:07:44.918713 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x0000000139ffffff]
Dec 13 14:07:44.918720 kernel: On node 0, zone Normal: 24576 pages in unavailable ranges
Dec 13 14:07:44.918726 kernel: psci: probing for conduit method from ACPI.
Dec 13 14:07:44.918734 kernel: psci: PSCIv1.1 detected in firmware.
Dec 13 14:07:44.918740 kernel: psci: Using standard PSCI v0.2 function IDs
Dec 13 14:07:44.918747 kernel: psci: Trusted OS migration not required
Dec 13 14:07:44.918756 kernel: psci: SMC Calling Convention v1.1
Dec 13 14:07:44.918763 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Dec 13 14:07:44.918770 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Dec 13 14:07:44.918778 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Dec 13 14:07:44.918784 kernel: pcpu-alloc: [0] 0 [0] 1
Dec 13 14:07:44.918791 kernel: Detected PIPT I-cache on CPU0
Dec 13 14:07:44.918798 kernel: CPU features: detected: GIC system register CPU interface
Dec 13 14:07:44.918805 kernel: CPU features: detected: Hardware dirty bit management
Dec 13 14:07:44.918811 kernel: CPU features: detected: Spectre-v4
Dec 13 14:07:44.918818 kernel: CPU features: detected: Spectre-BHB
Dec 13 14:07:44.918825 kernel: CPU features: kernel page table isolation forced ON by KASLR
Dec 13 14:07:44.918831 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Dec 13 14:07:44.918838 kernel: CPU features: detected: ARM erratum 1418040
Dec 13 14:07:44.918845 kernel: CPU features: detected: SSBS not fully self-synchronizing
Dec 13 14:07:44.918853 kernel: alternatives: applying boot alternatives
Dec 13 14:07:44.919880 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=9494f75a68cfbdce95d0d2f9b58d6d75bc38ee5b4e31dfc2a6da695ffafefba6
Dec 13 14:07:44.919891 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Dec 13 14:07:44.919898 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Dec 13 14:07:44.919906 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Dec 13 14:07:44.919913 kernel: Fallback order for Node 0: 0
Dec 13 14:07:44.919920 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1008000
Dec 13 14:07:44.919926 kernel: Policy zone: Normal
Dec 13 14:07:44.919933 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Dec 13 14:07:44.919940 kernel: software IO TLB: area num 2.
Dec 13 14:07:44.919947 kernel: software IO TLB: mapped [mem 0x00000000fbfff000-0x00000000fffff000] (64MB)
Dec 13 14:07:44.919959 kernel: Memory: 3881592K/4096000K available (10240K kernel code, 2184K rwdata, 8096K rodata, 39360K init, 897K bss, 214408K reserved, 0K cma-reserved)
Dec 13 14:07:44.919966 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Dec 13 14:07:44.919973 kernel: trace event string verifier disabled
Dec 13 14:07:44.919979 kernel: rcu: Preemptible hierarchical RCU implementation.
Dec 13 14:07:44.919987 kernel: rcu: RCU event tracing is enabled.
Dec 13 14:07:44.919994 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Dec 13 14:07:44.920001 kernel: Trampoline variant of Tasks RCU enabled.
Dec 13 14:07:44.920007 kernel: Tracing variant of Tasks RCU enabled.
Dec 13 14:07:44.920014 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Dec 13 14:07:44.920021 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Dec 13 14:07:44.920028 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Dec 13 14:07:44.920036 kernel: GICv3: 256 SPIs implemented
Dec 13 14:07:44.920043 kernel: GICv3: 0 Extended SPIs implemented
Dec 13 14:07:44.920050 kernel: Root IRQ handler: gic_handle_irq
Dec 13 14:07:44.920056 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Dec 13 14:07:44.920063 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Dec 13 14:07:44.920070 kernel: ITS [mem 0x08080000-0x0809ffff]
Dec 13 14:07:44.920076 kernel: ITS@0x0000000008080000: allocated 8192 Devices @1000c0000 (indirect, esz 8, psz 64K, shr 1)
Dec 13 14:07:44.920083 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @1000d0000 (flat, esz 8, psz 64K, shr 1)
Dec 13 14:07:44.920090 kernel: GICv3: using LPI property table @0x00000001000e0000
Dec 13 14:07:44.920097 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000001000f0000
Dec 13 14:07:44.920104 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Dec 13 14:07:44.920113 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Dec 13 14:07:44.920119 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Dec 13 14:07:44.920126 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Dec 13 14:07:44.920133 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Dec 13 14:07:44.920140 kernel: Console: colour dummy device 80x25
Dec 13 14:07:44.920147 kernel: ACPI: Core revision 20230628
Dec 13 14:07:44.920154 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Dec 13 14:07:44.920161 kernel: pid_max: default: 32768 minimum: 301
Dec 13 14:07:44.920169 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Dec 13 14:07:44.920176 kernel: landlock: Up and running.
Dec 13 14:07:44.920184 kernel: SELinux: Initializing.
Dec 13 14:07:44.920191 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Dec 13 14:07:44.920198 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Dec 13 14:07:44.920205 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Dec 13 14:07:44.920212 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Dec 13 14:07:44.920219 kernel: rcu: Hierarchical SRCU implementation.
Dec 13 14:07:44.920226 kernel: rcu: Max phase no-delay instances is 400.
Dec 13 14:07:44.920233 kernel: Platform MSI: ITS@0x8080000 domain created
Dec 13 14:07:44.920240 kernel: PCI/MSI: ITS@0x8080000 domain created
Dec 13 14:07:44.920248 kernel: Remapping and enabling EFI services.
Dec 13 14:07:44.920255 kernel: smp: Bringing up secondary CPUs ...
Dec 13 14:07:44.920262 kernel: Detected PIPT I-cache on CPU1
Dec 13 14:07:44.920270 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Dec 13 14:07:44.920277 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000100100000
Dec 13 14:07:44.920283 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Dec 13 14:07:44.920291 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Dec 13 14:07:44.920299 kernel: smp: Brought up 1 node, 2 CPUs
Dec 13 14:07:44.920306 kernel: SMP: Total of 2 processors activated.
Dec 13 14:07:44.920313 kernel: CPU features: detected: 32-bit EL0 Support
Dec 13 14:07:44.920321 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Dec 13 14:07:44.920328 kernel: CPU features: detected: Common not Private translations
Dec 13 14:07:44.920341 kernel: CPU features: detected: CRC32 instructions
Dec 13 14:07:44.920349 kernel: CPU features: detected: Enhanced Virtualization Traps
Dec 13 14:07:44.920357 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Dec 13 14:07:44.920364 kernel: CPU features: detected: LSE atomic instructions
Dec 13 14:07:44.920371 kernel: CPU features: detected: Privileged Access Never
Dec 13 14:07:44.920379 kernel: CPU features: detected: RAS Extension Support
Dec 13 14:07:44.920386 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Dec 13 14:07:44.920395 kernel: CPU: All CPU(s) started at EL1
Dec 13 14:07:44.920403 kernel: alternatives: applying system-wide alternatives
Dec 13 14:07:44.920410 kernel: devtmpfs: initialized
Dec 13 14:07:44.920417 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Dec 13 14:07:44.920425 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Dec 13 14:07:44.920432 kernel: pinctrl core: initialized pinctrl subsystem
Dec 13 14:07:44.920440 kernel: SMBIOS 3.0.0 present.
Dec 13 14:07:44.920449 kernel: DMI: Hetzner vServer/KVM Virtual Machine, BIOS 20171111 11/11/2017
Dec 13 14:07:44.920456 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Dec 13 14:07:44.920463 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Dec 13 14:07:44.920471 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Dec 13 14:07:44.920478 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Dec 13 14:07:44.920486 kernel: audit: initializing netlink subsys (disabled)
Dec 13 14:07:44.920493 kernel: audit: type=2000 audit(0.018:1): state=initialized audit_enabled=0 res=1
Dec 13 14:07:44.920500 kernel: thermal_sys: Registered thermal governor 'step_wise'
Dec 13 14:07:44.920508 kernel: cpuidle: using governor menu
Dec 13 14:07:44.920517 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Dec 13 14:07:44.920524 kernel: ASID allocator initialised with 32768 entries
Dec 13 14:07:44.920531 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Dec 13 14:07:44.920539 kernel: Serial: AMBA PL011 UART driver
Dec 13 14:07:44.920546 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Dec 13 14:07:44.920553 kernel: Modules: 0 pages in range for non-PLT usage
Dec 13 14:07:44.920561 kernel: Modules: 509040 pages in range for PLT usage
Dec 13 14:07:44.920569 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Dec 13 14:07:44.920576 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Dec 13 14:07:44.920585 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Dec 13 14:07:44.920592 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Dec 13 14:07:44.920600 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Dec 13 14:07:44.920607 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Dec 13 14:07:44.920614 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Dec 13 14:07:44.920622 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Dec 13 14:07:44.920629 kernel: ACPI: Added _OSI(Module Device)
Dec 13 14:07:44.920636 kernel: ACPI: Added _OSI(Processor Device)
Dec 13 14:07:44.920644 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Dec 13 14:07:44.920653 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Dec 13 14:07:44.920660 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Dec 13 14:07:44.920667 kernel: ACPI: Interpreter enabled
Dec 13 14:07:44.920674 kernel: ACPI: Using GIC for interrupt routing
Dec 13 14:07:44.920682 kernel: ACPI: MCFG table detected, 1 entries
Dec 13 14:07:44.920689 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Dec 13 14:07:44.920697 kernel: printk: console [ttyAMA0] enabled
Dec 13 14:07:44.920704 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Dec 13 14:07:44.920872 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Dec 13 14:07:44.920964 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Dec 13 14:07:44.921678 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Dec 13 14:07:44.921758 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Dec 13 14:07:44.921823 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Dec 13 14:07:44.921833 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Dec 13 14:07:44.921841 kernel: PCI host bridge to bus 0000:00
Dec 13 14:07:44.921938 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Dec 13 14:07:44.922973 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Dec 13 14:07:44.923049 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Dec 13 14:07:44.923109 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Dec 13 14:07:44.923192 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Dec 13 14:07:44.923270 kernel: pci 0000:00:01.0: [1af4:1050] type 00 class 0x038000
Dec 13 14:07:44.923338 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x11289000-0x11289fff]
Dec 13 14:07:44.923420 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000600000-0x8000603fff 64bit pref]
Dec 13 14:07:44.923496 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400
Dec 13 14:07:44.923564 kernel: pci 0000:00:02.0: reg 0x10: [mem 0x11288000-0x11288fff]
Dec 13 14:07:44.923641 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400
Dec 13 14:07:44.923708 kernel: pci 0000:00:02.1: reg 0x10: [mem 0x11287000-0x11287fff]
Dec 13 14:07:44.923779 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400
Dec 13 14:07:44.923849 kernel: pci 0000:00:02.2: reg 0x10: [mem 0x11286000-0x11286fff]
Dec 13 14:07:44.925067 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400
Dec 13 14:07:44.925141 kernel: pci 0000:00:02.3: reg 0x10: [mem 0x11285000-0x11285fff]
Dec 13 14:07:44.925213 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400
Dec 13 14:07:44.925278 kernel: pci 0000:00:02.4: reg 0x10: [mem 0x11284000-0x11284fff]
Dec 13 14:07:44.925352 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400
Dec 13 14:07:44.925423 kernel: pci 0000:00:02.5: reg 0x10: [mem 0x11283000-0x11283fff]
Dec 13 14:07:44.925495 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400
Dec 13 14:07:44.925562 kernel: pci 0000:00:02.6: reg 0x10: [mem 0x11282000-0x11282fff]
Dec 13 14:07:44.925632 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400
Dec 13 14:07:44.925697 kernel: pci 0000:00:02.7: reg 0x10: [mem 0x11281000-0x11281fff]
Dec 13 14:07:44.925768 kernel: pci 0000:00:03.0: [1b36:000c] type 01 class 0x060400
Dec 13 14:07:44.925833 kernel: pci 0000:00:03.0: reg 0x10: [mem 0x11280000-0x11280fff]
Dec 13 14:07:44.928229 kernel: pci 0000:00:04.0: [1b36:0002] type 00 class 0x070002
Dec 13 14:07:44.928313 kernel: pci 0000:00:04.0: reg 0x10: [io 0x8200-0x8207]
Dec 13 14:07:44.928390 kernel: pci 0000:01:00.0: [1af4:1041] type 00 class 0x020000
Dec 13 14:07:44.928459 kernel: pci 0000:01:00.0: reg 0x14: [mem 0x11000000-0x11000fff]
Dec 13 14:07:44.928527 kernel: pci 0000:01:00.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Dec 13 14:07:44.928595 kernel: pci 0000:01:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref]
Dec 13 14:07:44.928677 kernel: pci 0000:02:00.0: [1b36:000d] type 00 class 0x0c0330
Dec 13 14:07:44.928745 kernel: pci 0000:02:00.0: reg 0x10: [mem 0x10e00000-0x10e03fff 64bit]
Dec 13 14:07:44.928820 kernel: pci 0000:03:00.0: [1af4:1043] type 00 class 0x078000
Dec 13 14:07:44.928919 kernel: pci 0000:03:00.0: reg 0x14: [mem 0x10c00000-0x10c00fff]
Dec 13 14:07:44.928994 kernel: pci 0000:03:00.0: reg 0x20: [mem 0x8000100000-0x8000103fff 64bit pref]
Dec 13 14:07:44.929071 kernel: pci 0000:04:00.0: [1af4:1045] type 00 class 0x00ff00
Dec 13 14:07:44.929140 kernel: pci 0000:04:00.0: reg 0x20: [mem 0x8000200000-0x8000203fff 64bit pref]
Dec 13 14:07:44.929221 kernel: pci 0000:05:00.0: [1af4:1044] type 00 class 0x00ff00
Dec 13 14:07:44.929291 kernel: pci 0000:05:00.0: reg 0x20: [mem 0x8000300000-0x8000303fff 64bit pref]
Dec 13 14:07:44.929368 kernel: pci 0000:06:00.0: [1af4:1048] type 00 class 0x010000
Dec 13 14:07:44.929437 kernel: pci 0000:06:00.0: reg 0x14: [mem 0x10600000-0x10600fff]
Dec 13 14:07:44.929505 kernel: pci 0000:06:00.0: reg 0x20: [mem 0x8000400000-0x8000403fff 64bit pref]
Dec 13 14:07:44.929580 kernel: pci 0000:07:00.0: [1af4:1041] type 00 class 0x020000
Dec 13 14:07:44.929651 kernel: pci 0000:07:00.0: reg 0x14: [mem 0x10400000-0x10400fff]
Dec 13 14:07:44.929720 kernel: pci 0000:07:00.0: reg 0x20: [mem 0x8000500000-0x8000503fff 64bit pref]
Dec 13 14:07:44.929788 kernel: pci 0000:07:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref]
Dec 13 14:07:44.930992 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000
Dec 13 14:07:44.931123 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 01] add_size 100000 add_align 100000
Dec 13 14:07:44.931190 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff] to [bus 01] add_size 100000 add_align 100000
Dec 13 14:07:44.931267 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000
Dec 13 14:07:44.931333 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000
Dec 13 14:07:44.931396 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x001fffff] to [bus 02] add_size 100000 add_align 100000
Dec 13 14:07:44.931464 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000
Dec 13 14:07:44.931529 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 03] add_size 100000 add_align 100000
Dec 13 14:07:44.931591 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff] to [bus 03] add_size 100000 add_align 100000
Dec 13 14:07:44.931658 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000
Dec 13 14:07:44.931722 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 04] add_size 100000 add_align 100000
Dec 13 14:07:44.931792 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000
Dec 13 14:07:44.932013 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000
Dec 13 14:07:44.932096 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 05] add_size 100000 add_align 100000
Dec 13 14:07:44.932159 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x000fffff] to [bus 05] add_size 200000 add_align 100000
Dec 13 14:07:44.932472 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000
Dec 13 14:07:44.932547 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 06] add_size 100000 add_align 100000
Dec 13 14:07:44.932612 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff] to [bus 06] add_size 100000 add_align 100000
Dec 13 14:07:44.932687 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
Dec 13 14:07:44.932752 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 07] add_size 100000 add_align 100000
Dec 13 14:07:44.932817 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff] to [bus 07] add_size 100000 add_align 100000
Dec 13 14:07:44.933951 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
Dec 13 14:07:44.934031 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 08] add_size 200000 add_align 100000
Dec 13 14:07:44.934095 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff] to [bus 08] add_size 200000 add_align 100000
Dec 13 14:07:44.934165 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
Dec 13 14:07:44.934288 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 09] add_size 200000 add_align 100000
Dec 13 14:07:44.934366 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 09] add_size 200000 add_align 100000
Dec 13 14:07:44.934432 kernel: pci 0000:00:02.0: BAR 14: assigned [mem 0x10000000-0x101fffff]
Dec 13 14:07:44.934497 kernel: pci 0000:00:02.0: BAR 15: assigned [mem 0x8000000000-0x80001fffff 64bit pref]
Dec 13 14:07:44.934563 kernel: pci 0000:00:02.1: BAR 14: assigned [mem 0x10200000-0x103fffff]
Dec 13 14:07:44.934628 kernel: pci 0000:00:02.1: BAR 15: assigned [mem 0x8000200000-0x80003fffff 64bit pref]
Dec 13 14:07:44.934692 kernel: pci 0000:00:02.2: BAR 14: assigned [mem 0x10400000-0x105fffff]
Dec 13 14:07:44.934755 kernel: pci 0000:00:02.2: BAR 15: assigned [mem 0x8000400000-0x80005fffff 64bit pref]
Dec 13 14:07:44.934822 kernel: pci 0000:00:02.3: BAR 14: assigned [mem 0x10600000-0x107fffff]
Dec 13 14:07:44.935705 kernel: pci 0000:00:02.3: BAR 15: assigned [mem 0x8000600000-0x80007fffff 64bit pref]
Dec 13 14:07:44.935786 kernel: pci 0000:00:02.4: BAR 14: assigned [mem 0x10800000-0x109fffff]
Dec 13 14:07:44.935851 kernel: pci 0000:00:02.4: BAR 15: assigned [mem 0x8000800000-0x80009fffff 64bit pref]
Dec 13 14:07:44.935954 kernel: pci 0000:00:02.5: BAR 14: assigned [mem 0x10a00000-0x10bfffff]
Dec 13 14:07:44.936020 kernel: pci 0000:00:02.5: BAR 15: assigned [mem 0x8000a00000-0x8000bfffff 64bit pref]
Dec 13 14:07:44.936092 kernel: pci 0000:00:02.6: BAR 14: assigned [mem 0x10c00000-0x10dfffff]
Dec 13 14:07:44.936156 kernel: pci 0000:00:02.6: BAR 15: assigned [mem 0x8000c00000-0x8000dfffff 64bit pref]
Dec 13 14:07:44.936222 kernel: pci 0000:00:02.7: BAR 14: assigned [mem 0x10e00000-0x10ffffff]
Dec 13 14:07:44.936287 kernel: pci 0000:00:02.7: BAR 15: assigned [mem 0x8000e00000-0x8000ffffff 64bit pref]
Dec 13 14:07:44.936351 kernel: pci 0000:00:03.0: BAR 14: assigned [mem 0x11000000-0x111fffff]
Dec 13 14:07:44.936415 kernel: pci 0000:00:03.0: BAR 15: assigned [mem 0x8001000000-0x80011fffff 64bit pref]
Dec 13 14:07:44.936485 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8001200000-0x8001203fff 64bit pref]
Dec 13 14:07:44.936551 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x11200000-0x11200fff]
Dec 13 14:07:44.936615 kernel: pci 0000:00:02.0: BAR 0: assigned [mem 0x11201000-0x11201fff]
Dec 13 14:07:44.936678 kernel: pci 0000:00:02.0: BAR 13: assigned [io 0x1000-0x1fff]
Dec 13 14:07:44.936743 kernel: pci 0000:00:02.1: BAR 0: assigned [mem 0x11202000-0x11202fff]
Dec 13 14:07:44.936807 kernel: pci 0000:00:02.1: BAR 13: assigned [io 0x2000-0x2fff]
Dec 13 14:07:44.937660 kernel: pci 0000:00:02.2: BAR 0: assigned [mem 0x11203000-0x11203fff]
Dec 13 14:07:44.937754 kernel: pci 0000:00:02.2: BAR 13: assigned [io 0x3000-0x3fff]
Dec 13 14:07:44.937821 kernel: pci 0000:00:02.3: BAR 0: assigned [mem 0x11204000-0x11204fff]
Dec 13 14:07:44.937917 kernel: pci 0000:00:02.3: BAR 13: assigned [io 0x4000-0x4fff]
Dec 13 14:07:44.937985 kernel: pci 0000:00:02.4: BAR 0: assigned [mem 0x11205000-0x11205fff]
Dec 13 14:07:44.938050 kernel: pci 0000:00:02.4: BAR 13: assigned [io 0x5000-0x5fff]
Dec 13 14:07:44.938115 kernel: pci 0000:00:02.5: BAR 0: assigned [mem 0x11206000-0x11206fff]
Dec 13 14:07:44.938178 kernel: pci 0000:00:02.5: BAR 13: assigned [io 0x6000-0x6fff]
Dec 13 14:07:44.938263 kernel: pci 0000:00:02.6: BAR 0: assigned [mem 0x11207000-0x11207fff]
Dec 13 14:07:44.938329 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x7000-0x7fff]
Dec 13 14:07:44.938393 kernel: pci 0000:00:02.7: BAR 0: assigned [mem 0x11208000-0x11208fff]
Dec 13 14:07:44.938463 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x8000-0x8fff]
Dec 13 14:07:44.938530 kernel: pci 0000:00:03.0: BAR 0: assigned [mem 0x11209000-0x11209fff]
Dec 13 14:07:44.938594 kernel: pci 0000:00:03.0: BAR 13: assigned [io 0x9000-0x9fff]
Dec 13 14:07:44.938663 kernel: pci 0000:00:04.0: BAR 0: assigned [io 0xa000-0xa007]
Dec 13 14:07:44.938737 kernel: pci 0000:01:00.0: BAR 6: assigned [mem 0x10000000-0x1007ffff pref]
Dec 13 14:07:44.938804 kernel: pci 0000:01:00.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Dec 13 14:07:44.938921 kernel: pci 0000:01:00.0: BAR 1: assigned [mem 0x10080000-0x10080fff]
Dec 13 14:07:44.938997 kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
Dec 13 14:07:44.939068 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff]
Dec 13 14:07:44.939132 kernel: pci 0000:00:02.0: bridge window [mem 0x10000000-0x101fffff]
Dec 13 14:07:44.939195 kernel: pci 0000:00:02.0: bridge window [mem 0x8000000000-0x80001fffff 64bit pref]
Dec 13 14:07:44.939266 kernel: pci 0000:02:00.0: BAR 0: assigned [mem 0x10200000-0x10203fff 64bit]
Dec 13 14:07:44.939332 kernel: pci 0000:00:02.1: PCI bridge to [bus 02]
Dec 13 14:07:44.939399 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff]
Dec 13 14:07:44.939463 kernel: pci 0000:00:02.1: bridge window [mem 0x10200000-0x103fffff]
Dec 13 14:07:44.939527 kernel: pci 0000:00:02.1: bridge window [mem 0x8000200000-0x80003fffff 64bit pref]
Dec 13 14:07:44.939598 kernel: pci 0000:03:00.0: BAR 4: assigned [mem 0x8000400000-0x8000403fff 64bit pref]
Dec 13 14:07:44.939664 kernel: pci 0000:03:00.0: BAR 1: assigned [mem 0x10400000-0x10400fff]
Dec 13 14:07:44.939729 kernel: pci 0000:00:02.2: PCI bridge to [bus 03]
Dec 13 14:07:44.939794 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff]
Dec 13 14:07:44.939901 kernel: pci 0000:00:02.2: bridge window [mem 0x10400000-0x105fffff]
Dec 13 14:07:44.939971 kernel: pci 0000:00:02.2: bridge window [mem 0x8000400000-0x80005fffff 64bit pref]
Dec 13 14:07:44.940042 kernel: pci 0000:04:00.0: BAR 4: assigned [mem 0x8000600000-0x8000603fff 64bit pref]
Dec 13 14:07:44.940106 kernel: pci 0000:00:02.3: PCI bridge to [bus 04]
Dec 13 14:07:44.940170 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff]
Dec 13 14:07:44.940232 kernel: pci 0000:00:02.3: bridge window [mem 0x10600000-0x107fffff]
Dec 13 14:07:44.940294 kernel: pci 0000:00:02.3: bridge window [mem 0x8000600000-0x80007fffff 64bit pref]
Dec 13 14:07:44.940369 kernel: pci 0000:05:00.0: BAR 4: assigned [mem 0x8000800000-0x8000803fff 64bit pref]
Dec 13 14:07:44.940439 kernel: pci 0000:00:02.4: PCI bridge to [bus 05]
Dec 13 14:07:44.940504 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff]
Dec 13 14:07:44.940567 kernel: pci 0000:00:02.4: bridge window [mem 0x10800000-0x109fffff]
Dec 13 14:07:44.940649 kernel: pci 0000:00:02.4: bridge window [mem 0x8000800000-0x80009fffff 64bit pref]
Dec 13 14:07:44.940724 kernel: pci 0000:06:00.0: BAR 4: assigned [mem 0x8000a00000-0x8000a03fff 64bit pref]
Dec 13 14:07:44.940791 kernel: pci 0000:06:00.0: BAR 1: assigned [mem 0x10a00000-0x10a00fff]
Dec 13 14:07:44.940856 kernel: pci 0000:00:02.5: PCI bridge to [bus 06]
Dec 13 14:07:44.940952 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff]
Dec 13 14:07:44.941022 kernel: pci 0000:00:02.5: bridge window [mem 0x10a00000-0x10bfffff]
Dec 13 14:07:44.941087 kernel: pci 0000:00:02.5: bridge window [mem 0x8000a00000-0x8000bfffff 64bit pref]
Dec 13 14:07:44.941159 kernel: pci 0000:07:00.0: BAR 6: assigned [mem 0x10c00000-0x10c7ffff pref]
Dec 13 14:07:44.941228 kernel: pci 0000:07:00.0: BAR 4: assigned [mem 0x8000c00000-0x8000c03fff 64bit pref]
Dec 13 14:07:44.941297 kernel: pci 0000:07:00.0: BAR 1: assigned [mem 0x10c80000-0x10c80fff]
Dec 13 14:07:44.941362 kernel: pci 0000:00:02.6: PCI bridge to [bus 07]
Dec 13 14:07:44.941429 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff]
Dec 13 14:07:44.941496 kernel: pci 0000:00:02.6: bridge window [mem 0x10c00000-0x10dfffff]
Dec 13 14:07:44.941564 kernel: pci 0000:00:02.6: bridge window [mem 0x8000c00000-0x8000dfffff 64bit pref]
Dec 13 14:07:44.941629 kernel: pci 0000:00:02.7: PCI bridge to [bus 08]
Dec 13 14:07:44.941695 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff]
Dec 13 14:07:44.941761 kernel: pci 0000:00:02.7: bridge window [mem 0x10e00000-0x10ffffff]
Dec 13 14:07:44.941827 kernel: pci 0000:00:02.7: bridge window [mem 0x8000e00000-0x8000ffffff 64bit pref]
Dec 13 14:07:44.942819 kernel: pci 0000:00:03.0: PCI bridge to [bus 09]
Dec 13 14:07:44.942935 kernel: pci 0000:00:03.0: bridge window [io 0x9000-0x9fff]
Dec 13 14:07:44.943010 kernel: pci 0000:00:03.0: bridge window [mem 0x11000000-0x111fffff]
Dec 13 14:07:44.943090 kernel: pci 0000:00:03.0: bridge window [mem 0x8001000000-0x80011fffff 64bit pref]
Dec 13 14:07:44.943163 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Dec 13 14:07:44.943224 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Dec 13 14:07:44.943286 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Dec 13 14:07:44.943389 kernel: pci_bus 0000:01: resource 0 [io 0x1000-0x1fff]
Dec 13 14:07:44.943479 kernel: pci_bus 0000:01: resource 1 [mem 0x10000000-0x101fffff]
Dec 13 14:07:44.943552 kernel: pci_bus 0000:01: resource 2 [mem 0x8000000000-0x80001fffff 64bit pref]
Dec 13 14:07:44.943629 kernel: pci_bus 0000:02: resource 0 [io 0x2000-0x2fff]
Dec 13 14:07:44.943693 kernel: pci_bus 0000:02: resource 1 [mem 0x10200000-0x103fffff]
Dec 13 14:07:44.943757 kernel: pci_bus 0000:02: resource 2 [mem 0x8000200000-0x80003fffff 64bit pref]
Dec 13 14:07:44.943826 kernel: pci_bus 0000:03: resource 0 [io 0x3000-0x3fff]
Dec 13 14:07:44.944311 kernel: pci_bus 0000:03: resource 1 [mem 0x10400000-0x105fffff]
Dec 13 14:07:44.944402 kernel: pci_bus 0000:03: resource 2 [mem 0x8000400000-0x80005fffff 64bit pref]
Dec 13 14:07:44.944484 kernel: pci_bus 0000:04: resource 0 [io 0x4000-0x4fff]
Dec 13 14:07:44.944549 kernel: pci_bus 0000:04: resource 1 [mem 0x10600000-0x107fffff]
Dec 13 14:07:44.944615 kernel: pci_bus 0000:04: resource 2 [mem 0x8000600000-0x80007fffff 64bit pref]
Dec 13 14:07:44.944695 kernel: pci_bus 0000:05: resource 0 [io 0x5000-0x5fff]
Dec 13 14:07:44.944762 kernel: pci_bus 0000:05: resource 1 [mem 0x10800000-0x109fffff]
Dec 13 14:07:44.944827 kernel: pci_bus 0000:05: resource 2 [mem 0x8000800000-0x80009fffff 64bit pref]
Dec 13 14:07:44.944947 kernel: pci_bus 0000:06: resource 0 [io 0x6000-0x6fff]
Dec 13 14:07:44.945026 kernel: pci_bus 0000:06: resource 1 [mem 0x10a00000-0x10bfffff]
Dec 13 14:07:44.945093 kernel: pci_bus 0000:06: resource 2 [mem 0x8000a00000-0x8000bfffff 64bit pref]
Dec 13 14:07:44.945165 kernel: pci_bus 0000:07: resource 0 [io 0x7000-0x7fff]
Dec 13 14:07:44.945231 kernel: pci_bus 0000:07: resource 1 [mem 0x10c00000-0x10dfffff]
Dec 13 14:07:44.945331 kernel: pci_bus 0000:07: resource 2 [mem 0x8000c00000-0x8000dfffff 64bit pref]
Dec 13 14:07:44.945434 kernel: pci_bus 0000:08: resource 0 [io 0x8000-0x8fff]
Dec 13 14:07:44.945511 kernel: pci_bus 0000:08: resource 1 [mem 0x10e00000-0x10ffffff]
Dec 13 14:07:44.945577 kernel: pci_bus 0000:08: resource 2 [mem 0x8000e00000-0x8000ffffff 64bit pref]
Dec 13 14:07:44.945649 kernel: pci_bus 0000:09: resource 0 [io 0x9000-0x9fff]
Dec 13 14:07:44.945714 kernel: pci_bus 0000:09: resource 1 [mem 0x11000000-0x111fffff]
Dec 13 14:07:44.945783 kernel: pci_bus 0000:09: resource 2 [mem 0x8001000000-0x80011fffff 64bit pref]
Dec 13 14:07:44.945796 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Dec 13 14:07:44.945805 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Dec 13 14:07:44.945814 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Dec 13 14:07:44.945823 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Dec 13 14:07:44.945831 kernel: iommu: Default domain type: Translated
Dec 13 14:07:44.945839 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Dec 13 14:07:44.945848 kernel: efivars: Registered efivars operations
Dec 13 14:07:44.945856 kernel: vgaarb: loaded
Dec 13 14:07:44.945880 kernel: clocksource: Switched to clocksource arch_sys_counter
Dec 13 14:07:44.945891 kernel: VFS: Disk quotas dquot_6.6.0
Dec 13 14:07:44.945899 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Dec 13 14:07:44.945907 kernel: pnp: PnP ACPI init
Dec 13 14:07:44.948090 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Dec 13 14:07:44.948119 kernel: pnp: PnP ACPI: found 1 devices
Dec 13 14:07:44.948128 kernel: NET: Registered PF_INET protocol family
Dec 13 14:07:44.948137 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Dec 13 14:07:44.948146 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Dec 13 14:07:44.948162 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Dec 13 14:07:44.948170 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Dec 13 14:07:44.948179 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Dec 13 14:07:44.948187 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Dec 13 14:07:44.948196 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Dec 13 14:07:44.948204 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Dec 13 14:07:44.948213 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Dec 13 14:07:44.948300 kernel: pci 0000:02:00.0: enabling device (0000 -> 0002)
Dec 13 14:07:44.948313 kernel: PCI: CLS 0 bytes, default 64
Dec 13 14:07:44.948324 kernel: kvm [1]: HYP mode not available
Dec 13 14:07:44.948333 kernel: Initialise system trusted keyrings
Dec 13 14:07:44.948341 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Dec 13 14:07:44.948350 kernel: Key type asymmetric registered
Dec 13 14:07:44.948358 kernel: Asymmetric key parser 'x509' registered
Dec 13 14:07:44.948366 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Dec 13 14:07:44.948374 kernel: io scheduler mq-deadline registered
Dec 13 14:07:44.948382 kernel: io scheduler kyber registered
Dec 13 14:07:44.948391 kernel: io scheduler bfq registered
Dec 13 14:07:44.948401 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37
Dec 13 14:07:44.948475 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 50
Dec 13 14:07:44.948546 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 50
Dec 13 14:07:44.948616 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Dec 13 14:07:44.948687 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 51
Dec 13 14:07:44.948756 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 51
Dec 13 14:07:44.948828 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Dec 13 14:07:44.950045 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 52
Dec 13 14:07:44.950141 kernel: pcieport 0000:00:02.2:
AER: enabled with IRQ 52 Dec 13 14:07:44.950236 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 13 14:07:44.950313 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 53 Dec 13 14:07:44.950397 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 53 Dec 13 14:07:44.950475 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 13 14:07:44.950550 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 54 Dec 13 14:07:44.950629 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 54 Dec 13 14:07:44.950700 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 13 14:07:44.950773 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 55 Dec 13 14:07:44.950842 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 55 Dec 13 14:07:44.950934 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 13 14:07:44.951010 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 56 Dec 13 14:07:44.951080 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 56 Dec 13 14:07:44.951149 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 13 14:07:44.951223 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 57 Dec 13 14:07:44.951294 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 57 Dec 13 14:07:44.951368 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 13 14:07:44.951379 kernel: ACPI: \_SB_.PCI0.GSI3: Enabled at IRQ 38 Dec 13 14:07:44.951452 kernel: pcieport 0000:00:03.0: PME: Signaling with IRQ 58 
Dec 13 14:07:44.951524 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 58
Dec 13 14:07:44.951593 kernel: pcieport 0000:00:03.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Dec 13 14:07:44.951605 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Dec 13 14:07:44.951613 kernel: ACPI: button: Power Button [PWRB]
Dec 13 14:07:44.951624 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Dec 13 14:07:44.951702 kernel: virtio-pci 0000:03:00.0: enabling device (0000 -> 0002)
Dec 13 14:07:44.951780 kernel: virtio-pci 0000:04:00.0: enabling device (0000 -> 0002)
Dec 13 14:07:44.951868 kernel: virtio-pci 0000:07:00.0: enabling device (0000 -> 0002)
Dec 13 14:07:44.951881 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Dec 13 14:07:44.951890 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35
Dec 13 14:07:44.951981 kernel: serial 0000:00:04.0: enabling device (0000 -> 0001)
Dec 13 14:07:44.951994 kernel: 0000:00:04.0: ttyS0 at I/O 0xa000 (irq = 45, base_baud = 115200) is a 16550A
Dec 13 14:07:44.952004 kernel: thunder_xcv, ver 1.0
Dec 13 14:07:44.952018 kernel: thunder_bgx, ver 1.0
Dec 13 14:07:44.952028 kernel: nicpf, ver 1.0
Dec 13 14:07:44.952036 kernel: nicvf, ver 1.0
Dec 13 14:07:44.952141 kernel: rtc-efi rtc-efi.0: registered as rtc0
Dec 13 14:07:44.952222 kernel: rtc-efi rtc-efi.0: setting system clock to 2024-12-13T14:07:44 UTC (1734098864)
Dec 13 14:07:44.952233 kernel: hid: raw HID events driver (C) Jiri Kosina
Dec 13 14:07:44.952241 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
Dec 13 14:07:44.952250 kernel: watchdog: Delayed init of the lockup detector failed: -19
Dec 13 14:07:44.952260 kernel: watchdog: Hard watchdog permanently disabled
Dec 13 14:07:44.952268 kernel: NET: Registered PF_INET6 protocol family
Dec 13 14:07:44.952277 kernel: Segment Routing with IPv6
Dec 13 14:07:44.952285 kernel: In-situ OAM (IOAM) with IPv6
Dec 13 14:07:44.952293 kernel: NET: Registered PF_PACKET protocol family
Dec 13 14:07:44.952301 kernel: Key type dns_resolver registered
Dec 13 14:07:44.952309 kernel: registered taskstats version 1
Dec 13 14:07:44.952318 kernel: Loading compiled-in X.509 certificates
Dec 13 14:07:44.952326 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.65-flatcar: d83da9ddb9e3c2439731828371f21d0232fd9ffb'
Dec 13 14:07:44.952336 kernel: Key type .fscrypt registered
Dec 13 14:07:44.952344 kernel: Key type fscrypt-provisioning registered
Dec 13 14:07:44.952353 kernel: ima: No TPM chip found, activating TPM-bypass!
Dec 13 14:07:44.952361 kernel: ima: Allocated hash algorithm: sha1
Dec 13 14:07:44.952369 kernel: ima: No architecture policies found
Dec 13 14:07:44.952378 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Dec 13 14:07:44.952386 kernel: clk: Disabling unused clocks
Dec 13 14:07:44.952394 kernel: Freeing unused kernel memory: 39360K
Dec 13 14:07:44.952402 kernel: Run /init as init process
Dec 13 14:07:44.952412 kernel: with arguments:
Dec 13 14:07:44.952420 kernel: /init
Dec 13 14:07:44.952428 kernel: with environment:
Dec 13 14:07:44.952436 kernel: HOME=/
Dec 13 14:07:44.952444 kernel: TERM=linux
Dec 13 14:07:44.952452 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Dec 13 14:07:44.952463 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Dec 13 14:07:44.952473 systemd[1]: Detected virtualization kvm.
Dec 13 14:07:44.952484 systemd[1]: Detected architecture arm64.
Dec 13 14:07:44.952493 systemd[1]: Running in initrd.
Dec 13 14:07:44.952501 systemd[1]: No hostname configured, using default hostname.
Dec 13 14:07:44.952510 systemd[1]: Hostname set to .
Dec 13 14:07:44.952519 systemd[1]: Initializing machine ID from VM UUID.
Dec 13 14:07:44.952528 systemd[1]: Queued start job for default target initrd.target.
Dec 13 14:07:44.952537 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 13 14:07:44.952546 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 13 14:07:44.952557 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Dec 13 14:07:44.952566 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Dec 13 14:07:44.952575 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Dec 13 14:07:44.952588 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Dec 13 14:07:44.952600 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Dec 13 14:07:44.952611 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Dec 13 14:07:44.952624 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 13 14:07:44.952635 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Dec 13 14:07:44.952645 systemd[1]: Reached target paths.target - Path Units.
Dec 13 14:07:44.952656 systemd[1]: Reached target slices.target - Slice Units.
Dec 13 14:07:44.952666 systemd[1]: Reached target swap.target - Swaps.
Dec 13 14:07:44.952677 systemd[1]: Reached target timers.target - Timer Units.
Dec 13 14:07:44.952688 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Dec 13 14:07:44.952698 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Dec 13 14:07:44.952708 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Dec 13 14:07:44.952721 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Dec 13 14:07:44.952732 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Dec 13 14:07:44.952741 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Dec 13 14:07:44.952752 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 13 14:07:44.952763 systemd[1]: Reached target sockets.target - Socket Units.
Dec 13 14:07:44.952775 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Dec 13 14:07:44.952786 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Dec 13 14:07:44.952796 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Dec 13 14:07:44.952808 systemd[1]: Starting systemd-fsck-usr.service...
Dec 13 14:07:44.952818 systemd[1]: Starting systemd-journald.service - Journal Service...
Dec 13 14:07:44.952827 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Dec 13 14:07:44.952835 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 13 14:07:44.952845 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Dec 13 14:07:44.952854 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 13 14:07:44.952873 systemd[1]: Finished systemd-fsck-usr.service.
Dec 13 14:07:44.952885 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Dec 13 14:07:44.952916 systemd-journald[237]: Collecting audit messages is disabled.
Dec 13 14:07:44.952941 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Dec 13 14:07:44.952951 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Dec 13 14:07:44.952961 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 14:07:44.952970 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Dec 13 14:07:44.952980 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Dec 13 14:07:44.952989 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 13 14:07:44.952999 kernel: Bridge firewalling registered
Dec 13 14:07:44.953008 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Dec 13 14:07:44.953018 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Dec 13 14:07:44.953028 systemd-journald[237]: Journal started
Dec 13 14:07:44.953049 systemd-journald[237]: Runtime Journal (/run/log/journal/0bec383efd974b8eb12b647caca2af05) is 8.0M, max 76.5M, 68.5M free.
Dec 13 14:07:44.955586 systemd[1]: Started systemd-journald.service - Journal Service.
Dec 13 14:07:44.911606 systemd-modules-load[238]: Inserted module 'overlay'
Dec 13 14:07:44.936942 systemd-modules-load[238]: Inserted module 'br_netfilter'
Dec 13 14:07:44.957488 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Dec 13 14:07:44.963523 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Dec 13 14:07:44.965053 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 13 14:07:44.978249 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Dec 13 14:07:44.981282 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 13 14:07:44.993347 dracut-cmdline[271]: dracut-dracut-053
Dec 13 14:07:44.994722 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Dec 13 14:07:44.996026 dracut-cmdline[271]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=9494f75a68cfbdce95d0d2f9b58d6d75bc38ee5b4e31dfc2a6da695ffafefba6
Dec 13 14:07:45.026843 systemd-resolved[277]: Positive Trust Anchors:
Dec 13 14:07:45.026875 systemd-resolved[277]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Dec 13 14:07:45.026909 systemd-resolved[277]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Dec 13 14:07:45.032398 systemd-resolved[277]: Defaulting to hostname 'linux'.
Dec 13 14:07:45.034519 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Dec 13 14:07:45.036627 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Dec 13 14:07:45.089881 kernel: SCSI subsystem initialized
Dec 13 14:07:45.093886 kernel: Loading iSCSI transport class v2.0-870.
Dec 13 14:07:45.100881 kernel: iscsi: registered transport (tcp)
Dec 13 14:07:45.113894 kernel: iscsi: registered transport (qla4xxx)
Dec 13 14:07:45.113943 kernel: QLogic iSCSI HBA Driver
Dec 13 14:07:45.158066 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Dec 13 14:07:45.164035 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Dec 13 14:07:45.183901 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Dec 13 14:07:45.183971 kernel: device-mapper: uevent: version 1.0.3
Dec 13 14:07:45.184892 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Dec 13 14:07:45.244920 kernel: raid6: neonx8 gen() 15774 MB/s
Dec 13 14:07:45.261900 kernel: raid6: neonx4 gen() 15640 MB/s
Dec 13 14:07:45.278904 kernel: raid6: neonx2 gen() 13275 MB/s
Dec 13 14:07:45.295895 kernel: raid6: neonx1 gen() 10500 MB/s
Dec 13 14:07:45.312906 kernel: raid6: int64x8 gen() 6955 MB/s
Dec 13 14:07:45.329895 kernel: raid6: int64x4 gen() 7346 MB/s
Dec 13 14:07:45.346924 kernel: raid6: int64x2 gen() 6139 MB/s
Dec 13 14:07:45.363900 kernel: raid6: int64x1 gen() 5061 MB/s
Dec 13 14:07:45.364017 kernel: raid6: using algorithm neonx8 gen() 15774 MB/s
Dec 13 14:07:45.380917 kernel: raid6: .... xor() 11928 MB/s, rmw enabled
Dec 13 14:07:45.380961 kernel: raid6: using neon recovery algorithm
Dec 13 14:07:45.386068 kernel: xor: measuring software checksum speed
Dec 13 14:07:45.386130 kernel: 8regs : 19783 MB/sec
Dec 13 14:07:45.386158 kernel: 32regs : 19674 MB/sec
Dec 13 14:07:45.386206 kernel: arm64_neon : 27150 MB/sec
Dec 13 14:07:45.386238 kernel: xor: using function: arm64_neon (27150 MB/sec)
Dec 13 14:07:45.439013 kernel: Btrfs loaded, zoned=no, fsverity=no
Dec 13 14:07:45.456437 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Dec 13 14:07:45.463061 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 13 14:07:45.489366 systemd-udevd[456]: Using default interface naming scheme 'v255'.
Dec 13 14:07:45.493006 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 13 14:07:45.504722 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Dec 13 14:07:45.520547 dracut-pre-trigger[465]: rd.md=0: removing MD RAID activation
Dec 13 14:07:45.554889 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Dec 13 14:07:45.564159 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Dec 13 14:07:45.619091 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 13 14:07:45.627186 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Dec 13 14:07:45.654724 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Dec 13 14:07:45.656533 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Dec 13 14:07:45.658446 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 13 14:07:45.659963 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Dec 13 14:07:45.665034 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Dec 13 14:07:45.684181 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Dec 13 14:07:45.720082 kernel: scsi host0: Virtio SCSI HBA
Dec 13 14:07:45.777282 kernel: scsi 0:0:0:0: CD-ROM QEMU QEMU CD-ROM 2.5+ PQ: 0 ANSI: 5
Dec 13 14:07:45.777403 kernel: scsi 0:0:0:1: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5
Dec 13 14:07:45.783422 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Dec 13 14:07:45.783760 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 13 14:07:45.785106 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Dec 13 14:07:45.786560 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 13 14:07:45.786697 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 14:07:45.795460 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Dec 13 14:07:45.804198 kernel: ACPI: bus type USB registered
Dec 13 14:07:45.804286 kernel: usbcore: registered new interface driver usbfs
Dec 13 14:07:45.804330 kernel: usbcore: registered new interface driver hub
Dec 13 14:07:45.804369 kernel: usbcore: registered new device driver usb
Dec 13 14:07:45.806277 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 13 14:07:45.828405 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 14:07:45.838420 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Dec 13 14:07:45.841919 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller
Dec 13 14:07:45.857746 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 1
Dec 13 14:07:45.857902 kernel: xhci_hcd 0000:02:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010
Dec 13 14:07:45.858003 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller
Dec 13 14:07:45.858085 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 2
Dec 13 14:07:45.858171 kernel: xhci_hcd 0000:02:00.0: Host supports USB 3.0 SuperSpeed
Dec 13 14:07:45.858276 kernel: hub 1-0:1.0: USB hub found
Dec 13 14:07:45.858382 kernel: hub 1-0:1.0: 4 ports detected
Dec 13 14:07:45.858467 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM.
Dec 13 14:07:45.858568 kernel: sd 0:0:0:1: Power-on or device reset occurred
Dec 13 14:07:45.866156 kernel: sr 0:0:0:0: Power-on or device reset occurred
Dec 13 14:07:45.866312 kernel: sd 0:0:0:1: [sda] 80003072 512-byte logical blocks: (41.0 GB/38.1 GiB)
Dec 13 14:07:45.866415 kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 16x/50x cd/rw xa/form2 cdda tray
Dec 13 14:07:45.866503 kernel: sd 0:0:0:1: [sda] Write Protect is off
Dec 13 14:07:45.866587 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Dec 13 14:07:45.866608 kernel: sd 0:0:0:1: [sda] Mode Sense: 63 00 00 08
Dec 13 14:07:45.866702 kernel: hub 2-0:1.0: USB hub found
Dec 13 14:07:45.866809 kernel: hub 2-0:1.0: 4 ports detected
Dec 13 14:07:45.867260 kernel: sd 0:0:0:1: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Dec 13 14:07:45.867378 kernel: sr 0:0:0:0: Attached scsi CD-ROM sr0
Dec 13 14:07:45.867470 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Dec 13 14:07:45.867482 kernel: GPT:17805311 != 80003071
Dec 13 14:07:45.867492 kernel: GPT:Alternate GPT header not at the end of the disk.
Dec 13 14:07:45.867507 kernel: GPT:17805311 != 80003071
Dec 13 14:07:45.867517 kernel: GPT: Use GNU Parted to correct GPT errors.
Dec 13 14:07:45.867527 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Dec 13 14:07:45.867537 kernel: sd 0:0:0:1: [sda] Attached SCSI disk
Dec 13 14:07:45.861235 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 13 14:07:45.915198 kernel: BTRFS: device fsid 2893cd1e-612b-4262-912c-10787dc9c881 devid 1 transid 46 /dev/sda3 scanned by (udev-worker) (503)
Dec 13 14:07:45.916889 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by (udev-worker) (507)
Dec 13 14:07:45.925984 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM.
Dec 13 14:07:45.931453 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT.
Dec 13 14:07:45.936588 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A.
Dec 13 14:07:45.937358 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A.
Dec 13 14:07:45.943884 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
Dec 13 14:07:45.950058 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Dec 13 14:07:45.954887 disk-uuid[572]: Primary Header is updated.
Dec 13 14:07:45.954887 disk-uuid[572]: Secondary Entries is updated.
Dec 13 14:07:45.954887 disk-uuid[572]: Secondary Header is updated.
Dec 13 14:07:45.957931 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Dec 13 14:07:46.092893 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd
Dec 13 14:07:46.334895 kernel: usb 1-2: new high-speed USB device number 3 using xhci_hcd
Dec 13 14:07:46.475993 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input1
Dec 13 14:07:46.476049 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:02:00.0-1/input0
Dec 13 14:07:46.477028 kernel: input: QEMU QEMU USB Keyboard as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-2/1-2:1.0/0003:0627:0001.0002/input/input2
Dec 13 14:07:46.532015 kernel: hid-generic 0003:0627:0001.0002: input,hidraw1: USB HID v1.11 Keyboard [QEMU QEMU USB Keyboard] on usb-0000:02:00.0-2/input0
Dec 13 14:07:46.532355 kernel: usbcore: registered new interface driver usbhid
Dec 13 14:07:46.532380 kernel: usbhid: USB HID core driver
Dec 13 14:07:46.976938 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Dec 13 14:07:46.980163 disk-uuid[574]: The operation has completed successfully.
Dec 13 14:07:47.028833 systemd[1]: disk-uuid.service: Deactivated successfully.
Dec 13 14:07:47.029549 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Dec 13 14:07:47.041031 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Dec 13 14:07:47.048340 sh[591]: Success
Dec 13 14:07:47.066170 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Dec 13 14:07:47.119578 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Dec 13 14:07:47.132657 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Dec 13 14:07:47.134142 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Dec 13 14:07:47.161951 kernel: BTRFS info (device dm-0): first mount of filesystem 2893cd1e-612b-4262-912c-10787dc9c881
Dec 13 14:07:47.162030 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Dec 13 14:07:47.162048 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Dec 13 14:07:47.162948 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Dec 13 14:07:47.163013 kernel: BTRFS info (device dm-0): using free space tree
Dec 13 14:07:47.168898 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Dec 13 14:07:47.170595 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Dec 13 14:07:47.171982 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Dec 13 14:07:47.177025 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Dec 13 14:07:47.180072 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Dec 13 14:07:47.189222 kernel: BTRFS info (device sda6): first mount of filesystem dbef6a22-a801-4c1e-a0cd-3fc525f899dd
Dec 13 14:07:47.189258 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Dec 13 14:07:47.189269 kernel: BTRFS info (device sda6): using free space tree
Dec 13 14:07:47.191889 kernel: BTRFS info (device sda6): enabling ssd optimizations
Dec 13 14:07:47.191922 kernel: BTRFS info (device sda6): auto enabling async discard
Dec 13 14:07:47.200206 systemd[1]: mnt-oem.mount: Deactivated successfully.
Dec 13 14:07:47.201915 kernel: BTRFS info (device sda6): last unmount of filesystem dbef6a22-a801-4c1e-a0cd-3fc525f899dd
Dec 13 14:07:47.208437 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Dec 13 14:07:47.213102 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Dec 13 14:07:47.294243 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Dec 13 14:07:47.300272 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Dec 13 14:07:47.335695 systemd-networkd[778]: lo: Link UP
Dec 13 14:07:47.336039 systemd-networkd[778]: lo: Gained carrier
Dec 13 14:07:47.338196 systemd-networkd[778]: Enumeration completed
Dec 13 14:07:47.338299 systemd[1]: Started systemd-networkd.service - Network Configuration.
Dec 13 14:07:47.339177 systemd[1]: Reached target network.target - Network.
Dec 13 14:07:47.340522 systemd-networkd[778]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 13 14:07:47.340526 systemd-networkd[778]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Dec 13 14:07:47.343142 ignition[670]: Ignition 2.19.0
Dec 13 14:07:47.343152 ignition[670]: Stage: fetch-offline
Dec 13 14:07:47.343675 systemd-networkd[778]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 13 14:07:47.343188 ignition[670]: no configs at "/usr/lib/ignition/base.d"
Dec 13 14:07:47.343678 systemd-networkd[778]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
Dec 13 14:07:47.343197 ignition[670]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Dec 13 14:07:47.344214 systemd-networkd[778]: eth0: Link UP
Dec 13 14:07:47.343367 ignition[670]: parsed url from cmdline: ""
Dec 13 14:07:47.344217 systemd-networkd[778]: eth0: Gained carrier
Dec 13 14:07:47.343370 ignition[670]: no config URL provided
Dec 13 14:07:47.344223 systemd-networkd[778]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 13 14:07:47.343375 ignition[670]: reading system config file "/usr/lib/ignition/user.ign"
Dec 13 14:07:47.346896 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Dec 13 14:07:47.343382 ignition[670]: no config at "/usr/lib/ignition/user.ign"
Dec 13 14:07:47.349156 systemd-networkd[778]: eth1: Link UP
Dec 13 14:07:47.343387 ignition[670]: failed to fetch config: resource requires networking
Dec 13 14:07:47.349160 systemd-networkd[778]: eth1: Gained carrier
Dec 13 14:07:47.343566 ignition[670]: Ignition finished successfully
Dec 13 14:07:47.349167 systemd-networkd[778]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 13 14:07:47.353825 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Dec 13 14:07:47.368066 ignition[782]: Ignition 2.19.0
Dec 13 14:07:47.368081 ignition[782]: Stage: fetch
Dec 13 14:07:47.368304 ignition[782]: no configs at "/usr/lib/ignition/base.d"
Dec 13 14:07:47.368314 ignition[782]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Dec 13 14:07:47.368422 ignition[782]: parsed url from cmdline: ""
Dec 13 14:07:47.368425 ignition[782]: no config URL provided
Dec 13 14:07:47.368429 ignition[782]: reading system config file "/usr/lib/ignition/user.ign"
Dec 13 14:07:47.368443 ignition[782]: no config at "/usr/lib/ignition/user.ign"
Dec 13 14:07:47.368462 ignition[782]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #1
Dec 13 14:07:47.369333 ignition[782]: GET error: Get "http://169.254.169.254/hetzner/v1/userdata": dial tcp 169.254.169.254:80: connect: network is unreachable
Dec 13 14:07:47.383970 systemd-networkd[778]: eth1: DHCPv4 address 10.0.0.3/32, gateway 10.0.0.1 acquired from 10.0.0.1
Dec 13 14:07:47.483962 systemd-networkd[778]: eth0: DHCPv4 address 168.119.51.76/32, gateway 172.31.1.1 acquired from 172.31.1.1
Dec 13 14:07:47.569663 ignition[782]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #2
Dec 13 14:07:47.576687 ignition[782]: GET result: OK
Dec 13 14:07:47.576882 ignition[782]: parsing config with SHA512: fd376d03869e7430a8c09553242e1ef3f814473f45303bfdf3994133a74ac8e3ae76025f02054e92b4fe433ab12c21fed4747002c7cfd52bdf29b306c5204c3a
Dec 13 14:07:47.583031 unknown[782]: fetched base config from "system"
Dec 13 14:07:47.583043 unknown[782]: fetched base config from "system"
Dec 13 14:07:47.583598 ignition[782]: fetch: fetch complete
Dec 13 14:07:47.583050 unknown[782]: fetched user config from "hetzner"
Dec 13 14:07:47.583605 ignition[782]: fetch: fetch passed
Dec 13 14:07:47.586606 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Dec 13 14:07:47.583652 ignition[782]: Ignition finished successfully
Dec 13 14:07:47.595118 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Dec 13 14:07:47.608632 ignition[789]: Ignition 2.19.0
Dec 13 14:07:47.608643 ignition[789]: Stage: kargs
Dec 13 14:07:47.608825 ignition[789]: no configs at "/usr/lib/ignition/base.d"
Dec 13 14:07:47.608835 ignition[789]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Dec 13 14:07:47.609780 ignition[789]: kargs: kargs passed
Dec 13 14:07:47.609824 ignition[789]: Ignition finished successfully
Dec 13 14:07:47.612110 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Dec 13 14:07:47.616187 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Dec 13 14:07:47.630754 ignition[796]: Ignition 2.19.0
Dec 13 14:07:47.630763 ignition[796]: Stage: disks
Dec 13 14:07:47.630952 ignition[796]: no configs at "/usr/lib/ignition/base.d"
Dec 13 14:07:47.630962 ignition[796]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Dec 13 14:07:47.631877 ignition[796]: disks: disks passed
Dec 13 14:07:47.631923 ignition[796]: Ignition finished successfully
Dec 13 14:07:47.634764 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Dec 13 14:07:47.635948 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Dec 13 14:07:47.637252 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Dec 13 14:07:47.638374 systemd[1]: Reached target local-fs.target - Local File Systems.
Dec 13 14:07:47.639354 systemd[1]: Reached target sysinit.target - System Initialization.
Dec 13 14:07:47.640385 systemd[1]: Reached target basic.target - Basic System.
Dec 13 14:07:47.648110 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Dec 13 14:07:47.666653 systemd-fsck[804]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks
Dec 13 14:07:47.671996 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Dec 13 14:07:47.680993 systemd[1]: Mounting sysroot.mount - /sysroot...
Dec 13 14:07:47.733893 kernel: EXT4-fs (sda9): mounted filesystem 32632247-db8d-4541-89c0-6f68c7fa7ee3 r/w with ordered data mode. Quota mode: none.
Dec 13 14:07:47.735136 systemd[1]: Mounted sysroot.mount - /sysroot.
Dec 13 14:07:47.736042 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Dec 13 14:07:47.748020 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Dec 13 14:07:47.750646 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Dec 13 14:07:47.755042 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Dec 13 14:07:47.757331 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Dec 13 14:07:47.757364 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Dec 13 14:07:47.763912 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by mount (812)
Dec 13 14:07:47.763938 kernel: BTRFS info (device sda6): first mount of filesystem dbef6a22-a801-4c1e-a0cd-3fc525f899dd
Dec 13 14:07:47.763949 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Dec 13 14:07:47.763959 kernel: BTRFS info (device sda6): using free space tree
Dec 13 14:07:47.763967 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Dec 13 14:07:47.768905 kernel: BTRFS info (device sda6): enabling ssd optimizations
Dec 13 14:07:47.768952 kernel: BTRFS info (device sda6): auto enabling async discard
Dec 13 14:07:47.770012 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Dec 13 14:07:47.771527 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Dec 13 14:07:47.836346 initrd-setup-root[840]: cut: /sysroot/etc/passwd: No such file or directory
Dec 13 14:07:47.846596 coreos-metadata[814]: Dec 13 14:07:47.846 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/hostname: Attempt #1
Dec 13 14:07:47.849300 coreos-metadata[814]: Dec 13 14:07:47.848 INFO Fetch successful
Dec 13 14:07:47.849300 coreos-metadata[814]: Dec 13 14:07:47.849 INFO wrote hostname ci-4081-2-1-a-7dfc9bce8d to /sysroot/etc/hostname
Dec 13 14:07:47.853277 initrd-setup-root[847]: cut: /sysroot/etc/group: No such file or directory
Dec 13 14:07:47.854825 initrd-setup-root[855]: cut: /sysroot/etc/shadow: No such file or directory
Dec 13 14:07:47.855072 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Dec 13 14:07:47.861264 initrd-setup-root[862]: cut: /sysroot/etc/gshadow: No such file or directory
Dec 13 14:07:47.964965 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Dec 13 14:07:47.971068 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Dec 13 14:07:47.977062 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Dec 13 14:07:47.984882 kernel: BTRFS info (device sda6): last unmount of filesystem dbef6a22-a801-4c1e-a0cd-3fc525f899dd
Dec 13 14:07:48.011315 ignition[930]: INFO : Ignition 2.19.0
Dec 13 14:07:48.011315 ignition[930]: INFO : Stage: mount
Dec 13 14:07:48.012838 ignition[930]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 13 14:07:48.012838 ignition[930]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Dec 13 14:07:48.013208 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Dec 13 14:07:48.014731 ignition[930]: INFO : mount: mount passed
Dec 13 14:07:48.014731 ignition[930]: INFO : Ignition finished successfully
Dec 13 14:07:48.015321 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Dec 13 14:07:48.018969 systemd[1]: Starting ignition-files.service - Ignition (files)...
Dec 13 14:07:48.162912 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Dec 13 14:07:48.177209 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Dec 13 14:07:48.186903 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (941)
Dec 13 14:07:48.189099 kernel: BTRFS info (device sda6): first mount of filesystem dbef6a22-a801-4c1e-a0cd-3fc525f899dd
Dec 13 14:07:48.189151 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Dec 13 14:07:48.189176 kernel: BTRFS info (device sda6): using free space tree
Dec 13 14:07:48.191889 kernel: BTRFS info (device sda6): enabling ssd optimizations
Dec 13 14:07:48.191933 kernel: BTRFS info (device sda6): auto enabling async discard
Dec 13 14:07:48.195239 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Dec 13 14:07:48.219318 ignition[958]: INFO : Ignition 2.19.0
Dec 13 14:07:48.219318 ignition[958]: INFO : Stage: files
Dec 13 14:07:48.220421 ignition[958]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 13 14:07:48.220421 ignition[958]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Dec 13 14:07:48.222908 ignition[958]: DEBUG : files: compiled without relabeling support, skipping
Dec 13 14:07:48.222908 ignition[958]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Dec 13 14:07:48.222908 ignition[958]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Dec 13 14:07:48.227586 ignition[958]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Dec 13 14:07:48.227586 ignition[958]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Dec 13 14:07:48.227586 ignition[958]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Dec 13 14:07:48.227586 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Dec 13 14:07:48.227586 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Dec 13 14:07:48.225126 unknown[958]: wrote ssh authorized keys file for user: core
Dec 13 14:07:48.308734 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Dec 13 14:07:48.553129 systemd-networkd[778]: eth1: Gained IPv6LL
Dec 13 14:07:48.873152 systemd-networkd[778]: eth0: Gained IPv6LL
Dec 13 14:07:49.208591 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Dec 13 14:07:49.208591 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Dec 13 14:07:49.210697 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1
Dec 13 14:07:49.763737 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Dec 13 14:07:49.926930 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Dec 13 14:07:49.929087 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Dec 13 14:07:49.929087 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Dec 13 14:07:49.929087 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Dec 13 14:07:49.929087 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Dec 13 14:07:49.929087 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Dec 13 14:07:49.929087 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Dec 13 14:07:49.929087 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Dec 13 14:07:49.929087 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Dec 13 14:07:49.929087 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Dec 13 14:07:49.929087 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Dec 13 14:07:49.929087 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Dec 13 14:07:49.929087 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Dec 13 14:07:49.929087 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Dec 13 14:07:49.929087 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-arm64.raw: attempt #1
Dec 13 14:07:50.473183 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Dec 13 14:07:50.779788 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Dec 13 14:07:50.779788 ignition[958]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Dec 13 14:07:50.784973 ignition[958]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Dec 13 14:07:50.784973 ignition[958]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Dec 13 14:07:50.784973 ignition[958]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Dec 13 14:07:50.784973 ignition[958]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
Dec 13 14:07:50.784973 ignition[958]: INFO : files: op(e): op(f): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Dec 13 14:07:50.784973 ignition[958]: INFO : files: op(e): op(f): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Dec 13 14:07:50.784973 ignition[958]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
Dec 13 14:07:50.784973 ignition[958]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service"
Dec 13 14:07:50.784973 ignition[958]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service"
Dec 13 14:07:50.784973 ignition[958]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json"
Dec 13 14:07:50.784973 ignition[958]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json"
Dec 13 14:07:50.784973 ignition[958]: INFO : files: files passed
Dec 13 14:07:50.784973 ignition[958]: INFO : Ignition finished successfully
Dec 13 14:07:50.784245 systemd[1]: Finished ignition-files.service - Ignition (files).
Dec 13 14:07:50.791056 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Dec 13 14:07:50.796606 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Dec 13 14:07:50.800784 systemd[1]: ignition-quench.service: Deactivated successfully.
Dec 13 14:07:50.801446 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Dec 13 14:07:50.809525 initrd-setup-root-after-ignition[986]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Dec 13 14:07:50.809525 initrd-setup-root-after-ignition[986]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Dec 13 14:07:50.811586 initrd-setup-root-after-ignition[990]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Dec 13 14:07:50.814211 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Dec 13 14:07:50.816846 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Dec 13 14:07:50.822108 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Dec 13 14:07:50.846784 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Dec 13 14:07:50.847486 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Dec 13 14:07:50.849337 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Dec 13 14:07:50.850694 systemd[1]: Reached target initrd.target - Initrd Default Target.
Dec 13 14:07:50.851499 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Dec 13 14:07:50.856037 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Dec 13 14:07:50.870985 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Dec 13 14:07:50.878045 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Dec 13 14:07:50.888148 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Dec 13 14:07:50.889494 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 13 14:07:50.890783 systemd[1]: Stopped target timers.target - Timer Units.
Dec 13 14:07:50.891372 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Dec 13 14:07:50.891494 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Dec 13 14:07:50.893099 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Dec 13 14:07:50.893633 systemd[1]: Stopped target basic.target - Basic System.
Dec 13 14:07:50.894919 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Dec 13 14:07:50.896211 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Dec 13 14:07:50.897341 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Dec 13 14:07:50.898602 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Dec 13 14:07:50.899693 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Dec 13 14:07:50.900896 systemd[1]: Stopped target sysinit.target - System Initialization.
Dec 13 14:07:50.901959 systemd[1]: Stopped target local-fs.target - Local File Systems.
Dec 13 14:07:50.903150 systemd[1]: Stopped target swap.target - Swaps.
Dec 13 14:07:50.903936 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Dec 13 14:07:50.904059 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Dec 13 14:07:50.905234 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Dec 13 14:07:50.906297 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 13 14:07:50.907263 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Dec 13 14:07:50.908811 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 13 14:07:50.910058 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Dec 13 14:07:50.910221 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Dec 13 14:07:50.911603 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Dec 13 14:07:50.911720 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Dec 13 14:07:50.912776 systemd[1]: ignition-files.service: Deactivated successfully.
Dec 13 14:07:50.912876 systemd[1]: Stopped ignition-files.service - Ignition (files).
Dec 13 14:07:50.913763 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Dec 13 14:07:50.913855 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Dec 13 14:07:50.924135 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Dec 13 14:07:50.927115 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Dec 13 14:07:50.927735 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Dec 13 14:07:50.927911 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 13 14:07:50.928870 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Dec 13 14:07:50.928994 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Dec 13 14:07:50.938760 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Dec 13 14:07:50.944115 ignition[1010]: INFO : Ignition 2.19.0
Dec 13 14:07:50.944115 ignition[1010]: INFO : Stage: umount
Dec 13 14:07:50.946680 ignition[1010]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 13 14:07:50.946680 ignition[1010]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Dec 13 14:07:50.945024 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Dec 13 14:07:50.948828 ignition[1010]: INFO : umount: umount passed
Dec 13 14:07:50.948828 ignition[1010]: INFO : Ignition finished successfully
Dec 13 14:07:50.951451 systemd[1]: ignition-mount.service: Deactivated successfully.
Dec 13 14:07:50.951550 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Dec 13 14:07:50.952818 systemd[1]: ignition-disks.service: Deactivated successfully.
Dec 13 14:07:50.956305 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Dec 13 14:07:50.958527 systemd[1]: ignition-kargs.service: Deactivated successfully.
Dec 13 14:07:50.958576 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Dec 13 14:07:50.959428 systemd[1]: ignition-fetch.service: Deactivated successfully.
Dec 13 14:07:50.959464 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Dec 13 14:07:50.960898 systemd[1]: Stopped target network.target - Network.
Dec 13 14:07:50.962396 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Dec 13 14:07:50.962446 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Dec 13 14:07:50.963943 systemd[1]: Stopped target paths.target - Path Units.
Dec 13 14:07:50.965366 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Dec 13 14:07:50.968912 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 13 14:07:50.969485 systemd[1]: Stopped target slices.target - Slice Units.
Dec 13 14:07:50.970646 systemd[1]: Stopped target sockets.target - Socket Units.
Dec 13 14:07:50.971532 systemd[1]: iscsid.socket: Deactivated successfully.
Dec 13 14:07:50.971581 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Dec 13 14:07:50.972436 systemd[1]: iscsiuio.socket: Deactivated successfully.
Dec 13 14:07:50.972475 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Dec 13 14:07:50.973474 systemd[1]: ignition-setup.service: Deactivated successfully.
Dec 13 14:07:50.973566 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Dec 13 14:07:50.975032 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Dec 13 14:07:50.975106 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Dec 13 14:07:50.977290 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Dec 13 14:07:50.979225 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Dec 13 14:07:50.982674 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Dec 13 14:07:50.983300 systemd[1]: sysroot-boot.service: Deactivated successfully.
Dec 13 14:07:50.983400 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Dec 13 14:07:50.984986 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Dec 13 14:07:50.985075 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Dec 13 14:07:50.986941 systemd-networkd[778]: eth0: DHCPv6 lease lost
Dec 13 14:07:50.990938 systemd[1]: systemd-resolved.service: Deactivated successfully.
Dec 13 14:07:50.991009 systemd-networkd[778]: eth1: DHCPv6 lease lost
Dec 13 14:07:50.991064 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Dec 13 14:07:50.993625 systemd[1]: systemd-networkd.service: Deactivated successfully.
Dec 13 14:07:50.993770 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Dec 13 14:07:50.995625 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Dec 13 14:07:50.995675 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Dec 13 14:07:51.002969 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Dec 13 14:07:51.003411 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Dec 13 14:07:51.003471 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Dec 13 14:07:51.004143 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Dec 13 14:07:51.004184 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Dec 13 14:07:51.004823 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Dec 13 14:07:51.004872 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Dec 13 14:07:51.005756 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Dec 13 14:07:51.005788 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 13 14:07:51.006781 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 13 14:07:51.020521 systemd[1]: systemd-udevd.service: Deactivated successfully.
Dec 13 14:07:51.021621 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 13 14:07:51.023190 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Dec 13 14:07:51.023247 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Dec 13 14:07:51.024252 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Dec 13 14:07:51.024284 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 13 14:07:51.025096 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Dec 13 14:07:51.025139 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Dec 13 14:07:51.026697 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Dec 13 14:07:51.026746 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Dec 13 14:07:51.028339 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Dec 13 14:07:51.028387 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 13 14:07:51.035253 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Dec 13 14:07:51.035798 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Dec 13 14:07:51.035847 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 13 14:07:51.036473 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Dec 13 14:07:51.036509 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Dec 13 14:07:51.037129 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Dec 13 14:07:51.037166 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 13 14:07:51.038250 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 13 14:07:51.038582 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 14:07:51.041269 systemd[1]: network-cleanup.service: Deactivated successfully.
Dec 13 14:07:51.041364 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Dec 13 14:07:51.045282 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Dec 13 14:07:51.045395 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Dec 13 14:07:51.046791 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Dec 13 14:07:51.053256 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Dec 13 14:07:51.059990 systemd[1]: Switching root.
Dec 13 14:07:51.097979 systemd-journald[237]: Journal stopped
Dec 13 14:07:51.981042 systemd-journald[237]: Received SIGTERM from PID 1 (systemd).
Dec 13 14:07:51.981134 kernel: SELinux: policy capability network_peer_controls=1
Dec 13 14:07:51.981151 kernel: SELinux: policy capability open_perms=1
Dec 13 14:07:51.981161 kernel: SELinux: policy capability extended_socket_class=1
Dec 13 14:07:51.981171 kernel: SELinux: policy capability always_check_network=0
Dec 13 14:07:51.981181 kernel: SELinux: policy capability cgroup_seclabel=1
Dec 13 14:07:51.981191 kernel: SELinux: policy capability nnp_nosuid_transition=1
Dec 13 14:07:51.981205 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Dec 13 14:07:51.981216 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Dec 13 14:07:51.981225 kernel: audit: type=1403 audit(1734098871.260:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Dec 13 14:07:51.981238 systemd[1]: Successfully loaded SELinux policy in 32.780ms.
Dec 13 14:07:51.981262 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 9.974ms.
Dec 13 14:07:51.981274 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Dec 13 14:07:51.981285 systemd[1]: Detected virtualization kvm.
Dec 13 14:07:51.981296 systemd[1]: Detected architecture arm64.
Dec 13 14:07:51.981306 systemd[1]: Detected first boot.
Dec 13 14:07:51.981320 systemd[1]: Hostname set to .
Dec 13 14:07:51.981331 systemd[1]: Initializing machine ID from VM UUID.
Dec 13 14:07:51.981342 zram_generator::config[1052]: No configuration found.
Dec 13 14:07:51.981355 systemd[1]: Populated /etc with preset unit settings.
Dec 13 14:07:51.981366 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Dec 13 14:07:51.981376 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Dec 13 14:07:51.981387 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Dec 13 14:07:51.981399 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Dec 13 14:07:51.981410 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Dec 13 14:07:51.981421 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Dec 13 14:07:51.981432 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Dec 13 14:07:51.981444 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Dec 13 14:07:51.981456 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Dec 13 14:07:51.981467 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Dec 13 14:07:51.981478 systemd[1]: Created slice user.slice - User and Session Slice.
Dec 13 14:07:51.981489 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 13 14:07:51.981501 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 13 14:07:51.981512 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Dec 13 14:07:51.981522 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Dec 13 14:07:51.981534 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Dec 13 14:07:51.981546 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Dec 13 14:07:51.981558 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Dec 13 14:07:51.981570 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 13 14:07:51.981580 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Dec 13 14:07:51.981592 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Dec 13 14:07:51.981602 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Dec 13 14:07:51.981615 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Dec 13 14:07:51.981626 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 13 14:07:51.981637 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Dec 13 14:07:51.981648 systemd[1]: Reached target slices.target - Slice Units.
Dec 13 14:07:51.981659 systemd[1]: Reached target swap.target - Swaps.
Dec 13 14:07:51.981670 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Dec 13 14:07:51.981681 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Dec 13 14:07:51.981693 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Dec 13 14:07:51.981704 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Dec 13 14:07:51.981716 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 13 14:07:51.981727 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Dec 13 14:07:51.981738 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Dec 13 14:07:51.981749 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Dec 13 14:07:51.981760 systemd[1]: Mounting media.mount - External Media Directory...
Dec 13 14:07:51.981771 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Dec 13 14:07:51.981785 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Dec 13 14:07:51.981797 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Dec 13 14:07:51.981808 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Dec 13 14:07:51.981823 systemd[1]: Reached target machines.target - Containers.
Dec 13 14:07:51.981835 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Dec 13 14:07:51.981847 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 13 14:07:51.983892 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Dec 13 14:07:51.983929 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Dec 13 14:07:51.983947 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Dec 13 14:07:51.983958 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Dec 13 14:07:51.983969 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Dec 13 14:07:51.983981 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Dec 13 14:07:51.983992 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Dec 13 14:07:51.984003 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Dec 13 14:07:51.984014 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Dec 13 14:07:51.984025 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Dec 13 14:07:51.984036 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Dec 13 14:07:51.984049 systemd[1]: Stopped systemd-fsck-usr.service.
Dec 13 14:07:51.984061 systemd[1]: Starting systemd-journald.service - Journal Service...
Dec 13 14:07:51.984072 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Dec 13 14:07:51.984083 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Dec 13 14:07:51.984095 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Dec 13 14:07:51.984107 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Dec 13 14:07:51.984120 systemd[1]: verity-setup.service: Deactivated successfully.
Dec 13 14:07:51.984131 systemd[1]: Stopped verity-setup.service.
Dec 13 14:07:51.984147 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Dec 13 14:07:51.984161 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Dec 13 14:07:51.984172 kernel: fuse: init (API version 7.39)
Dec 13 14:07:51.984183 systemd[1]: Mounted media.mount - External Media Directory.
Dec 13 14:07:51.984194 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Dec 13 14:07:51.984206 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Dec 13 14:07:51.984217 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Dec 13 14:07:51.984229 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 13 14:07:51.984240 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Dec 13 14:07:51.984251 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Dec 13 14:07:51.984293 systemd-journald[1126]: Collecting audit messages is disabled.
Dec 13 14:07:51.984320 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 13 14:07:51.984332 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Dec 13 14:07:51.984345 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 13 14:07:51.984356 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Dec 13 14:07:51.984370 systemd-journald[1126]: Journal started
Dec 13 14:07:51.984393 systemd-journald[1126]: Runtime Journal (/run/log/journal/0bec383efd974b8eb12b647caca2af05) is 8.0M, max 76.5M, 68.5M free.
Dec 13 14:07:51.721565 systemd[1]: Queued start job for default target multi-user.target.
Dec 13 14:07:51.743703 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Dec 13 14:07:51.986891 systemd[1]: Started systemd-journald.service - Journal Service.
Dec 13 14:07:51.744222 systemd[1]: systemd-journald.service: Deactivated successfully.
Dec 13 14:07:51.987256 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Dec 13 14:07:51.987397 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Dec 13 14:07:52.001019 kernel: loop: module loaded
Dec 13 14:07:52.003879 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Dec 13 14:07:52.004831 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 13 14:07:52.005073 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Dec 13 14:07:52.005834 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Dec 13 14:07:52.006935 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Dec 13 14:07:52.007758 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Dec 13 14:07:52.012495 systemd[1]: Reached target network-pre.target - Preparation for Network.
Dec 13 14:07:52.013961 kernel: ACPI: bus type drm_connector registered
Dec 13 14:07:52.021969 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Dec 13 14:07:52.024998 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Dec 13 14:07:52.027970 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Dec 13 14:07:52.028014 systemd[1]: Reached target local-fs.target - Local File Systems.
Dec 13 14:07:52.029546 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Dec 13 14:07:52.036504 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Dec 13 14:07:52.043487 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Dec 13 14:07:52.044439 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 13 14:07:52.051095 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Dec 13 14:07:52.060313 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Dec 13 14:07:52.061216 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 13 14:07:52.063103 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Dec 13 14:07:52.065484 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Dec 13 14:07:52.072088 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Dec 13 14:07:52.080171 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Dec 13 14:07:52.084071 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Dec 13 14:07:52.089462 systemd[1]: modprobe@drm.service: Deactivated successfully.
Dec 13 14:07:52.089688 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Dec 13 14:07:52.090678 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Dec 13 14:07:52.091845 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Dec 13 14:07:52.094913 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Dec 13 14:07:52.100371 systemd-journald[1126]: Time spent on flushing to /var/log/journal/0bec383efd974b8eb12b647caca2af05 is 96.598ms for 1129 entries.
Dec 13 14:07:52.100371 systemd-journald[1126]: System Journal (/var/log/journal/0bec383efd974b8eb12b647caca2af05) is 8.0M, max 584.8M, 576.8M free.
Dec 13 14:07:52.228716 systemd-journald[1126]: Received client request to flush runtime journal.
Dec 13 14:07:52.228781 kernel: loop0: detected capacity change from 0 to 114328
Dec 13 14:07:52.228814 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Dec 13 14:07:52.228833 kernel: loop1: detected capacity change from 0 to 114432
Dec 13 14:07:52.130101 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Dec 13 14:07:52.131758 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 13 14:07:52.134348 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Dec 13 14:07:52.147984 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Dec 13 14:07:52.149917 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Dec 13 14:07:52.173269 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Dec 13 14:07:52.177641 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Dec 13 14:07:52.178342 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Dec 13 14:07:52.205231 systemd-tmpfiles[1166]: ACLs are not supported, ignoring.
Dec 13 14:07:52.205242 systemd-tmpfiles[1166]: ACLs are not supported, ignoring.
Dec 13 14:07:52.208842 udevadm[1176]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Dec 13 14:07:52.215907 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Dec 13 14:07:52.224073 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Dec 13 14:07:52.232614 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Dec 13 14:07:52.278103 kernel: loop2: detected capacity change from 0 to 8
Dec 13 14:07:52.297490 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Dec 13 14:07:52.298493 kernel: loop3: detected capacity change from 0 to 194096
Dec 13 14:07:52.306371 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Dec 13 14:07:52.335238 systemd-tmpfiles[1190]: ACLs are not supported, ignoring.
Dec 13 14:07:52.335253 systemd-tmpfiles[1190]: ACLs are not supported, ignoring.
Dec 13 14:07:52.340359 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 13 14:07:52.347954 kernel: loop4: detected capacity change from 0 to 114328
Dec 13 14:07:52.368017 kernel: loop5: detected capacity change from 0 to 114432
Dec 13 14:07:52.381900 kernel: loop6: detected capacity change from 0 to 8
Dec 13 14:07:52.385893 kernel: loop7: detected capacity change from 0 to 194096
Dec 13 14:07:52.409231 (sd-merge)[1194]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-hetzner'.
Dec 13 14:07:52.409661 (sd-merge)[1194]: Merged extensions into '/usr'.
Dec 13 14:07:52.421789 systemd[1]: Reloading requested from client PID 1165 ('systemd-sysext') (unit systemd-sysext.service)...
Dec 13 14:07:52.422024 systemd[1]: Reloading...
Dec 13 14:07:52.529880 zram_generator::config[1216]: No configuration found.
Dec 13 14:07:52.533102 ldconfig[1160]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Dec 13 14:07:52.675552 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Dec 13 14:07:52.722498 systemd[1]: Reloading finished in 300 ms.
Dec 13 14:07:52.749261 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Dec 13 14:07:52.752058 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Dec 13 14:07:52.762177 systemd[1]: Starting ensure-sysext.service...
Dec 13 14:07:52.766328 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Dec 13 14:07:52.767282 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Dec 13 14:07:52.775421 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 13 14:07:52.783482 systemd[1]: Reloading requested from client PID 1257 ('systemctl') (unit ensure-sysext.service)...
Dec 13 14:07:52.783500 systemd[1]: Reloading...
Dec 13 14:07:52.786642 systemd-tmpfiles[1258]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Dec 13 14:07:52.787222 systemd-tmpfiles[1258]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Dec 13 14:07:52.788290 systemd-tmpfiles[1258]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Dec 13 14:07:52.788634 systemd-tmpfiles[1258]: ACLs are not supported, ignoring.
Dec 13 14:07:52.788746 systemd-tmpfiles[1258]: ACLs are not supported, ignoring.
Dec 13 14:07:52.791580 systemd-tmpfiles[1258]: Detected autofs mount point /boot during canonicalization of boot.
Dec 13 14:07:52.791679 systemd-tmpfiles[1258]: Skipping /boot
Dec 13 14:07:52.807132 systemd-tmpfiles[1258]: Detected autofs mount point /boot during canonicalization of boot.
Dec 13 14:07:52.807145 systemd-tmpfiles[1258]: Skipping /boot
Dec 13 14:07:52.814390 systemd-udevd[1260]: Using default interface naming scheme 'v255'.
Dec 13 14:07:52.891887 zram_generator::config[1287]: No configuration found.
Dec 13 14:07:52.965951 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1307)
Dec 13 14:07:52.982883 kernel: BTRFS info: devid 1 device path /dev/dm-0 changed to /dev/mapper/usr scanned by (udev-worker) (1307)
Dec 13 14:07:53.031541 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Dec 13 14:07:53.089247 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped.
Dec 13 14:07:53.089429 systemd[1]: Reloading finished in 305 ms.
Dec 13 14:07:53.100524 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 13 14:07:53.101457 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 13 14:07:53.122928 kernel: mousedev: PS/2 mouse device common for all mice
Dec 13 14:07:53.132162 systemd[1]: Condition check resulted in dev-virtio\x2dports-org.qemu.guest_agent.0.device - /dev/virtio-ports/org.qemu.guest_agent.0 being skipped.
Dec 13 14:07:53.145472 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Dec 13 14:07:53.151063 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Dec 13 14:07:53.152526 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 13 14:07:53.164395 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Dec 13 14:07:53.165977 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Dec 13 14:07:53.175029 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Dec 13 14:07:53.175958 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 13 14:07:53.187073 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Dec 13 14:07:53.194055 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Dec 13 14:07:53.197361 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Dec 13 14:07:53.217005 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Dec 13 14:07:53.218248 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 13 14:07:53.221943 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Dec 13 14:07:53.223491 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 13 14:07:53.223785 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Dec 13 14:07:53.228950 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 13 14:07:53.229995 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Dec 13 14:07:53.245233 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Dec 13 14:07:53.246876 kernel: [drm] pci: virtio-gpu-pci detected at 0000:00:01.0
Dec 13 14:07:53.246941 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Dec 13 14:07:53.246960 kernel: [drm] features: -context_init
Dec 13 14:07:53.247892 kernel: [drm] number of scanouts: 1
Dec 13 14:07:53.247954 kernel: [drm] number of cap sets: 0
Dec 13 14:07:53.262887 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:01.0 on minor 0
Dec 13 14:07:53.278789 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 13 14:07:53.283560 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Dec 13 14:07:53.288079 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Dec 13 14:07:53.295260 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Dec 13 14:07:53.297537 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 13 14:07:53.304234 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 46 scanned by (udev-worker) (1297)
Dec 13 14:07:53.304322 kernel: Console: switching to colour frame buffer device 160x50
Dec 13 14:07:53.303495 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Dec 13 14:07:53.319881 kernel: virtio-pci 0000:00:01.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Dec 13 14:07:53.325681 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 13 14:07:53.326927 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Dec 13 14:07:53.329144 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Dec 13 14:07:53.331540 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 13 14:07:53.331778 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Dec 13 14:07:53.336152 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 13 14:07:53.336302 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Dec 13 14:07:53.349611 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 13 14:07:53.349796 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Dec 13 14:07:53.365795 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Dec 13 14:07:53.369263 systemd[1]: Finished ensure-sysext.service.
Dec 13 14:07:53.372969 augenrules[1406]: No rules
Dec 13 14:07:53.379452 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Dec 13 14:07:53.384637 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
Dec 13 14:07:53.386647 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 13 14:07:53.394669 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Dec 13 14:07:53.399274 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Dec 13 14:07:53.404067 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Dec 13 14:07:53.405270 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 13 14:07:53.408140 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Dec 13 14:07:53.419212 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Dec 13 14:07:53.423602 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Dec 13 14:07:53.426495 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Dec 13 14:07:53.426855 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Dec 13 14:07:53.427785 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Dec 13 14:07:53.428956 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 13 14:07:53.429108 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Dec 13 14:07:53.431042 systemd[1]: modprobe@drm.service: Deactivated successfully.
Dec 13 14:07:53.431207 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Dec 13 14:07:53.432490 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 13 14:07:53.432628 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Dec 13 14:07:53.436487 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 13 14:07:53.438978 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 14:07:53.452156 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Dec 13 14:07:53.453108 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 13 14:07:53.453310 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Dec 13 14:07:53.458074 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 13 14:07:53.486975 lvm[1432]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Dec 13 14:07:53.493695 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Dec 13 14:07:53.505921 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Dec 13 14:07:53.513302 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Dec 13 14:07:53.514618 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Dec 13 14:07:53.525063 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Dec 13 14:07:53.540419 lvm[1441]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Dec 13 14:07:53.562037 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 14:07:53.578297 systemd-networkd[1376]: lo: Link UP
Dec 13 14:07:53.578310 systemd-networkd[1376]: lo: Gained carrier
Dec 13 14:07:53.579832 systemd-networkd[1376]: Enumeration completed
Dec 13 14:07:53.580053 systemd[1]: Started systemd-networkd.service - Network Configuration.
Dec 13 14:07:53.580944 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Dec 13 14:07:53.583152 systemd-networkd[1376]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 13 14:07:53.583161 systemd-networkd[1376]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Dec 13 14:07:53.584705 systemd-networkd[1376]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 13 14:07:53.584714 systemd-networkd[1376]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
Dec 13 14:07:53.585270 systemd-networkd[1376]: eth0: Link UP
Dec 13 14:07:53.585279 systemd-networkd[1376]: eth0: Gained carrier
Dec 13 14:07:53.585294 systemd-networkd[1376]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 13 14:07:53.589145 systemd-networkd[1376]: eth1: Link UP
Dec 13 14:07:53.589159 systemd-networkd[1376]: eth1: Gained carrier
Dec 13 14:07:53.589573 systemd-networkd[1376]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 13 14:07:53.591065 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Dec 13 14:07:53.591855 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Dec 13 14:07:53.593154 systemd[1]: Reached target time-set.target - System Time Set.
Dec 13 14:07:53.599249 systemd-resolved[1377]: Positive Trust Anchors:
Dec 13 14:07:53.599276 systemd-resolved[1377]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Dec 13 14:07:53.599310 systemd-resolved[1377]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Dec 13 14:07:53.603530 systemd-resolved[1377]: Using system hostname 'ci-4081-2-1-a-7dfc9bce8d'.
Dec 13 14:07:53.605324 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Dec 13 14:07:53.606091 systemd[1]: Reached target network.target - Network.
Dec 13 14:07:53.606634 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Dec 13 14:07:53.607368 systemd[1]: Reached target sysinit.target - System Initialization.
Dec 13 14:07:53.608090 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Dec 13 14:07:53.608793 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Dec 13 14:07:53.609769 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Dec 13 14:07:53.610583 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Dec 13 14:07:53.611386 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Dec 13 14:07:53.612103 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Dec 13 14:07:53.612140 systemd[1]: Reached target paths.target - Path Units.
Dec 13 14:07:53.612632 systemd[1]: Reached target timers.target - Timer Units.
Dec 13 14:07:53.614758 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Dec 13 14:07:53.616938 systemd[1]: Starting docker.socket - Docker Socket for the API...
Dec 13 14:07:53.623139 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Dec 13 14:07:53.624411 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Dec 13 14:07:53.625142 systemd[1]: Reached target sockets.target - Socket Units.
Dec 13 14:07:53.625653 systemd[1]: Reached target basic.target - Basic System.
Dec 13 14:07:53.625948 systemd-networkd[1376]: eth1: DHCPv4 address 10.0.0.3/32, gateway 10.0.0.1 acquired from 10.0.0.1
Dec 13 14:07:53.626302 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Dec 13 14:07:53.626337 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Dec 13 14:07:53.627633 systemd-timesyncd[1423]: Network configuration changed, trying to establish connection.
Dec 13 14:07:53.628000 systemd[1]: Starting containerd.service - containerd container runtime...
Dec 13 14:07:53.631111 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Dec 13 14:07:53.636093 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Dec 13 14:07:53.638723 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Dec 13 14:07:53.648057 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Dec 13 14:07:53.648557 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Dec 13 14:07:53.655353 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Dec 13 14:07:53.657988 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Dec 13 14:07:53.666065 systemd[1]: Started qemu-guest-agent.service - QEMU Guest Agent.
Dec 13 14:07:53.668228 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Dec 13 14:07:53.670925 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Dec 13 14:07:53.672843 jq[1454]: false
Dec 13 14:07:53.673546 coreos-metadata[1452]: Dec 13 14:07:53.673 INFO Fetching http://169.254.169.254/hetzner/v1/metadata: Attempt #1
Dec 13 14:07:53.673546 coreos-metadata[1452]: Dec 13 14:07:53.673 INFO Failed to fetch: error sending request for url (http://169.254.169.254/hetzner/v1/metadata)
Dec 13 14:07:53.677170 systemd[1]: Starting systemd-logind.service - User Login Management...
Dec 13 14:07:53.688781 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Dec 13 14:07:53.689320 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Dec 13 14:07:53.696234 systemd[1]: Starting update-engine.service - Update Engine...
Dec 13 14:07:53.699437 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Dec 13 14:07:53.710890 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Dec 13 14:07:53.711092 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Dec 13 14:07:53.743735 extend-filesystems[1455]: Found loop4
Dec 13 14:07:53.747448 systemd-networkd[1376]: eth0: DHCPv4 address 168.119.51.76/32, gateway 172.31.1.1 acquired from 172.31.1.1
Dec 13 14:07:53.745588 dbus-daemon[1453]: [system] SELinux support is enabled
Dec 13 14:07:53.758024 extend-filesystems[1455]: Found loop5
Dec 13 14:07:53.758024 extend-filesystems[1455]: Found loop6
Dec 13 14:07:53.758024 extend-filesystems[1455]: Found loop7
Dec 13 14:07:53.758024 extend-filesystems[1455]: Found sda
Dec 13 14:07:53.758024 extend-filesystems[1455]: Found sda1
Dec 13 14:07:53.758024 extend-filesystems[1455]: Found sda2
Dec 13 14:07:53.758024 extend-filesystems[1455]: Found sda3
Dec 13 14:07:53.758024 extend-filesystems[1455]: Found usr
Dec 13 14:07:53.758024 extend-filesystems[1455]: Found sda4
Dec 13 14:07:53.758024 extend-filesystems[1455]: Found sda6
Dec 13 14:07:53.758024 extend-filesystems[1455]: Found sda7
Dec 13 14:07:53.758024 extend-filesystems[1455]: Found sda9
Dec 13 14:07:53.758024 extend-filesystems[1455]: Checking size of /dev/sda9
Dec 13 14:07:53.747811 systemd-timesyncd[1423]: Network configuration changed, trying to establish connection.
Dec 13 14:07:53.804516 extend-filesystems[1455]: Resized partition /dev/sda9
Dec 13 14:07:53.807176 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 9393147 blocks
Dec 13 14:07:53.749029 systemd-timesyncd[1423]: Network configuration changed, trying to establish connection.
Dec 13 14:07:53.807335 jq[1468]: true
Dec 13 14:07:53.807518 extend-filesystems[1492]: resize2fs 1.47.1 (20-May-2024)
Dec 13 14:07:53.751151 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Dec 13 14:07:53.756218 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Dec 13 14:07:53.813789 tar[1471]: linux-arm64/helm
Dec 13 14:07:53.757910 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Dec 13 14:07:53.814854 jq[1485]: true
Dec 13 14:07:53.762756 systemd[1]: motdgen.service: Deactivated successfully.
Dec 13 14:07:53.817917 update_engine[1466]: I20241213 14:07:53.811759 1466 main.cc:92] Flatcar Update Engine starting
Dec 13 14:07:53.763976 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Dec 13 14:07:53.775939 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Dec 13 14:07:53.775990 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Dec 13 14:07:53.786746 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Dec 13 14:07:53.786765 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Dec 13 14:07:53.813220 (ntainerd)[1486]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Dec 13 14:07:53.821952 systemd[1]: Started update-engine.service - Update Engine.
Dec 13 14:07:53.824234 update_engine[1466]: I20241213 14:07:53.823035 1466 update_check_scheduler.cc:74] Next update check in 7m30s
Dec 13 14:07:53.830052 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Dec 13 14:07:53.888453 systemd-logind[1463]: New seat seat0.
Dec 13 14:07:53.898501 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 46 scanned by (udev-worker) (1290)
Dec 13 14:07:53.912580 systemd-logind[1463]: Watching system buttons on /dev/input/event0 (Power Button)
Dec 13 14:07:53.912597 systemd-logind[1463]: Watching system buttons on /dev/input/event2 (QEMU QEMU USB Keyboard)
Dec 13 14:07:53.913011 systemd[1]: Started systemd-logind.service - User Login Management.
Dec 13 14:07:53.955953 kernel: EXT4-fs (sda9): resized filesystem to 9393147
Dec 13 14:07:53.988808 extend-filesystems[1492]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required
Dec 13 14:07:53.988808 extend-filesystems[1492]: old_desc_blocks = 1, new_desc_blocks = 5
Dec 13 14:07:53.988808 extend-filesystems[1492]: The filesystem on /dev/sda9 is now 9393147 (4k) blocks long.
Dec 13 14:07:53.995912 extend-filesystems[1455]: Resized filesystem in /dev/sda9
Dec 13 14:07:53.995912 extend-filesystems[1455]: Found sr0
Dec 13 14:07:53.990465 systemd[1]: extend-filesystems.service: Deactivated successfully.
Dec 13 14:07:53.990638 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Dec 13 14:07:54.023930 bash[1513]: Updated "/home/core/.ssh/authorized_keys"
Dec 13 14:07:54.022911 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Dec 13 14:07:54.033198 systemd[1]: Starting sshkeys.service...
Dec 13 14:07:54.058003 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Dec 13 14:07:54.068168 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Dec 13 14:07:54.092811 locksmithd[1498]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Dec 13 14:07:54.105979 coreos-metadata[1527]: Dec 13 14:07:54.104 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/public-keys: Attempt #1
Dec 13 14:07:54.108864 coreos-metadata[1527]: Dec 13 14:07:54.107 INFO Fetch successful
Dec 13 14:07:54.109669 unknown[1527]: wrote ssh authorized keys file for user: core
Dec 13 14:07:54.146990 update-ssh-keys[1531]: Updated "/home/core/.ssh/authorized_keys"
Dec 13 14:07:54.148477 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Dec 13 14:07:54.154232 systemd[1]: Finished sshkeys.service.
Dec 13 14:07:54.176139 containerd[1486]: time="2024-12-13T14:07:54.176035560Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Dec 13 14:07:54.221545 containerd[1486]: time="2024-12-13T14:07:54.221486360Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Dec 13 14:07:54.223412 containerd[1486]: time="2024-12-13T14:07:54.223114120Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.65-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Dec 13 14:07:54.223412 containerd[1486]: time="2024-12-13T14:07:54.223201720Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Dec 13 14:07:54.223412 containerd[1486]: time="2024-12-13T14:07:54.223218040Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Dec 13 14:07:54.223732 containerd[1486]: time="2024-12-13T14:07:54.223632720Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Dec 13 14:07:54.223732 containerd[1486]: time="2024-12-13T14:07:54.223659440Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Dec 13 14:07:54.223959 containerd[1486]: time="2024-12-13T14:07:54.223936400Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Dec 13 14:07:54.224022 containerd[1486]: time="2024-12-13T14:07:54.224009080Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Dec 13 14:07:54.224391 containerd[1486]: time="2024-12-13T14:07:54.224367360Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Dec 13 14:07:54.224450 containerd[1486]: time="2024-12-13T14:07:54.224438440Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Dec 13 14:07:54.224504 containerd[1486]: time="2024-12-13T14:07:54.224490760Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Dec 13 14:07:54.224603 containerd[1486]: time="2024-12-13T14:07:54.224588280Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Dec 13 14:07:54.224808 containerd[1486]: time="2024-12-13T14:07:54.224735640Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Dec 13 14:07:54.225370 containerd[1486]: time="2024-12-13T14:07:54.225104720Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Dec 13 14:07:54.225370 containerd[1486]: time="2024-12-13T14:07:54.225219480Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Dec 13 14:07:54.225370 containerd[1486]: time="2024-12-13T14:07:54.225233920Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Dec 13 14:07:54.225370 containerd[1486]: time="2024-12-13T14:07:54.225306080Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Dec 13 14:07:54.225370 containerd[1486]: time="2024-12-13T14:07:54.225341520Z" level=info msg="metadata content store policy set" policy=shared
Dec 13 14:07:54.229454 containerd[1486]: time="2024-12-13T14:07:54.229420080Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Dec 13 14:07:54.229573 containerd[1486]: time="2024-12-13T14:07:54.229560560Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Dec 13 14:07:54.229681 containerd[1486]: time="2024-12-13T14:07:54.229667720Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Dec 13 14:07:54.229766 containerd[1486]: time="2024-12-13T14:07:54.229754000Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Dec 13 14:07:54.229882 containerd[1486]: time="2024-12-13T14:07:54.229819320Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Dec 13 14:07:54.230092 containerd[1486]: time="2024-12-13T14:07:54.230039400Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Dec 13 14:07:54.230662 containerd[1486]: time="2024-12-13T14:07:54.230521920Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Dec 13 14:07:54.230872 containerd[1486]: time="2024-12-13T14:07:54.230783240Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Dec 13 14:07:54.231010 containerd[1486]: time="2024-12-13T14:07:54.230919400Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Dec 13 14:07:54.231010 containerd[1486]: time="2024-12-13T14:07:54.230953920Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Dec 13 14:07:54.231010 containerd[1486]: time="2024-12-13T14:07:54.230970280Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Dec 13 14:07:54.231010 containerd[1486]: time="2024-12-13T14:07:54.230983760Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Dec 13 14:07:54.231191 containerd[1486]: time="2024-12-13T14:07:54.230996840Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Dec 13 14:07:54.231191 containerd[1486]: time="2024-12-13T14:07:54.231138880Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Dec 13 14:07:54.231191 containerd[1486]: time="2024-12-13T14:07:54.231158040Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Dec 13 14:07:54.231191 containerd[1486]: time="2024-12-13T14:07:54.231173880Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Dec 13 14:07:54.231361 containerd[1486]: time="2024-12-13T14:07:54.231292480Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Dec 13 14:07:54.231361 containerd[1486]: time="2024-12-13T14:07:54.231309800Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Dec 13 14:07:54.231361 containerd[1486]: time="2024-12-13T14:07:54.231331560Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Dec 13 14:07:54.231361 containerd[1486]: time="2024-12-13T14:07:54.231344920Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Dec 13 14:07:54.231619 containerd[1486]: time="2024-12-13T14:07:54.231472640Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Dec 13 14:07:54.231619 containerd[1486]: time="2024-12-13T14:07:54.231493920Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Dec 13 14:07:54.231619 containerd[1486]: time="2024-12-13T14:07:54.231506920Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Dec 13 14:07:54.231619 containerd[1486]: time="2024-12-13T14:07:54.231525480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Dec 13 14:07:54.231619 containerd[1486]: time="2024-12-13T14:07:54.231548240Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Dec 13 14:07:54.231619 containerd[1486]: time="2024-12-13T14:07:54.231562280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Dec 13 14:07:54.231619 containerd[1486]: time="2024-12-13T14:07:54.231577160Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Dec 13 14:07:54.231619 containerd[1486]: time="2024-12-13T14:07:54.231594440Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Dec 13 14:07:54.231619 containerd[1486]: time="2024-12-13T14:07:54.231606400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Dec 13 14:07:54.232006 containerd[1486]: time="2024-12-13T14:07:54.231813560Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Dec 13 14:07:54.232006 containerd[1486]: time="2024-12-13T14:07:54.231836280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Dec 13 14:07:54.232006 containerd[1486]: time="2024-12-13T14:07:54.231853680Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Dec 13 14:07:54.232006 containerd[1486]: time="2024-12-13T14:07:54.231893680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Dec 13 14:07:54.232006 containerd[1486]: time="2024-12-13T14:07:54.231906640Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Dec 13 14:07:54.232006 containerd[1486]: time="2024-12-13T14:07:54.231922320Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Dec 13 14:07:54.232338 containerd[1486]: time="2024-12-13T14:07:54.232178040Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Dec 13 14:07:54.232470 containerd[1486]: time="2024-12-13T14:07:54.232201480Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Dec 13 14:07:54.232470 containerd[1486]: time="2024-12-13T14:07:54.232407160Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Dec 13 14:07:54.232470 containerd[1486]: time="2024-12-13T14:07:54.232423240Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Dec 13 14:07:54.232470 containerd[1486]: time="2024-12-13T14:07:54.232432840Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Dec 13 14:07:54.232470 containerd[1486]: time="2024-12-13T14:07:54.232445160Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Dec 13 14:07:54.232470 containerd[1486]: time="2024-12-13T14:07:54.232454520Z" level=info msg="NRI interface is disabled by configuration."
Dec 13 14:07:54.232749 containerd[1486]: time="2024-12-13T14:07:54.232612440Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Dec 13 14:07:54.233202 containerd[1486]: time="2024-12-13T14:07:54.233130120Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Dec 13 14:07:54.233428 containerd[1486]: time="2024-12-13T14:07:54.233259800Z" level=info msg="Connect containerd service"
Dec 13 14:07:54.233688 containerd[1486]: time="2024-12-13T14:07:54.233302400Z" level=info msg="using legacy CRI server"
Dec 13 14:07:54.233688 containerd[1486]: time="2024-12-13T14:07:54.233547480Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Dec 13 14:07:54.233688 containerd[1486]: time="2024-12-13T14:07:54.233668600Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Dec 13 14:07:54.234812 containerd[1486]: time="2024-12-13T14:07:54.234770200Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Dec 13 14:07:54.235430 containerd[1486]: time="2024-12-13T14:07:54.234958640Z" level=info msg="Start subscribing containerd event"
Dec 13 14:07:54.235430 containerd[1486]: time="2024-12-13T14:07:54.235015080Z" level=info msg="Start recovering state"
Dec 13 14:07:54.235430 containerd[1486]: time="2024-12-13T14:07:54.235076520Z" level=info msg="Start event monitor"
Dec 13 14:07:54.235430 containerd[1486]: time="2024-12-13T14:07:54.235087120Z" level=info msg="Start snapshots syncer"
Dec 13 14:07:54.235430 containerd[1486]: time="2024-12-13T14:07:54.235095600Z" level=info msg="Start cni network conf syncer for default"
Dec 13 14:07:54.235430 containerd[1486]: time="2024-12-13T14:07:54.235102440Z" level=info msg="Start streaming server"
Dec 13 14:07:54.236085 containerd[1486]: time="2024-12-13T14:07:54.236055360Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Dec 13 14:07:54.236264 containerd[1486]: time="2024-12-13T14:07:54.236197520Z" level=info msg=serving... address=/run/containerd/containerd.sock
Dec 13 14:07:54.236439 containerd[1486]: time="2024-12-13T14:07:54.236424320Z" level=info msg="containerd successfully booted in 0.061566s"
Dec 13 14:07:54.236515 systemd[1]: Started containerd.service - containerd container runtime.
Dec 13 14:07:54.467183 tar[1471]: linux-arm64/LICENSE
Dec 13 14:07:54.467271 tar[1471]: linux-arm64/README.md
Dec 13 14:07:54.478938 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Dec 13 14:07:54.579740 sshd_keygen[1482]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Dec 13 14:07:54.606445 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Dec 13 14:07:54.617179 systemd[1]: Starting issuegen.service - Generate /run/issue...
Dec 13 14:07:54.627017 systemd[1]: issuegen.service: Deactivated successfully.
Dec 13 14:07:54.627450 systemd[1]: Finished issuegen.service - Generate /run/issue.
Dec 13 14:07:54.636170 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Dec 13 14:07:54.647630 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Dec 13 14:07:54.657216 systemd[1]: Started getty@tty1.service - Getty on tty1.
Dec 13 14:07:54.659832 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0.
Dec 13 14:07:54.661673 systemd[1]: Reached target getty.target - Login Prompts.
Dec 13 14:07:54.673880 coreos-metadata[1452]: Dec 13 14:07:54.673 INFO Fetching http://169.254.169.254/hetzner/v1/metadata: Attempt #2
Dec 13 14:07:54.674989 coreos-metadata[1452]: Dec 13 14:07:54.674 INFO Fetch successful
Dec 13 14:07:54.675326 coreos-metadata[1452]: Dec 13 14:07:54.675 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/private-networks: Attempt #1
Dec 13 14:07:54.675691 coreos-metadata[1452]: Dec 13 14:07:54.675 INFO Fetch successful
Dec 13 14:07:54.726621 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Dec 13 14:07:54.728443 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Dec 13 14:07:54.761057 systemd-networkd[1376]: eth0: Gained IPv6LL
Dec 13 14:07:54.762002 systemd-timesyncd[1423]: Network configuration changed, trying to establish connection.
Dec 13 14:07:54.766390 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Dec 13 14:07:54.769193 systemd[1]: Reached target network-online.target - Network is Online.
Dec 13 14:07:54.782221 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 13 14:07:54.784957 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Dec 13 14:07:54.817357 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Dec 13 14:07:55.402113 systemd-networkd[1376]: eth1: Gained IPv6LL
Dec 13 14:07:55.402939 systemd-timesyncd[1423]: Network configuration changed, trying to establish connection.
Dec 13 14:07:55.411057 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 14:07:55.412424 systemd[1]: Reached target multi-user.target - Multi-User System.
Dec 13 14:07:55.414076 systemd[1]: Startup finished in 749ms (kernel) + 6.565s (initrd) + 4.185s (userspace) = 11.501s.
Dec 13 14:07:55.426424 (kubelet)[1583]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Dec 13 14:07:55.996523 kubelet[1583]: E1213 14:07:55.996466 1583 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 13 14:07:55.998667 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 14:07:55.998931 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 13 14:08:06.249710 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Dec 13 14:08:06.257177 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 13 14:08:06.374615 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 14:08:06.389248 (kubelet)[1603]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 14:08:06.442804 kubelet[1603]: E1213 14:08:06.442740 1603 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 14:08:06.445930 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 14:08:06.446114 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 14:08:16.696741 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Dec 13 14:08:16.712208 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 14:08:16.814655 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 14:08:16.827326 (kubelet)[1620]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 14:08:16.877359 kubelet[1620]: E1213 14:08:16.877303 1620 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 14:08:16.880467 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 14:08:16.880771 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 14:08:25.820926 systemd-timesyncd[1423]: Contacted time server 2.56.247.37:123 (2.flatcar.pool.ntp.org). 
Dec 13 14:08:25.821034 systemd-timesyncd[1423]: Initial clock synchronization to Fri 2024-12-13 14:08:25.509933 UTC. Dec 13 14:08:27.131425 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Dec 13 14:08:27.138092 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 14:08:27.247203 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 14:08:27.257295 (kubelet)[1636]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 14:08:27.302941 kubelet[1636]: E1213 14:08:27.302853 1636 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 14:08:27.307977 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 14:08:27.308249 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 14:08:37.443256 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Dec 13 14:08:37.451193 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 14:08:37.549144 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Dec 13 14:08:37.559213 (kubelet)[1653]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 14:08:37.601169 kubelet[1653]: E1213 14:08:37.601106 1653 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 14:08:37.603814 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 14:08:37.604008 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 14:08:38.957381 update_engine[1466]: I20241213 14:08:38.957226 1466 update_attempter.cc:509] Updating boot flags... Dec 13 14:08:39.003913 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 46 scanned by (udev-worker) (1669) Dec 13 14:08:39.057962 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 46 scanned by (udev-worker) (1669) Dec 13 14:08:39.113875 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 46 scanned by (udev-worker) (1669) Dec 13 14:08:47.693157 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Dec 13 14:08:47.700119 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 14:08:47.812908 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Dec 13 14:08:47.825262 (kubelet)[1689]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 14:08:47.869451 kubelet[1689]: E1213 14:08:47.869390 1689 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 14:08:47.872148 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 14:08:47.872321 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 14:08:57.943586 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6. Dec 13 14:08:57.957333 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 14:08:58.077134 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 14:08:58.083990 (kubelet)[1704]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 14:08:58.130654 kubelet[1704]: E1213 14:08:58.130607 1704 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 14:08:58.133357 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 14:08:58.133509 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 14:09:08.193078 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7. Dec 13 14:09:08.199086 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Dec 13 14:09:08.320912 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 14:09:08.339371 (kubelet)[1721]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Dec 13 14:09:08.387345 kubelet[1721]: E1213 14:09:08.387285 1721 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 13 14:09:08.390050 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 14:09:08.390222 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 13 14:09:18.443296 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 8.
Dec 13 14:09:18.456249 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 13 14:09:18.566277 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 14:09:18.580225 (kubelet)[1737]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Dec 13 14:09:18.623551 kubelet[1737]: E1213 14:09:18.623492 1737 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 13 14:09:18.626308 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 14:09:18.626500 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 13 14:09:28.693556 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 9.
Dec 13 14:09:28.702126 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 13 14:09:28.821250 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 14:09:28.830253 (kubelet)[1753]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Dec 13 14:09:28.875618 kubelet[1753]: E1213 14:09:28.875551 1753 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 13 14:09:28.879072 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 14:09:28.879388 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 13 14:09:38.943231 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 10.
Dec 13 14:09:38.952204 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 13 14:09:39.086452 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 14:09:39.099302 (kubelet)[1768]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Dec 13 14:09:39.143279 kubelet[1768]: E1213 14:09:39.143217 1768 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 13 14:09:39.146081 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 14:09:39.146264 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 13 14:09:46.320966 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Dec 13 14:09:46.329346 systemd[1]: Started sshd@0-168.119.51.76:22-139.178.68.195:35316.service - OpenSSH per-connection server daemon (139.178.68.195:35316).
Dec 13 14:09:47.330930 sshd[1778]: Accepted publickey for core from 139.178.68.195 port 35316 ssh2: RSA SHA256:xQ9dEjBaxmyM6gqQv69t+Ql8O4jhwD/hzklEIAjQOiI
Dec 13 14:09:47.333641 sshd[1778]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 14:09:47.352555 systemd-logind[1463]: New session 1 of user core.
Dec 13 14:09:47.353317 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Dec 13 14:09:47.358276 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Dec 13 14:09:47.388484 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Dec 13 14:09:47.397481 systemd[1]: Starting user@500.service - User Manager for UID 500...
Dec 13 14:09:47.403368 (systemd)[1782]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:09:47.515304 systemd[1782]: Queued start job for default target default.target.
Dec 13 14:09:47.523941 systemd[1782]: Created slice app.slice - User Application Slice.
Dec 13 14:09:47.523976 systemd[1782]: Reached target paths.target - Paths.
Dec 13 14:09:47.523989 systemd[1782]: Reached target timers.target - Timers.
Dec 13 14:09:47.525351 systemd[1782]: Starting dbus.socket - D-Bus User Message Bus Socket...
Dec 13 14:09:47.542104 systemd[1782]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Dec 13 14:09:47.542287 systemd[1782]: Reached target sockets.target - Sockets.
Dec 13 14:09:47.542312 systemd[1782]: Reached target basic.target - Basic System.
Dec 13 14:09:47.542380 systemd[1782]: Reached target default.target - Main User Target.
Dec 13 14:09:47.542428 systemd[1782]: Startup finished in 129ms.
Dec 13 14:09:47.542452 systemd[1]: Started user@500.service - User Manager for UID 500.
Dec 13 14:09:47.558255 systemd[1]: Started session-1.scope - Session 1 of User core.
Dec 13 14:09:48.256503 systemd[1]: Started sshd@1-168.119.51.76:22-139.178.68.195:35332.service - OpenSSH per-connection server daemon (139.178.68.195:35332).
Dec 13 14:09:49.193419 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 11.
Dec 13 14:09:49.201156 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 13 14:09:49.239050 sshd[1793]: Accepted publickey for core from 139.178.68.195 port 35332 ssh2: RSA SHA256:xQ9dEjBaxmyM6gqQv69t+Ql8O4jhwD/hzklEIAjQOiI
Dec 13 14:09:49.243403 sshd[1793]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 14:09:49.250130 systemd-logind[1463]: New session 2 of user core.
Dec 13 14:09:49.258084 systemd[1]: Started session-2.scope - Session 2 of User core.
Dec 13 14:09:49.323163 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 14:09:49.332466 (kubelet)[1804]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Dec 13 14:09:49.378928 kubelet[1804]: E1213 14:09:49.378851 1804 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 13 14:09:49.381402 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 14:09:49.381566 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 13 14:09:49.925316 sshd[1793]: pam_unix(sshd:session): session closed for user core
Dec 13 14:09:49.929663 systemd[1]: sshd@1-168.119.51.76:22-139.178.68.195:35332.service: Deactivated successfully.
Dec 13 14:09:49.932233 systemd[1]: session-2.scope: Deactivated successfully.
Dec 13 14:09:49.933719 systemd-logind[1463]: Session 2 logged out. Waiting for processes to exit.
Dec 13 14:09:49.935181 systemd-logind[1463]: Removed session 2.
Dec 13 14:09:50.105401 systemd[1]: Started sshd@2-168.119.51.76:22-139.178.68.195:35338.service - OpenSSH per-connection server daemon (139.178.68.195:35338).
Dec 13 14:09:51.099900 sshd[1816]: Accepted publickey for core from 139.178.68.195 port 35338 ssh2: RSA SHA256:xQ9dEjBaxmyM6gqQv69t+Ql8O4jhwD/hzklEIAjQOiI
Dec 13 14:09:51.101944 sshd[1816]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 14:09:51.109211 systemd-logind[1463]: New session 3 of user core.
Dec 13 14:09:51.115254 systemd[1]: Started session-3.scope - Session 3 of User core.
Dec 13 14:09:51.784029 sshd[1816]: pam_unix(sshd:session): session closed for user core
Dec 13 14:09:51.790114 systemd-logind[1463]: Session 3 logged out. Waiting for processes to exit.
Dec 13 14:09:51.792306 systemd[1]: sshd@2-168.119.51.76:22-139.178.68.195:35338.service: Deactivated successfully.
Dec 13 14:09:51.794125 systemd[1]: session-3.scope: Deactivated successfully.
Dec 13 14:09:51.795214 systemd-logind[1463]: Removed session 3.
Dec 13 14:09:51.956211 systemd[1]: Started sshd@3-168.119.51.76:22-139.178.68.195:35348.service - OpenSSH per-connection server daemon (139.178.68.195:35348).
Dec 13 14:09:52.929897 sshd[1823]: Accepted publickey for core from 139.178.68.195 port 35348 ssh2: RSA SHA256:xQ9dEjBaxmyM6gqQv69t+Ql8O4jhwD/hzklEIAjQOiI
Dec 13 14:09:52.932129 sshd[1823]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 14:09:52.939194 systemd-logind[1463]: New session 4 of user core.
Dec 13 14:09:52.949086 systemd[1]: Started session-4.scope - Session 4 of User core.
Dec 13 14:09:53.608746 sshd[1823]: pam_unix(sshd:session): session closed for user core
Dec 13 14:09:53.613216 systemd[1]: sshd@3-168.119.51.76:22-139.178.68.195:35348.service: Deactivated successfully.
Dec 13 14:09:53.615554 systemd[1]: session-4.scope: Deactivated successfully.
Dec 13 14:09:53.617048 systemd-logind[1463]: Session 4 logged out. Waiting for processes to exit.
Dec 13 14:09:53.618557 systemd-logind[1463]: Removed session 4.
Dec 13 14:09:53.790212 systemd[1]: Started sshd@4-168.119.51.76:22-139.178.68.195:35350.service - OpenSSH per-connection server daemon (139.178.68.195:35350).
Dec 13 14:09:54.769694 sshd[1830]: Accepted publickey for core from 139.178.68.195 port 35350 ssh2: RSA SHA256:xQ9dEjBaxmyM6gqQv69t+Ql8O4jhwD/hzklEIAjQOiI
Dec 13 14:09:54.772082 sshd[1830]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 14:09:54.780153 systemd-logind[1463]: New session 5 of user core.
Dec 13 14:09:54.786126 systemd[1]: Started session-5.scope - Session 5 of User core.
Dec 13 14:09:55.301907 sudo[1833]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Dec 13 14:09:55.302167 sudo[1833]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Dec 13 14:09:55.318039 sudo[1833]: pam_unix(sudo:session): session closed for user root
Dec 13 14:09:55.478050 sshd[1830]: pam_unix(sshd:session): session closed for user core
Dec 13 14:09:55.483567 systemd[1]: sshd@4-168.119.51.76:22-139.178.68.195:35350.service: Deactivated successfully.
Dec 13 14:09:55.485830 systemd[1]: session-5.scope: Deactivated successfully.
Dec 13 14:09:55.487711 systemd-logind[1463]: Session 5 logged out. Waiting for processes to exit.
Dec 13 14:09:55.489516 systemd-logind[1463]: Removed session 5.
Dec 13 14:09:55.660281 systemd[1]: Started sshd@5-168.119.51.76:22-139.178.68.195:35352.service - OpenSSH per-connection server daemon (139.178.68.195:35352).
Dec 13 14:09:56.654265 sshd[1838]: Accepted publickey for core from 139.178.68.195 port 35352 ssh2: RSA SHA256:xQ9dEjBaxmyM6gqQv69t+Ql8O4jhwD/hzklEIAjQOiI
Dec 13 14:09:56.656583 sshd[1838]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 14:09:56.664532 systemd-logind[1463]: New session 6 of user core.
Dec 13 14:09:56.675203 systemd[1]: Started session-6.scope - Session 6 of User core.
Dec 13 14:09:57.184132 sudo[1842]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Dec 13 14:09:57.184497 sudo[1842]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Dec 13 14:09:57.188970 sudo[1842]: pam_unix(sudo:session): session closed for user root
Dec 13 14:09:57.195729 sudo[1841]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Dec 13 14:09:57.196244 sudo[1841]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Dec 13 14:09:57.211181 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Dec 13 14:09:57.213827 auditctl[1845]: No rules
Dec 13 14:09:57.214190 systemd[1]: audit-rules.service: Deactivated successfully.
Dec 13 14:09:57.214381 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Dec 13 14:09:57.217386 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Dec 13 14:09:57.264128 augenrules[1863]: No rules
Dec 13 14:09:57.265712 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Dec 13 14:09:57.267708 sudo[1841]: pam_unix(sudo:session): session closed for user root
Dec 13 14:09:57.429352 sshd[1838]: pam_unix(sshd:session): session closed for user core
Dec 13 14:09:57.433376 systemd[1]: sshd@5-168.119.51.76:22-139.178.68.195:35352.service: Deactivated successfully.
Dec 13 14:09:57.435440 systemd[1]: session-6.scope: Deactivated successfully.
Dec 13 14:09:57.438157 systemd-logind[1463]: Session 6 logged out. Waiting for processes to exit.
Dec 13 14:09:57.439110 systemd-logind[1463]: Removed session 6.
Dec 13 14:09:57.606250 systemd[1]: Started sshd@6-168.119.51.76:22-139.178.68.195:43908.service - OpenSSH per-connection server daemon (139.178.68.195:43908).
Dec 13 14:09:58.583579 sshd[1871]: Accepted publickey for core from 139.178.68.195 port 43908 ssh2: RSA SHA256:xQ9dEjBaxmyM6gqQv69t+Ql8O4jhwD/hzklEIAjQOiI
Dec 13 14:09:58.585721 sshd[1871]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 14:09:58.591949 systemd-logind[1463]: New session 7 of user core.
Dec 13 14:09:58.603178 systemd[1]: Started session-7.scope - Session 7 of User core.
Dec 13 14:09:59.109313 sudo[1874]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Dec 13 14:09:59.109901 sudo[1874]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Dec 13 14:09:59.442841 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 12.
Dec 13 14:09:59.453716 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 13 14:09:59.470059 systemd[1]: Starting docker.service - Docker Application Container Engine...
Dec 13 14:09:59.481232 (dockerd)[1893]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Dec 13 14:09:59.592280 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 14:09:59.598528 (kubelet)[1903]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Dec 13 14:09:59.648344 kubelet[1903]: E1213 14:09:59.648249 1903 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 13 14:09:59.651964 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 14:09:59.652123 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 13 14:09:59.791388 dockerd[1893]: time="2024-12-13T14:09:59.791236336Z" level=info msg="Starting up"
Dec 13 14:09:59.886693 dockerd[1893]: time="2024-12-13T14:09:59.886395631Z" level=info msg="Loading containers: start."
Dec 13 14:09:59.997996 kernel: Initializing XFRM netlink socket
Dec 13 14:10:00.067565 systemd-networkd[1376]: docker0: Link UP
Dec 13 14:10:00.092936 dockerd[1893]: time="2024-12-13T14:10:00.092291958Z" level=info msg="Loading containers: done."
Dec 13 14:10:00.114806 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1857436613-merged.mount: Deactivated successfully.
Dec 13 14:10:00.119059 dockerd[1893]: time="2024-12-13T14:10:00.118853106Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Dec 13 14:10:00.119059 dockerd[1893]: time="2024-12-13T14:10:00.118994470Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0
Dec 13 14:10:00.119318 dockerd[1893]: time="2024-12-13T14:10:00.119144433Z" level=info msg="Daemon has completed initialization"
Dec 13 14:10:00.153128 dockerd[1893]: time="2024-12-13T14:10:00.152948244Z" level=info msg="API listen on /run/docker.sock"
Dec 13 14:10:00.153207 systemd[1]: Started docker.service - Docker Application Container Engine.
Dec 13 14:10:01.362834 containerd[1486]: time="2024-12-13T14:10:01.362751060Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.8\""
Dec 13 14:10:02.024916 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1595329541.mount: Deactivated successfully.
Dec 13 14:10:04.529058 containerd[1486]: time="2024-12-13T14:10:04.528912997Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 14:10:04.530128 containerd[1486]: time="2024-12-13T14:10:04.530069508Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.8: active requests=0, bytes read=29864102"
Dec 13 14:10:04.531006 containerd[1486]: time="2024-12-13T14:10:04.530941740Z" level=info msg="ImageCreate event name:\"sha256:8202e87ffef091fe4f11dd113ff6f2ab16c70279775d224ddd8aa95e2dd0b966\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 14:10:04.534676 containerd[1486]: time="2024-12-13T14:10:04.534028115Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:f0e1b3de0c2e98e6c6abd73edf9d3b8e4d44460656cde0ebb92e2d9206961fcb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 14:10:04.536899 containerd[1486]: time="2024-12-13T14:10:04.536554093Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.8\" with image id \"sha256:8202e87ffef091fe4f11dd113ff6f2ab16c70279775d224ddd8aa95e2dd0b966\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.8\", repo digest \"registry.k8s.io/kube-apiserver@sha256:f0e1b3de0c2e98e6c6abd73edf9d3b8e4d44460656cde0ebb92e2d9206961fcb\", size \"29860810\" in 3.173751672s"
Dec 13 14:10:04.536899 containerd[1486]: time="2024-12-13T14:10:04.536619213Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.8\" returns image reference \"sha256:8202e87ffef091fe4f11dd113ff6f2ab16c70279775d224ddd8aa95e2dd0b966\""
Dec 13 14:10:04.567720 containerd[1486]: time="2024-12-13T14:10:04.567675672Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.8\""
Dec 13 14:10:06.791492 containerd[1486]: time="2024-12-13T14:10:06.791445517Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 14:10:06.793495 containerd[1486]: time="2024-12-13T14:10:06.793437584Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.8: active requests=0, bytes read=26900714"
Dec 13 14:10:06.795033 containerd[1486]: time="2024-12-13T14:10:06.794974733Z" level=info msg="ImageCreate event name:\"sha256:4b2191aa4d4d6ca9fbd7704b35401bfa6b0b90de75db22c425053e97fd5c8338\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 14:10:06.798882 containerd[1486]: time="2024-12-13T14:10:06.798790908Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:124f66b7e877eb5a80a40503057299bb60e6a5f2130905f4e3293dabf194c397\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 14:10:06.800266 containerd[1486]: time="2024-12-13T14:10:06.800117659Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.8\" with image id \"sha256:4b2191aa4d4d6ca9fbd7704b35401bfa6b0b90de75db22c425053e97fd5c8338\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.8\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:124f66b7e877eb5a80a40503057299bb60e6a5f2130905f4e3293dabf194c397\", size \"28303015\" in 2.232195509s"
Dec 13 14:10:06.800266 containerd[1486]: time="2024-12-13T14:10:06.800162178Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.8\" returns image reference \"sha256:4b2191aa4d4d6ca9fbd7704b35401bfa6b0b90de75db22c425053e97fd5c8338\""
Dec 13 14:10:06.819806 containerd[1486]: time="2024-12-13T14:10:06.819571328Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.8\""
Dec 13 14:10:08.743540 containerd[1486]: time="2024-12-13T14:10:08.743352144Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 14:10:08.745349 containerd[1486]: time="2024-12-13T14:10:08.745308533Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.8: active requests=0, bytes read=16164352"
Dec 13 14:10:08.745978 containerd[1486]: time="2024-12-13T14:10:08.745838051Z" level=info msg="ImageCreate event name:\"sha256:d43326c1723208785a33cdc1507082792eb041ca0d789c103c90180e31f65ca8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 14:10:08.749504 containerd[1486]: time="2024-12-13T14:10:08.749414672Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:c8bdeac2590c99c1a77e33995423ddb6633ff90a82a2aa455442e0a8079ef8c7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 14:10:08.751767 containerd[1486]: time="2024-12-13T14:10:08.751558421Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.8\" with image id \"sha256:d43326c1723208785a33cdc1507082792eb041ca0d789c103c90180e31f65ca8\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.8\", repo digest \"registry.k8s.io/kube-scheduler@sha256:c8bdeac2590c99c1a77e33995423ddb6633ff90a82a2aa455442e0a8079ef8c7\", size \"17566671\" in 1.931934734s"
Dec 13 14:10:08.751767 containerd[1486]: time="2024-12-13T14:10:08.751623061Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.8\" returns image reference \"sha256:d43326c1723208785a33cdc1507082792eb041ca0d789c103c90180e31f65ca8\""
Dec 13 14:10:08.774047 containerd[1486]: time="2024-12-13T14:10:08.774004145Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.8\""
Dec 13 14:10:09.693381 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 13.
Dec 13 14:10:09.700171 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 13 14:10:09.833161 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 14:10:09.834059 (kubelet)[2138]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Dec 13 14:10:09.878271 kubelet[2138]: E1213 14:10:09.877837 2138 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 13 14:10:09.880665 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 14:10:09.880796 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 13 14:10:10.108790 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2707784753.mount: Deactivated successfully.
Dec 13 14:10:10.492011 containerd[1486]: time="2024-12-13T14:10:10.490831448Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 14:10:10.494483 containerd[1486]: time="2024-12-13T14:10:10.494441674Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.8: active requests=0, bytes read=25662037"
Dec 13 14:10:10.496793 containerd[1486]: time="2024-12-13T14:10:10.496742626Z" level=info msg="ImageCreate event name:\"sha256:4612aebc0675831aedbbde7cd56b85db91f1fdcf05ef923072961538ec497adb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 14:10:10.512189 containerd[1486]: time="2024-12-13T14:10:10.512119208Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:f6d6be9417e22af78905000ac4fd134896bacd2188ea63c7cac8edd7a5d7e9b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 14:10:10.513430 containerd[1486]: time="2024-12-13T14:10:10.513385404Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.8\" with image id \"sha256:4612aebc0675831aedbbde7cd56b85db91f1fdcf05ef923072961538ec497adb\", repo tag \"registry.k8s.io/kube-proxy:v1.30.8\", repo digest \"registry.k8s.io/kube-proxy@sha256:f6d6be9417e22af78905000ac4fd134896bacd2188ea63c7cac8edd7a5d7e9b5\", size \"25661030\" in 1.73933206s"
Dec 13 14:10:10.513556 containerd[1486]: time="2024-12-13T14:10:10.513535843Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.8\" returns image reference \"sha256:4612aebc0675831aedbbde7cd56b85db91f1fdcf05ef923072961538ec497adb\""
Dec 13 14:10:10.539459 containerd[1486]: time="2024-12-13T14:10:10.539380427Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\""
Dec 13 14:10:11.101628 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3089537930.mount: Deactivated successfully.
Dec 13 14:10:12.077916 containerd[1486]: time="2024-12-13T14:10:12.077745808Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 14:10:12.078842 containerd[1486]: time="2024-12-13T14:10:12.078810685Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485461"
Dec 13 14:10:12.080104 containerd[1486]: time="2024-12-13T14:10:12.079975242Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 14:10:12.082601 containerd[1486]: time="2024-12-13T14:10:12.082547316Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 14:10:12.083911 containerd[1486]: time="2024-12-13T14:10:12.083760314Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.544331568s"
Dec 13 14:10:12.083911 containerd[1486]: time="2024-12-13T14:10:12.083799393Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\""
Dec 13 14:10:12.108624 containerd[1486]: time="2024-12-13T14:10:12.108516615Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\""
Dec 13 14:10:12.648402 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4293838998.mount: Deactivated successfully.
Dec 13 14:10:12.653285 containerd[1486]: time="2024-12-13T14:10:12.653191848Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 14:10:12.655655 containerd[1486]: time="2024-12-13T14:10:12.655571482Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=268841"
Dec 13 14:10:12.657054 containerd[1486]: time="2024-12-13T14:10:12.657005599Z" level=info msg="ImageCreate event name:\"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 14:10:12.660258 containerd[1486]: time="2024-12-13T14:10:12.660181831Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 14:10:12.661121 containerd[1486]: time="2024-12-13T14:10:12.660702790Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"268051\" in 552.145655ms"
Dec 13 14:10:12.661121 containerd[1486]: time="2024-12-13T14:10:12.660736670Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\""
Dec 13 14:10:12.684129 containerd[1486]: time="2024-12-13T14:10:12.683846735Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\""
Dec 13 14:10:13.356130 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2352423973.mount: Deactivated successfully.
Dec 13 14:10:17.522968 containerd[1486]: time="2024-12-13T14:10:17.522818624Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 14:10:17.524973 containerd[1486]: time="2024-12-13T14:10:17.524922666Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=66191552"
Dec 13 14:10:17.525854 containerd[1486]: time="2024-12-13T14:10:17.525745946Z" level=info msg="ImageCreate event name:\"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 14:10:17.530632 containerd[1486]: time="2024-12-13T14:10:17.530541709Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 14:10:17.534063 containerd[1486]: time="2024-12-13T14:10:17.533802712Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"66189079\" in 4.849899857s"
Dec 13 14:10:17.534063 containerd[1486]: time="2024-12-13T14:10:17.533892072Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\""
Dec 13 14:10:19.943214 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 14.
Dec 13 14:10:19.951318 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 13 14:10:20.063037 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 14:10:20.068222 (kubelet)[2321]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Dec 13 14:10:20.113895 kubelet[2321]: E1213 14:10:20.113222 2321 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 13 14:10:20.116272 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 14:10:20.116534 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 13 14:10:23.356197 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 14:10:23.369448 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 13 14:10:23.390625 systemd[1]: Reloading requested from client PID 2336 ('systemctl') (unit session-7.scope)...
Dec 13 14:10:23.390780 systemd[1]: Reloading...
Dec 13 14:10:23.503097 zram_generator::config[2376]: No configuration found.
Dec 13 14:10:23.603625 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Dec 13 14:10:23.670306 systemd[1]: Reloading finished in 279 ms. Dec 13 14:10:23.729928 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 14:10:23.733686 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 14:10:23.739723 systemd[1]: kubelet.service: Deactivated successfully. Dec 13 14:10:23.739978 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 14:10:23.745350 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 14:10:23.863101 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 14:10:23.877564 (kubelet)[2426]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 13 14:10:23.932428 kubelet[2426]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 14:10:23.932752 kubelet[2426]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Dec 13 14:10:23.932801 kubelet[2426]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Dec 13 14:10:23.933046 kubelet[2426]: I1213 14:10:23.933013 2426 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 13 14:10:24.758537 kubelet[2426]: I1213 14:10:24.758480 2426 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Dec 13 14:10:24.758537 kubelet[2426]: I1213 14:10:24.758543 2426 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 13 14:10:24.759223 kubelet[2426]: I1213 14:10:24.759174 2426 server.go:927] "Client rotation is on, will bootstrap in background" Dec 13 14:10:24.803891 kubelet[2426]: E1213 14:10:24.801603 2426 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://168.119.51.76:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 168.119.51.76:6443: connect: connection refused Dec 13 14:10:24.803891 kubelet[2426]: I1213 14:10:24.802833 2426 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 14:10:24.811753 kubelet[2426]: I1213 14:10:24.811730 2426 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Dec 13 14:10:24.813314 kubelet[2426]: I1213 14:10:24.813270 2426 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 13 14:10:24.813579 kubelet[2426]: I1213 14:10:24.813405 2426 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-2-1-a-7dfc9bce8d","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Dec 13 14:10:24.813760 kubelet[2426]: I1213 14:10:24.813747 2426 topology_manager.go:138] "Creating topology manager with none policy" Dec 
13 14:10:24.813815 kubelet[2426]: I1213 14:10:24.813806 2426 container_manager_linux.go:301] "Creating device plugin manager" Dec 13 14:10:24.814155 kubelet[2426]: I1213 14:10:24.814141 2426 state_mem.go:36] "Initialized new in-memory state store" Dec 13 14:10:24.817269 kubelet[2426]: I1213 14:10:24.817246 2426 kubelet.go:400] "Attempting to sync node with API server" Dec 13 14:10:24.817976 kubelet[2426]: I1213 14:10:24.817907 2426 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 13 14:10:24.819887 kubelet[2426]: I1213 14:10:24.818090 2426 kubelet.go:312] "Adding apiserver pod source" Dec 13 14:10:24.819887 kubelet[2426]: I1213 14:10:24.818108 2426 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 13 14:10:24.819887 kubelet[2426]: I1213 14:10:24.819582 2426 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Dec 13 14:10:24.820033 kubelet[2426]: I1213 14:10:24.820008 2426 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 13 14:10:24.820094 kubelet[2426]: W1213 14:10:24.820073 2426 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Dec 13 14:10:24.820905 kubelet[2426]: I1213 14:10:24.820882 2426 server.go:1264] "Started kubelet" Dec 13 14:10:24.821059 kubelet[2426]: W1213 14:10:24.821015 2426 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://168.119.51.76:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 168.119.51.76:6443: connect: connection refused Dec 13 14:10:24.821288 kubelet[2426]: E1213 14:10:24.821260 2426 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://168.119.51.76:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 168.119.51.76:6443: connect: connection refused Dec 13 14:10:24.821372 kubelet[2426]: W1213 14:10:24.821337 2426 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://168.119.51.76:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-2-1-a-7dfc9bce8d&limit=500&resourceVersion=0": dial tcp 168.119.51.76:6443: connect: connection refused Dec 13 14:10:24.821404 kubelet[2426]: E1213 14:10:24.821381 2426 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://168.119.51.76:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-2-1-a-7dfc9bce8d&limit=500&resourceVersion=0": dial tcp 168.119.51.76:6443: connect: connection refused Dec 13 14:10:24.827102 kubelet[2426]: I1213 14:10:24.827062 2426 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Dec 13 14:10:24.827875 kubelet[2426]: I1213 14:10:24.827804 2426 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 13 14:10:24.828249 kubelet[2426]: I1213 14:10:24.828228 2426 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 13 14:10:24.829217 kubelet[2426]: I1213 14:10:24.829189 2426 fs_resource_analyzer.go:67] "Starting FS 
ResourceAnalyzer" Dec 13 14:10:24.831940 kubelet[2426]: I1213 14:10:24.831920 2426 server.go:455] "Adding debug handlers to kubelet server" Dec 13 14:10:24.833198 kubelet[2426]: E1213 14:10:24.833021 2426 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://168.119.51.76:6443/api/v1/namespaces/default/events\": dial tcp 168.119.51.76:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081-2-1-a-7dfc9bce8d.1810c1e01bcc2b69 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-2-1-a-7dfc9bce8d,UID:ci-4081-2-1-a-7dfc9bce8d,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081-2-1-a-7dfc9bce8d,},FirstTimestamp:2024-12-13 14:10:24.820841321 +0000 UTC m=+0.938023327,LastTimestamp:2024-12-13 14:10:24.820841321 +0000 UTC m=+0.938023327,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-2-1-a-7dfc9bce8d,}" Dec 13 14:10:24.833713 kubelet[2426]: I1213 14:10:24.833697 2426 volume_manager.go:291] "Starting Kubelet Volume Manager" Dec 13 14:10:24.833908 kubelet[2426]: I1213 14:10:24.833894 2426 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Dec 13 14:10:24.835006 kubelet[2426]: I1213 14:10:24.834988 2426 reconciler.go:26] "Reconciler: start to sync state" Dec 13 14:10:24.835488 kubelet[2426]: W1213 14:10:24.835447 2426 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://168.119.51.76:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 168.119.51.76:6443: connect: connection refused Dec 13 14:10:24.835587 kubelet[2426]: E1213 14:10:24.835575 2426 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get 
"https://168.119.51.76:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 168.119.51.76:6443: connect: connection refused Dec 13 14:10:24.837449 kubelet[2426]: E1213 14:10:24.837410 2426 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://168.119.51.76:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-2-1-a-7dfc9bce8d?timeout=10s\": dial tcp 168.119.51.76:6443: connect: connection refused" interval="200ms" Dec 13 14:10:24.837750 kubelet[2426]: I1213 14:10:24.837730 2426 factory.go:221] Registration of the systemd container factory successfully Dec 13 14:10:24.837936 kubelet[2426]: I1213 14:10:24.837919 2426 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 13 14:10:24.838323 kubelet[2426]: E1213 14:10:24.838306 2426 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 13 14:10:24.840638 kubelet[2426]: I1213 14:10:24.840618 2426 factory.go:221] Registration of the containerd container factory successfully Dec 13 14:10:24.851060 kubelet[2426]: I1213 14:10:24.850981 2426 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 13 14:10:24.852171 kubelet[2426]: I1213 14:10:24.852125 2426 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Dec 13 14:10:24.852245 kubelet[2426]: I1213 14:10:24.852183 2426 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 13 14:10:24.852245 kubelet[2426]: I1213 14:10:24.852213 2426 kubelet.go:2337] "Starting kubelet main sync loop" Dec 13 14:10:24.852314 kubelet[2426]: E1213 14:10:24.852280 2426 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 13 14:10:24.860390 kubelet[2426]: W1213 14:10:24.860287 2426 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://168.119.51.76:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 168.119.51.76:6443: connect: connection refused Dec 13 14:10:24.860390 kubelet[2426]: E1213 14:10:24.860369 2426 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://168.119.51.76:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 168.119.51.76:6443: connect: connection refused Dec 13 14:10:24.872604 kubelet[2426]: I1213 14:10:24.872354 2426 cpu_manager.go:214] "Starting CPU manager" policy="none" Dec 13 14:10:24.872604 kubelet[2426]: I1213 14:10:24.872372 2426 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Dec 13 14:10:24.872604 kubelet[2426]: I1213 14:10:24.872389 2426 state_mem.go:36] "Initialized new in-memory state store" Dec 13 14:10:24.874356 kubelet[2426]: I1213 14:10:24.874269 2426 policy_none.go:49] "None policy: Start" Dec 13 14:10:24.875217 kubelet[2426]: I1213 14:10:24.874886 2426 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 13 14:10:24.875217 kubelet[2426]: I1213 14:10:24.874912 2426 state_mem.go:35] "Initializing new in-memory state store" Dec 13 14:10:24.880643 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. 
Dec 13 14:10:24.894019 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Dec 13 14:10:24.898593 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Dec 13 14:10:24.905213 kubelet[2426]: I1213 14:10:24.905139 2426 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 13 14:10:24.906164 kubelet[2426]: I1213 14:10:24.905922 2426 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 13 14:10:24.906494 kubelet[2426]: I1213 14:10:24.906404 2426 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 13 14:10:24.909052 kubelet[2426]: E1213 14:10:24.909029 2426 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081-2-1-a-7dfc9bce8d\" not found" Dec 13 14:10:24.937296 kubelet[2426]: I1213 14:10:24.937226 2426 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-2-1-a-7dfc9bce8d" Dec 13 14:10:24.938075 kubelet[2426]: E1213 14:10:24.937750 2426 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://168.119.51.76:6443/api/v1/nodes\": dial tcp 168.119.51.76:6443: connect: connection refused" node="ci-4081-2-1-a-7dfc9bce8d" Dec 13 14:10:24.953544 kubelet[2426]: I1213 14:10:24.953457 2426 topology_manager.go:215] "Topology Admit Handler" podUID="de35788c7594c955510759696d9a1ada" podNamespace="kube-system" podName="kube-apiserver-ci-4081-2-1-a-7dfc9bce8d" Dec 13 14:10:24.957047 kubelet[2426]: I1213 14:10:24.956912 2426 topology_manager.go:215] "Topology Admit Handler" podUID="80d9734339c36dd4b0f581c2642189cc" podNamespace="kube-system" podName="kube-controller-manager-ci-4081-2-1-a-7dfc9bce8d" Dec 13 14:10:24.959222 kubelet[2426]: I1213 14:10:24.959102 2426 topology_manager.go:215] "Topology Admit Handler" 
podUID="78fd933204f0addd68073901ef4162a1" podNamespace="kube-system" podName="kube-scheduler-ci-4081-2-1-a-7dfc9bce8d" Dec 13 14:10:24.969599 systemd[1]: Created slice kubepods-burstable-podde35788c7594c955510759696d9a1ada.slice - libcontainer container kubepods-burstable-podde35788c7594c955510759696d9a1ada.slice. Dec 13 14:10:25.001916 systemd[1]: Created slice kubepods-burstable-pod80d9734339c36dd4b0f581c2642189cc.slice - libcontainer container kubepods-burstable-pod80d9734339c36dd4b0f581c2642189cc.slice. Dec 13 14:10:25.008595 systemd[1]: Created slice kubepods-burstable-pod78fd933204f0addd68073901ef4162a1.slice - libcontainer container kubepods-burstable-pod78fd933204f0addd68073901ef4162a1.slice. Dec 13 14:10:25.037355 kubelet[2426]: I1213 14:10:25.036790 2426 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/80d9734339c36dd4b0f581c2642189cc-ca-certs\") pod \"kube-controller-manager-ci-4081-2-1-a-7dfc9bce8d\" (UID: \"80d9734339c36dd4b0f581c2642189cc\") " pod="kube-system/kube-controller-manager-ci-4081-2-1-a-7dfc9bce8d" Dec 13 14:10:25.037355 kubelet[2426]: I1213 14:10:25.036888 2426 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/80d9734339c36dd4b0f581c2642189cc-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-2-1-a-7dfc9bce8d\" (UID: \"80d9734339c36dd4b0f581c2642189cc\") " pod="kube-system/kube-controller-manager-ci-4081-2-1-a-7dfc9bce8d" Dec 13 14:10:25.037355 kubelet[2426]: I1213 14:10:25.036950 2426 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/78fd933204f0addd68073901ef4162a1-kubeconfig\") pod \"kube-scheduler-ci-4081-2-1-a-7dfc9bce8d\" (UID: \"78fd933204f0addd68073901ef4162a1\") " pod="kube-system/kube-scheduler-ci-4081-2-1-a-7dfc9bce8d" Dec 13 
14:10:25.037355 kubelet[2426]: I1213 14:10:25.036988 2426 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/de35788c7594c955510759696d9a1ada-k8s-certs\") pod \"kube-apiserver-ci-4081-2-1-a-7dfc9bce8d\" (UID: \"de35788c7594c955510759696d9a1ada\") " pod="kube-system/kube-apiserver-ci-4081-2-1-a-7dfc9bce8d" Dec 13 14:10:25.037355 kubelet[2426]: I1213 14:10:25.037032 2426 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/de35788c7594c955510759696d9a1ada-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-2-1-a-7dfc9bce8d\" (UID: \"de35788c7594c955510759696d9a1ada\") " pod="kube-system/kube-apiserver-ci-4081-2-1-a-7dfc9bce8d" Dec 13 14:10:25.037775 kubelet[2426]: I1213 14:10:25.037071 2426 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/80d9734339c36dd4b0f581c2642189cc-k8s-certs\") pod \"kube-controller-manager-ci-4081-2-1-a-7dfc9bce8d\" (UID: \"80d9734339c36dd4b0f581c2642189cc\") " pod="kube-system/kube-controller-manager-ci-4081-2-1-a-7dfc9bce8d" Dec 13 14:10:25.037775 kubelet[2426]: I1213 14:10:25.037105 2426 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/80d9734339c36dd4b0f581c2642189cc-kubeconfig\") pod \"kube-controller-manager-ci-4081-2-1-a-7dfc9bce8d\" (UID: \"80d9734339c36dd4b0f581c2642189cc\") " pod="kube-system/kube-controller-manager-ci-4081-2-1-a-7dfc9bce8d" Dec 13 14:10:25.037775 kubelet[2426]: I1213 14:10:25.037148 2426 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/80d9734339c36dd4b0f581c2642189cc-usr-share-ca-certificates\") 
pod \"kube-controller-manager-ci-4081-2-1-a-7dfc9bce8d\" (UID: \"80d9734339c36dd4b0f581c2642189cc\") " pod="kube-system/kube-controller-manager-ci-4081-2-1-a-7dfc9bce8d" Dec 13 14:10:25.037775 kubelet[2426]: I1213 14:10:25.037187 2426 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/de35788c7594c955510759696d9a1ada-ca-certs\") pod \"kube-apiserver-ci-4081-2-1-a-7dfc9bce8d\" (UID: \"de35788c7594c955510759696d9a1ada\") " pod="kube-system/kube-apiserver-ci-4081-2-1-a-7dfc9bce8d" Dec 13 14:10:25.038259 kubelet[2426]: E1213 14:10:25.038202 2426 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://168.119.51.76:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-2-1-a-7dfc9bce8d?timeout=10s\": dial tcp 168.119.51.76:6443: connect: connection refused" interval="400ms" Dec 13 14:10:25.140425 kubelet[2426]: I1213 14:10:25.139919 2426 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-2-1-a-7dfc9bce8d" Dec 13 14:10:25.140425 kubelet[2426]: E1213 14:10:25.140365 2426 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://168.119.51.76:6443/api/v1/nodes\": dial tcp 168.119.51.76:6443: connect: connection refused" node="ci-4081-2-1-a-7dfc9bce8d" Dec 13 14:10:25.296883 containerd[1486]: time="2024-12-13T14:10:25.296685951Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-2-1-a-7dfc9bce8d,Uid:de35788c7594c955510759696d9a1ada,Namespace:kube-system,Attempt:0,}" Dec 13 14:10:25.306582 containerd[1486]: time="2024-12-13T14:10:25.306427957Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-2-1-a-7dfc9bce8d,Uid:80d9734339c36dd4b0f581c2642189cc,Namespace:kube-system,Attempt:0,}" Dec 13 14:10:25.314213 containerd[1486]: time="2024-12-13T14:10:25.313569630Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-scheduler-ci-4081-2-1-a-7dfc9bce8d,Uid:78fd933204f0addd68073901ef4162a1,Namespace:kube-system,Attempt:0,}" Dec 13 14:10:25.439735 kubelet[2426]: E1213 14:10:25.439547 2426 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://168.119.51.76:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-2-1-a-7dfc9bce8d?timeout=10s\": dial tcp 168.119.51.76:6443: connect: connection refused" interval="800ms" Dec 13 14:10:25.543719 kubelet[2426]: I1213 14:10:25.543653 2426 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-2-1-a-7dfc9bce8d" Dec 13 14:10:25.544115 kubelet[2426]: E1213 14:10:25.544080 2426 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://168.119.51.76:6443/api/v1/nodes\": dial tcp 168.119.51.76:6443: connect: connection refused" node="ci-4081-2-1-a-7dfc9bce8d" Dec 13 14:10:25.758009 kubelet[2426]: W1213 14:10:25.757907 2426 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://168.119.51.76:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-2-1-a-7dfc9bce8d&limit=500&resourceVersion=0": dial tcp 168.119.51.76:6443: connect: connection refused Dec 13 14:10:25.758201 kubelet[2426]: E1213 14:10:25.758039 2426 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://168.119.51.76:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-2-1-a-7dfc9bce8d&limit=500&resourceVersion=0": dial tcp 168.119.51.76:6443: connect: connection refused Dec 13 14:10:25.853287 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1358087566.mount: Deactivated successfully. 
Dec 13 14:10:25.858707 containerd[1486]: time="2024-12-13T14:10:25.858653225Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 14:10:25.860339 containerd[1486]: time="2024-12-13T14:10:25.860297153Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269193" Dec 13 14:10:25.862927 containerd[1486]: time="2024-12-13T14:10:25.862673004Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 14:10:25.864688 containerd[1486]: time="2024-12-13T14:10:25.864584733Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Dec 13 14:10:25.866142 containerd[1486]: time="2024-12-13T14:10:25.866105100Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 14:10:25.866641 containerd[1486]: time="2024-12-13T14:10:25.866420102Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Dec 13 14:10:25.867181 containerd[1486]: time="2024-12-13T14:10:25.867150465Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 14:10:25.868506 containerd[1486]: time="2024-12-13T14:10:25.868449911Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 14:10:25.871139 
containerd[1486]: time="2024-12-13T14:10:25.870836162Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 574.00933ms" Dec 13 14:10:25.872845 containerd[1486]: time="2024-12-13T14:10:25.872731651Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 566.206854ms" Dec 13 14:10:25.874391 kubelet[2426]: W1213 14:10:25.874261 2426 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://168.119.51.76:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 168.119.51.76:6443: connect: connection refused Dec 13 14:10:25.874391 kubelet[2426]: E1213 14:10:25.874341 2426 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://168.119.51.76:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 168.119.51.76:6443: connect: connection refused Dec 13 14:10:25.876977 containerd[1486]: time="2024-12-13T14:10:25.876938991Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 562.743518ms" Dec 13 14:10:25.909107 kubelet[2426]: W1213 14:10:25.909051 2426 reflector.go:547] 
k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://168.119.51.76:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 168.119.51.76:6443: connect: connection refused Dec 13 14:10:25.909107 kubelet[2426]: E1213 14:10:25.909113 2426 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://168.119.51.76:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 168.119.51.76:6443: connect: connection refused Dec 13 14:10:25.992122 containerd[1486]: time="2024-12-13T14:10:25.991441808Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:10:25.992122 containerd[1486]: time="2024-12-13T14:10:25.991512088Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:10:25.992122 containerd[1486]: time="2024-12-13T14:10:25.991526728Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:10:25.994504 containerd[1486]: time="2024-12-13T14:10:25.993761018Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:10:25.995078 containerd[1486]: time="2024-12-13T14:10:25.994755463Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:10:25.995078 containerd[1486]: time="2024-12-13T14:10:25.994807943Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:10:25.995078 containerd[1486]: time="2024-12-13T14:10:25.994823783Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:10:25.995078 containerd[1486]: time="2024-12-13T14:10:25.994914624Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:10:26.001677 containerd[1486]: time="2024-12-13T14:10:26.000848332Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:10:26.001677 containerd[1486]: time="2024-12-13T14:10:26.000949292Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:10:26.001677 containerd[1486]: time="2024-12-13T14:10:26.000965172Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:10:26.001677 containerd[1486]: time="2024-12-13T14:10:26.001041813Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:10:26.019178 systemd[1]: Started cri-containerd-cabe4f3ff2011c993c066c875831f64d09060dc548e4b8d828f6168dd32fc3bb.scope - libcontainer container cabe4f3ff2011c993c066c875831f64d09060dc548e4b8d828f6168dd32fc3bb. Dec 13 14:10:26.023924 systemd[1]: Started cri-containerd-10ed1f064b7d0d4202bdf8f0221d9d291ac305ed2712be2e500944a6caf4c385.scope - libcontainer container 10ed1f064b7d0d4202bdf8f0221d9d291ac305ed2712be2e500944a6caf4c385. Dec 13 14:10:26.028038 systemd[1]: Started cri-containerd-a01a5252e40c756724d3ffcb7995c8bb8ad13de11510b02e9691bf96bd84d3d2.scope - libcontainer container a01a5252e40c756724d3ffcb7995c8bb8ad13de11510b02e9691bf96bd84d3d2. 
Dec 13 14:10:26.086234 containerd[1486]: time="2024-12-13T14:10:26.086063848Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-2-1-a-7dfc9bce8d,Uid:de35788c7594c955510759696d9a1ada,Namespace:kube-system,Attempt:0,} returns sandbox id \"cabe4f3ff2011c993c066c875831f64d09060dc548e4b8d828f6168dd32fc3bb\""
Dec 13 14:10:26.093477 containerd[1486]: time="2024-12-13T14:10:26.093342925Z" level=info msg="CreateContainer within sandbox \"cabe4f3ff2011c993c066c875831f64d09060dc548e4b8d828f6168dd32fc3bb\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Dec 13 14:10:26.097482 containerd[1486]: time="2024-12-13T14:10:26.097383106Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-2-1-a-7dfc9bce8d,Uid:80d9734339c36dd4b0f581c2642189cc,Namespace:kube-system,Attempt:0,} returns sandbox id \"a01a5252e40c756724d3ffcb7995c8bb8ad13de11510b02e9691bf96bd84d3d2\""
Dec 13 14:10:26.100244 kubelet[2426]: W1213 14:10:26.100190 2426 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://168.119.51.76:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 168.119.51.76:6443: connect: connection refused
Dec 13 14:10:26.100244 kubelet[2426]: E1213 14:10:26.100255 2426 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://168.119.51.76:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 168.119.51.76:6443: connect: connection refused
Dec 13 14:10:26.105017 containerd[1486]: time="2024-12-13T14:10:26.104454262Z" level=info msg="CreateContainer within sandbox \"a01a5252e40c756724d3ffcb7995c8bb8ad13de11510b02e9691bf96bd84d3d2\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Dec 13 14:10:26.112135 containerd[1486]: time="2024-12-13T14:10:26.112091701Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-2-1-a-7dfc9bce8d,Uid:78fd933204f0addd68073901ef4162a1,Namespace:kube-system,Attempt:0,} returns sandbox id \"10ed1f064b7d0d4202bdf8f0221d9d291ac305ed2712be2e500944a6caf4c385\""
Dec 13 14:10:26.114824 containerd[1486]: time="2024-12-13T14:10:26.114788515Z" level=info msg="CreateContainer within sandbox \"10ed1f064b7d0d4202bdf8f0221d9d291ac305ed2712be2e500944a6caf4c385\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Dec 13 14:10:26.119516 containerd[1486]: time="2024-12-13T14:10:26.119457219Z" level=info msg="CreateContainer within sandbox \"cabe4f3ff2011c993c066c875831f64d09060dc548e4b8d828f6168dd32fc3bb\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"a1f004b3db30e7212495f9dbd122c8d0f5abfda82197ff038832416148f5c4bc\""
Dec 13 14:10:26.120350 containerd[1486]: time="2024-12-13T14:10:26.120326183Z" level=info msg="StartContainer for \"a1f004b3db30e7212495f9dbd122c8d0f5abfda82197ff038832416148f5c4bc\""
Dec 13 14:10:26.126163 containerd[1486]: time="2024-12-13T14:10:26.125989892Z" level=info msg="CreateContainer within sandbox \"a01a5252e40c756724d3ffcb7995c8bb8ad13de11510b02e9691bf96bd84d3d2\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"f0f8b0e6f308cb7f556771917f71cdc3ac8ea44a4df13c86326e34ea8eb9d6e0\""
Dec 13 14:10:26.127010 containerd[1486]: time="2024-12-13T14:10:26.126980937Z" level=info msg="StartContainer for \"f0f8b0e6f308cb7f556771917f71cdc3ac8ea44a4df13c86326e34ea8eb9d6e0\""
Dec 13 14:10:26.135718 containerd[1486]: time="2024-12-13T14:10:26.135556021Z" level=info msg="CreateContainer within sandbox \"10ed1f064b7d0d4202bdf8f0221d9d291ac305ed2712be2e500944a6caf4c385\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"6616603ac89c6a54f5d272c5f0c3e13b61230967729c902d7d759fbff1681af6\""
Dec 13 14:10:26.137257 containerd[1486]: time="2024-12-13T14:10:26.136988548Z" level=info msg="StartContainer for \"6616603ac89c6a54f5d272c5f0c3e13b61230967729c902d7d759fbff1681af6\""
Dec 13 14:10:26.155221 systemd[1]: Started cri-containerd-a1f004b3db30e7212495f9dbd122c8d0f5abfda82197ff038832416148f5c4bc.scope - libcontainer container a1f004b3db30e7212495f9dbd122c8d0f5abfda82197ff038832416148f5c4bc.
Dec 13 14:10:26.167067 systemd[1]: Started cri-containerd-f0f8b0e6f308cb7f556771917f71cdc3ac8ea44a4df13c86326e34ea8eb9d6e0.scope - libcontainer container f0f8b0e6f308cb7f556771917f71cdc3ac8ea44a4df13c86326e34ea8eb9d6e0.
Dec 13 14:10:26.179060 systemd[1]: Started cri-containerd-6616603ac89c6a54f5d272c5f0c3e13b61230967729c902d7d759fbff1681af6.scope - libcontainer container 6616603ac89c6a54f5d272c5f0c3e13b61230967729c902d7d759fbff1681af6.
Dec 13 14:10:26.226495 containerd[1486]: time="2024-12-13T14:10:26.226259285Z" level=info msg="StartContainer for \"a1f004b3db30e7212495f9dbd122c8d0f5abfda82197ff038832416148f5c4bc\" returns successfully"
Dec 13 14:10:26.245085 kubelet[2426]: E1213 14:10:26.241282 2426 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://168.119.51.76:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-2-1-a-7dfc9bce8d?timeout=10s\": dial tcp 168.119.51.76:6443: connect: connection refused" interval="1.6s"
Dec 13 14:10:26.299755 containerd[1486]: time="2024-12-13T14:10:26.299638501Z" level=info msg="StartContainer for \"6616603ac89c6a54f5d272c5f0c3e13b61230967729c902d7d759fbff1681af6\" returns successfully"
Dec 13 14:10:26.300357 containerd[1486]: time="2024-12-13T14:10:26.299895222Z" level=info msg="StartContainer for \"f0f8b0e6f308cb7f556771917f71cdc3ac8ea44a4df13c86326e34ea8eb9d6e0\" returns successfully"
Dec 13 14:10:26.350262 kubelet[2426]: I1213 14:10:26.349670 2426 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-2-1-a-7dfc9bce8d"
Dec 13 14:10:26.350262 kubelet[2426]: E1213 14:10:26.350022 2426 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://168.119.51.76:6443/api/v1/nodes\": dial tcp 168.119.51.76:6443: connect: connection refused" node="ci-4081-2-1-a-7dfc9bce8d"
Dec 13 14:10:27.952708 kubelet[2426]: I1213 14:10:27.952089 2426 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-2-1-a-7dfc9bce8d"
Dec 13 14:10:28.639346 kubelet[2426]: E1213 14:10:28.639298 2426 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081-2-1-a-7dfc9bce8d\" not found" node="ci-4081-2-1-a-7dfc9bce8d"
Dec 13 14:10:28.664928 kubelet[2426]: I1213 14:10:28.662770 2426 kubelet_node_status.go:76] "Successfully registered node" node="ci-4081-2-1-a-7dfc9bce8d"
Dec 13 14:10:28.820917 kubelet[2426]: I1213 14:10:28.820881 2426 apiserver.go:52] "Watching apiserver"
Dec 13 14:10:28.834389 kubelet[2426]: I1213 14:10:28.834324 2426 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
Dec 13 14:10:30.841773 systemd[1]: Reloading requested from client PID 2694 ('systemctl') (unit session-7.scope)...
Dec 13 14:10:30.841797 systemd[1]: Reloading...
Dec 13 14:10:30.949887 zram_generator::config[2737]: No configuration found.
Dec 13 14:10:31.045837 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Dec 13 14:10:31.130159 systemd[1]: Reloading finished in 287 ms.
Dec 13 14:10:31.169956 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 13 14:10:31.179278 systemd[1]: kubelet.service: Deactivated successfully.
Dec 13 14:10:31.179639 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 14:10:31.179771 systemd[1]: kubelet.service: Consumed 1.361s CPU time, 113.4M memory peak, 0B memory swap peak.
Dec 13 14:10:31.192543 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 13 14:10:31.295917 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 14:10:31.301316 (kubelet)[2779]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Dec 13 14:10:31.352329 kubelet[2779]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 13 14:10:31.353207 kubelet[2779]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Dec 13 14:10:31.353207 kubelet[2779]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 13 14:10:31.353207 kubelet[2779]: I1213 14:10:31.352433 2779 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Dec 13 14:10:31.357252 kubelet[2779]: I1213 14:10:31.357225 2779 server.go:484] "Kubelet version" kubeletVersion="v1.30.1"
Dec 13 14:10:31.357390 kubelet[2779]: I1213 14:10:31.357380 2779 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Dec 13 14:10:31.357627 kubelet[2779]: I1213 14:10:31.357611 2779 server.go:927] "Client rotation is on, will bootstrap in background"
Dec 13 14:10:31.359287 kubelet[2779]: I1213 14:10:31.359261 2779 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Dec 13 14:10:31.361012 kubelet[2779]: I1213 14:10:31.360823 2779 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Dec 13 14:10:31.370911 kubelet[2779]: I1213 14:10:31.370885 2779 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Dec 13 14:10:31.371132 kubelet[2779]: I1213 14:10:31.371104 2779 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Dec 13 14:10:31.371304 kubelet[2779]: I1213 14:10:31.371133 2779 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-2-1-a-7dfc9bce8d","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Dec 13 14:10:31.371378 kubelet[2779]: I1213 14:10:31.371310 2779 topology_manager.go:138] "Creating topology manager with none policy"
Dec 13 14:10:31.371378 kubelet[2779]: I1213 14:10:31.371319 2779 container_manager_linux.go:301] "Creating device plugin manager"
Dec 13 14:10:31.371378 kubelet[2779]: I1213 14:10:31.371353 2779 state_mem.go:36] "Initialized new in-memory state store"
Dec 13 14:10:31.371468 kubelet[2779]: I1213 14:10:31.371453 2779 kubelet.go:400] "Attempting to sync node with API server"
Dec 13 14:10:31.372543 kubelet[2779]: I1213 14:10:31.371470 2779 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Dec 13 14:10:31.372543 kubelet[2779]: I1213 14:10:31.371501 2779 kubelet.go:312] "Adding apiserver pod source"
Dec 13 14:10:31.372543 kubelet[2779]: I1213 14:10:31.371518 2779 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Dec 13 14:10:31.373080 kubelet[2779]: I1213 14:10:31.373059 2779 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Dec 13 14:10:31.373232 kubelet[2779]: I1213 14:10:31.373214 2779 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Dec 13 14:10:31.373612 kubelet[2779]: I1213 14:10:31.373582 2779 server.go:1264] "Started kubelet"
Dec 13 14:10:31.375675 kubelet[2779]: I1213 14:10:31.375645 2779 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Dec 13 14:10:31.388377 kubelet[2779]: I1213 14:10:31.387494 2779 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Dec 13 14:10:31.390964 kubelet[2779]: I1213 14:10:31.390764 2779 server.go:455] "Adding debug handlers to kubelet server"
Dec 13 14:10:31.399584 kubelet[2779]: I1213 14:10:31.391039 2779 volume_manager.go:291] "Starting Kubelet Volume Manager"
Dec 13 14:10:31.413176 kubelet[2779]: I1213 14:10:31.391081 2779 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
Dec 13 14:10:31.413482 kubelet[2779]: I1213 14:10:31.413466 2779 reconciler.go:26] "Reconciler: start to sync state"
Dec 13 14:10:31.413544 kubelet[2779]: I1213 14:10:31.408003 2779 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Dec 13 14:10:31.414799 kubelet[2779]: I1213 14:10:31.414775 2779 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Dec 13 14:10:31.414946 kubelet[2779]: I1213 14:10:31.414935 2779 status_manager.go:217] "Starting to sync pod status with apiserver"
Dec 13 14:10:31.415101 kubelet[2779]: I1213 14:10:31.415029 2779 kubelet.go:2337] "Starting kubelet main sync loop"
Dec 13 14:10:31.415234 kubelet[2779]: E1213 14:10:31.415208 2779 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Dec 13 14:10:31.415655 kubelet[2779]: E1213 14:10:31.415608 2779 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Dec 13 14:10:31.416173 kubelet[2779]: I1213 14:10:31.387654 2779 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Dec 13 14:10:31.416173 kubelet[2779]: I1213 14:10:31.416001 2779 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Dec 13 14:10:31.416173 kubelet[2779]: I1213 14:10:31.416013 2779 factory.go:221] Registration of the containerd container factory successfully
Dec 13 14:10:31.416173 kubelet[2779]: I1213 14:10:31.416029 2779 factory.go:221] Registration of the systemd container factory successfully
Dec 13 14:10:31.416173 kubelet[2779]: I1213 14:10:31.416102 2779 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Dec 13 14:10:31.480177 kubelet[2779]: I1213 14:10:31.480130 2779 cpu_manager.go:214] "Starting CPU manager" policy="none"
Dec 13 14:10:31.480177 kubelet[2779]: I1213 14:10:31.480152 2779 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Dec 13 14:10:31.480177 kubelet[2779]: I1213 14:10:31.480172 2779 state_mem.go:36] "Initialized new in-memory state store"
Dec 13 14:10:31.480397 kubelet[2779]: I1213 14:10:31.480335 2779 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Dec 13 14:10:31.480435 kubelet[2779]: I1213 14:10:31.480395 2779 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Dec 13 14:10:31.480435 kubelet[2779]: I1213 14:10:31.480417 2779 policy_none.go:49] "None policy: Start"
Dec 13 14:10:31.482339 kubelet[2779]: I1213 14:10:31.481335 2779 memory_manager.go:170] "Starting memorymanager" policy="None"
Dec 13 14:10:31.482339 kubelet[2779]: I1213 14:10:31.481360 2779 state_mem.go:35] "Initializing new in-memory state store"
Dec 13 14:10:31.482339 kubelet[2779]: I1213 14:10:31.481570 2779 state_mem.go:75] "Updated machine memory state"
Dec 13 14:10:31.486684 kubelet[2779]: I1213 14:10:31.486655 2779 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Dec 13 14:10:31.486890 kubelet[2779]: I1213 14:10:31.486831 2779 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Dec 13 14:10:31.486968 kubelet[2779]: I1213 14:10:31.486953 2779 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Dec 13 14:10:31.499166 kubelet[2779]: I1213 14:10:31.499139 2779 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-2-1-a-7dfc9bce8d"
Dec 13 14:10:31.508940 kubelet[2779]: I1213 14:10:31.508819 2779 kubelet_node_status.go:112] "Node was previously registered" node="ci-4081-2-1-a-7dfc9bce8d"
Dec 13 14:10:31.508940 kubelet[2779]: I1213 14:10:31.508934 2779 kubelet_node_status.go:76] "Successfully registered node" node="ci-4081-2-1-a-7dfc9bce8d"
Dec 13 14:10:31.516409 kubelet[2779]: I1213 14:10:31.515839 2779 topology_manager.go:215] "Topology Admit Handler" podUID="de35788c7594c955510759696d9a1ada" podNamespace="kube-system" podName="kube-apiserver-ci-4081-2-1-a-7dfc9bce8d"
Dec 13 14:10:31.516409 kubelet[2779]: I1213 14:10:31.516008 2779 topology_manager.go:215] "Topology Admit Handler" podUID="80d9734339c36dd4b0f581c2642189cc" podNamespace="kube-system" podName="kube-controller-manager-ci-4081-2-1-a-7dfc9bce8d"
Dec 13 14:10:31.516409 kubelet[2779]: I1213 14:10:31.516050 2779 topology_manager.go:215] "Topology Admit Handler" podUID="78fd933204f0addd68073901ef4162a1" podNamespace="kube-system" podName="kube-scheduler-ci-4081-2-1-a-7dfc9bce8d"
Dec 13 14:10:31.532780 kubelet[2779]: E1213 14:10:31.532528 2779 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4081-2-1-a-7dfc9bce8d\" already exists" pod="kube-system/kube-apiserver-ci-4081-2-1-a-7dfc9bce8d"
Dec 13 14:10:31.614519 kubelet[2779]: I1213 14:10:31.613989 2779 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/de35788c7594c955510759696d9a1ada-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-2-1-a-7dfc9bce8d\" (UID: \"de35788c7594c955510759696d9a1ada\") " pod="kube-system/kube-apiserver-ci-4081-2-1-a-7dfc9bce8d"
Dec 13 14:10:31.614519 kubelet[2779]: I1213 14:10:31.614047 2779 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/80d9734339c36dd4b0f581c2642189cc-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-2-1-a-7dfc9bce8d\" (UID: \"80d9734339c36dd4b0f581c2642189cc\") " pod="kube-system/kube-controller-manager-ci-4081-2-1-a-7dfc9bce8d"
Dec 13 14:10:31.614877 kubelet[2779]: I1213 14:10:31.614080 2779 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/80d9734339c36dd4b0f581c2642189cc-kubeconfig\") pod \"kube-controller-manager-ci-4081-2-1-a-7dfc9bce8d\" (UID: \"80d9734339c36dd4b0f581c2642189cc\") " pod="kube-system/kube-controller-manager-ci-4081-2-1-a-7dfc9bce8d"
Dec 13 14:10:31.615136 kubelet[2779]: I1213 14:10:31.614951 2779 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/78fd933204f0addd68073901ef4162a1-kubeconfig\") pod \"kube-scheduler-ci-4081-2-1-a-7dfc9bce8d\" (UID: \"78fd933204f0addd68073901ef4162a1\") " pod="kube-system/kube-scheduler-ci-4081-2-1-a-7dfc9bce8d"
Dec 13 14:10:31.615238 kubelet[2779]: I1213 14:10:31.615168 2779 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/de35788c7594c955510759696d9a1ada-ca-certs\") pod \"kube-apiserver-ci-4081-2-1-a-7dfc9bce8d\" (UID: \"de35788c7594c955510759696d9a1ada\") " pod="kube-system/kube-apiserver-ci-4081-2-1-a-7dfc9bce8d"
Dec 13 14:10:31.615238 kubelet[2779]: I1213 14:10:31.615202 2779 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/de35788c7594c955510759696d9a1ada-k8s-certs\") pod \"kube-apiserver-ci-4081-2-1-a-7dfc9bce8d\" (UID: \"de35788c7594c955510759696d9a1ada\") " pod="kube-system/kube-apiserver-ci-4081-2-1-a-7dfc9bce8d"
Dec 13 14:10:31.615238 kubelet[2779]: I1213 14:10:31.615232 2779 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/80d9734339c36dd4b0f581c2642189cc-ca-certs\") pod \"kube-controller-manager-ci-4081-2-1-a-7dfc9bce8d\" (UID: \"80d9734339c36dd4b0f581c2642189cc\") " pod="kube-system/kube-controller-manager-ci-4081-2-1-a-7dfc9bce8d"
Dec 13 14:10:31.615384 kubelet[2779]: I1213 14:10:31.615262 2779 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/80d9734339c36dd4b0f581c2642189cc-k8s-certs\") pod \"kube-controller-manager-ci-4081-2-1-a-7dfc9bce8d\" (UID: \"80d9734339c36dd4b0f581c2642189cc\") " pod="kube-system/kube-controller-manager-ci-4081-2-1-a-7dfc9bce8d"
Dec 13 14:10:31.615384 kubelet[2779]: I1213 14:10:31.615294 2779 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/80d9734339c36dd4b0f581c2642189cc-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-2-1-a-7dfc9bce8d\" (UID: \"80d9734339c36dd4b0f581c2642189cc\") " pod="kube-system/kube-controller-manager-ci-4081-2-1-a-7dfc9bce8d"
Dec 13 14:10:31.832822 sudo[2814]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin
Dec 13 14:10:31.833172 sudo[2814]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0)
Dec 13 14:10:32.273538 sudo[2814]: pam_unix(sudo:session): session closed for user root
Dec 13 14:10:32.372404 kubelet[2779]: I1213 14:10:32.372356 2779 apiserver.go:52] "Watching apiserver"
Dec 13 14:10:32.414126 kubelet[2779]: I1213 14:10:32.414080 2779 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
Dec 13 14:10:32.483504 kubelet[2779]: I1213 14:10:32.482449 2779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081-2-1-a-7dfc9bce8d" podStartSLOduration=1.482432794 podStartE2EDuration="1.482432794s" podCreationTimestamp="2024-12-13 14:10:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 14:10:32.470889028 +0000 UTC m=+1.165426380" watchObservedRunningTime="2024-12-13 14:10:32.482432794 +0000 UTC m=+1.176970106"
Dec 13 14:10:32.493520 kubelet[2779]: I1213 14:10:32.493373 2779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081-2-1-a-7dfc9bce8d" podStartSLOduration=2.493354595 podStartE2EDuration="2.493354595s" podCreationTimestamp="2024-12-13 14:10:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 14:10:32.482630035 +0000 UTC m=+1.177167387" watchObservedRunningTime="2024-12-13 14:10:32.493354595 +0000 UTC m=+1.187891947"
Dec 13 14:10:32.494536 kubelet[2779]: I1213 14:10:32.494482 2779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081-2-1-a-7dfc9bce8d" podStartSLOduration=1.494469644 podStartE2EDuration="1.494469644s" podCreationTimestamp="2024-12-13 14:10:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 14:10:32.494298202 +0000 UTC m=+1.188835554" watchObservedRunningTime="2024-12-13 14:10:32.494469644 +0000 UTC m=+1.189007036"
Dec 13 14:10:33.972046 sudo[1874]: pam_unix(sudo:session): session closed for user root
Dec 13 14:10:34.132148 sshd[1871]: pam_unix(sshd:session): session closed for user core
Dec 13 14:10:34.138151 systemd-logind[1463]: Session 7 logged out. Waiting for processes to exit.
Dec 13 14:10:34.138539 systemd[1]: sshd@6-168.119.51.76:22-139.178.68.195:43908.service: Deactivated successfully.
Dec 13 14:10:34.141734 systemd[1]: session-7.scope: Deactivated successfully.
Dec 13 14:10:34.142852 systemd[1]: session-7.scope: Consumed 8.002s CPU time, 188.1M memory peak, 0B memory swap peak.
Dec 13 14:10:34.144448 systemd-logind[1463]: Removed session 7.
Dec 13 14:10:45.920192 kubelet[2779]: I1213 14:10:45.920060 2779 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Dec 13 14:10:45.921731 containerd[1486]: time="2024-12-13T14:10:45.921375455Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Dec 13 14:10:45.922255 kubelet[2779]: I1213 14:10:45.921933 2779 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Dec 13 14:10:46.903410 kubelet[2779]: I1213 14:10:46.903351 2779 topology_manager.go:215] "Topology Admit Handler" podUID="f7c24533-7001-4917-925d-2e6ece399d60" podNamespace="kube-system" podName="kube-proxy-c2j8c"
Dec 13 14:10:46.909037 kubelet[2779]: W1213 14:10:46.908933 2779 reflector.go:547] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ci-4081-2-1-a-7dfc9bce8d" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4081-2-1-a-7dfc9bce8d' and this object
Dec 13 14:10:46.909037 kubelet[2779]: E1213 14:10:46.908988 2779 reflector.go:150] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ci-4081-2-1-a-7dfc9bce8d" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4081-2-1-a-7dfc9bce8d' and this object
Dec 13 14:10:46.909037 kubelet[2779]: W1213 14:10:46.908939 2779 reflector.go:547] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:ci-4081-2-1-a-7dfc9bce8d" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4081-2-1-a-7dfc9bce8d' and this object
Dec 13 14:10:46.909037 kubelet[2779]: E1213 14:10:46.909007 2779 reflector.go:150] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:ci-4081-2-1-a-7dfc9bce8d" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4081-2-1-a-7dfc9bce8d' and this object
Dec 13 14:10:46.914521 kubelet[2779]: I1213 14:10:46.913144 2779 topology_manager.go:215] "Topology Admit Handler" podUID="1a43c596-cf6e-4ef0-aad5-55fc345d4d33" podNamespace="kube-system" podName="cilium-zmgtb"
Dec 13 14:10:46.914521 kubelet[2779]: I1213 14:10:46.913417 2779 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f7c24533-7001-4917-925d-2e6ece399d60-lib-modules\") pod \"kube-proxy-c2j8c\" (UID: \"f7c24533-7001-4917-925d-2e6ece399d60\") " pod="kube-system/kube-proxy-c2j8c"
Dec 13 14:10:46.914521 kubelet[2779]: I1213 14:10:46.913442 2779 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/f7c24533-7001-4917-925d-2e6ece399d60-kube-proxy\") pod \"kube-proxy-c2j8c\" (UID: \"f7c24533-7001-4917-925d-2e6ece399d60\") " pod="kube-system/kube-proxy-c2j8c"
Dec 13 14:10:46.914521 kubelet[2779]: I1213 14:10:46.913460 2779 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f7c24533-7001-4917-925d-2e6ece399d60-xtables-lock\") pod \"kube-proxy-c2j8c\" (UID: \"f7c24533-7001-4917-925d-2e6ece399d60\") " pod="kube-system/kube-proxy-c2j8c"
Dec 13 14:10:46.914521 kubelet[2779]: I1213 14:10:46.913476 2779 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vsprv\" (UniqueName: \"kubernetes.io/projected/f7c24533-7001-4917-925d-2e6ece399d60-kube-api-access-vsprv\") pod \"kube-proxy-c2j8c\" (UID: \"f7c24533-7001-4917-925d-2e6ece399d60\") " pod="kube-system/kube-proxy-c2j8c"
Dec 13 14:10:46.913930 systemd[1]: Created slice kubepods-besteffort-podf7c24533_7001_4917_925d_2e6ece399d60.slice - libcontainer container kubepods-besteffort-podf7c24533_7001_4917_925d_2e6ece399d60.slice.
Dec 13 14:10:46.933823 systemd[1]: Created slice kubepods-burstable-pod1a43c596_cf6e_4ef0_aad5_55fc345d4d33.slice - libcontainer container kubepods-burstable-pod1a43c596_cf6e_4ef0_aad5_55fc345d4d33.slice.
Dec 13 14:10:47.014813 kubelet[2779]: I1213 14:10:47.014213 2779 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/1a43c596-cf6e-4ef0-aad5-55fc345d4d33-clustermesh-secrets\") pod \"cilium-zmgtb\" (UID: \"1a43c596-cf6e-4ef0-aad5-55fc345d4d33\") " pod="kube-system/cilium-zmgtb"
Dec 13 14:10:47.014813 kubelet[2779]: I1213 14:10:47.014277 2779 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2tgdl\" (UniqueName: \"kubernetes.io/projected/1a43c596-cf6e-4ef0-aad5-55fc345d4d33-kube-api-access-2tgdl\") pod \"cilium-zmgtb\" (UID: \"1a43c596-cf6e-4ef0-aad5-55fc345d4d33\") " pod="kube-system/cilium-zmgtb"
Dec 13 14:10:47.014813 kubelet[2779]: I1213 14:10:47.014307 2779 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/1a43c596-cf6e-4ef0-aad5-55fc345d4d33-cilium-cgroup\") pod \"cilium-zmgtb\" (UID: \"1a43c596-cf6e-4ef0-aad5-55fc345d4d33\") " pod="kube-system/cilium-zmgtb"
Dec 13 14:10:47.014813 kubelet[2779]: I1213 14:10:47.014353 2779 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/1a43c596-cf6e-4ef0-aad5-55fc345d4d33-cilium-run\") pod \"cilium-zmgtb\" (UID: \"1a43c596-cf6e-4ef0-aad5-55fc345d4d33\") " pod="kube-system/cilium-zmgtb"
Dec 13 14:10:47.014813 kubelet[2779]: I1213 14:10:47.014384 2779 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/1a43c596-cf6e-4ef0-aad5-55fc345d4d33-cni-path\") pod \"cilium-zmgtb\" (UID: \"1a43c596-cf6e-4ef0-aad5-55fc345d4d33\") " pod="kube-system/cilium-zmgtb"
Dec 13 14:10:47.014813 kubelet[2779]: I1213 14:10:47.014414 2779 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/1a43c596-cf6e-4ef0-aad5-55fc345d4d33-host-proc-sys-kernel\") pod \"cilium-zmgtb\" (UID: \"1a43c596-cf6e-4ef0-aad5-55fc345d4d33\") " pod="kube-system/cilium-zmgtb"
Dec 13 14:10:47.015599 kubelet[2779]: I1213 14:10:47.014443 2779 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/1a43c596-cf6e-4ef0-aad5-55fc345d4d33-hostproc\") pod \"cilium-zmgtb\" (UID: \"1a43c596-cf6e-4ef0-aad5-55fc345d4d33\") " pod="kube-system/cilium-zmgtb"
Dec 13 14:10:47.015599 kubelet[2779]: I1213 14:10:47.014488 2779 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1a43c596-cf6e-4ef0-aad5-55fc345d4d33-xtables-lock\") pod \"cilium-zmgtb\" (UID: \"1a43c596-cf6e-4ef0-aad5-55fc345d4d33\") " pod="kube-system/cilium-zmgtb"
Dec 13 14:10:47.015599 kubelet[2779]: I1213 14:10:47.014532 2779 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1a43c596-cf6e-4ef0-aad5-55fc345d4d33-cilium-config-path\") pod \"cilium-zmgtb\" (UID: \"1a43c596-cf6e-4ef0-aad5-55fc345d4d33\") " pod="kube-system/cilium-zmgtb"
Dec 13 14:10:47.015599 kubelet[2779]: I1213 14:10:47.014585 2779 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/1a43c596-cf6e-4ef0-aad5-55fc345d4d33-bpf-maps\") pod \"cilium-zmgtb\" (UID: \"1a43c596-cf6e-4ef0-aad5-55fc345d4d33\") " pod="kube-system/cilium-zmgtb"
Dec 13 14:10:47.015599 kubelet[2779]: I1213 14:10:47.014612 2779 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/1a43c596-cf6e-4ef0-aad5-55fc345d4d33-host-proc-sys-net\") pod \"cilium-zmgtb\" (UID: \"1a43c596-cf6e-4ef0-aad5-55fc345d4d33\") " pod="kube-system/cilium-zmgtb"
Dec 13 14:10:47.015599 kubelet[2779]: I1213 14:10:47.014728 2779 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1a43c596-cf6e-4ef0-aad5-55fc345d4d33-lib-modules\") pod \"cilium-zmgtb\" (UID: \"1a43c596-cf6e-4ef0-aad5-55fc345d4d33\") " pod="kube-system/cilium-zmgtb"
Dec 13 14:10:47.016041 kubelet[2779]: I1213 14:10:47.014757 2779 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/1a43c596-cf6e-4ef0-aad5-55fc345d4d33-hubble-tls\") pod \"cilium-zmgtb\" (UID: \"1a43c596-cf6e-4ef0-aad5-55fc345d4d33\") " pod="kube-system/cilium-zmgtb"
Dec 13 14:10:47.016041 kubelet[2779]: I1213 14:10:47.014971 2779 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1a43c596-cf6e-4ef0-aad5-55fc345d4d33-etc-cni-netd\") pod \"cilium-zmgtb\" (UID: \"1a43c596-cf6e-4ef0-aad5-55fc345d4d33\") " pod="kube-system/cilium-zmgtb"
Dec 13 14:10:47.048841 kubelet[2779]: I1213 14:10:47.048359 2779 topology_manager.go:215] "Topology Admit Handler" podUID="cfd11f4c-33a7-49d9-973a-5796bd640759" podNamespace="kube-system" podName="cilium-operator-599987898-q57wl"
Dec 13 14:10:47.057875 systemd[1]: Created slice kubepods-besteffort-podcfd11f4c_33a7_49d9_973a_5796bd640759.slice - libcontainer container kubepods-besteffort-podcfd11f4c_33a7_49d9_973a_5796bd640759.slice.
Dec 13 14:10:47.115705 kubelet[2779]: I1213 14:10:47.115630 2779 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j6z6v\" (UniqueName: \"kubernetes.io/projected/cfd11f4c-33a7-49d9-973a-5796bd640759-kube-api-access-j6z6v\") pod \"cilium-operator-599987898-q57wl\" (UID: \"cfd11f4c-33a7-49d9-973a-5796bd640759\") " pod="kube-system/cilium-operator-599987898-q57wl" Dec 13 14:10:47.115954 kubelet[2779]: I1213 14:10:47.115730 2779 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/cfd11f4c-33a7-49d9-973a-5796bd640759-cilium-config-path\") pod \"cilium-operator-599987898-q57wl\" (UID: \"cfd11f4c-33a7-49d9-973a-5796bd640759\") " pod="kube-system/cilium-operator-599987898-q57wl" Dec 13 14:10:47.839571 containerd[1486]: time="2024-12-13T14:10:47.839511373Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-zmgtb,Uid:1a43c596-cf6e-4ef0-aad5-55fc345d4d33,Namespace:kube-system,Attempt:0,}" Dec 13 14:10:47.865946 containerd[1486]: time="2024-12-13T14:10:47.865807439Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:10:47.865946 containerd[1486]: time="2024-12-13T14:10:47.865903200Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:10:47.866251 containerd[1486]: time="2024-12-13T14:10:47.865935921Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:10:47.866251 containerd[1486]: time="2024-12-13T14:10:47.866032442Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:10:47.884086 systemd[1]: Started cri-containerd-c84a7084f61943129e5260f5e2ab388cb42a09403b967e1d0a581a809763e09c.scope - libcontainer container c84a7084f61943129e5260f5e2ab388cb42a09403b967e1d0a581a809763e09c. Dec 13 14:10:47.911697 containerd[1486]: time="2024-12-13T14:10:47.911606211Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-zmgtb,Uid:1a43c596-cf6e-4ef0-aad5-55fc345d4d33,Namespace:kube-system,Attempt:0,} returns sandbox id \"c84a7084f61943129e5260f5e2ab388cb42a09403b967e1d0a581a809763e09c\"" Dec 13 14:10:47.913685 containerd[1486]: time="2024-12-13T14:10:47.913644515Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Dec 13 14:10:47.963168 containerd[1486]: time="2024-12-13T14:10:47.963104170Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-q57wl,Uid:cfd11f4c-33a7-49d9-973a-5796bd640759,Namespace:kube-system,Attempt:0,}" Dec 13 14:10:47.987561 containerd[1486]: time="2024-12-13T14:10:47.987238331Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:10:47.987561 containerd[1486]: time="2024-12-13T14:10:47.987420573Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:10:47.987778 containerd[1486]: time="2024-12-13T14:10:47.987656215Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:10:47.989794 containerd[1486]: time="2024-12-13T14:10:47.988616747Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:10:48.006034 systemd[1]: Started cri-containerd-03518032589f30ba6739af93c9d7fcd4a280ecb30b226db7dbccc7ffc8fb7522.scope - libcontainer container 03518032589f30ba6739af93c9d7fcd4a280ecb30b226db7dbccc7ffc8fb7522. Dec 13 14:10:48.037903 containerd[1486]: time="2024-12-13T14:10:48.037826087Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-q57wl,Uid:cfd11f4c-33a7-49d9-973a-5796bd640759,Namespace:kube-system,Attempt:0,} returns sandbox id \"03518032589f30ba6739af93c9d7fcd4a280ecb30b226db7dbccc7ffc8fb7522\"" Dec 13 14:10:48.132298 containerd[1486]: time="2024-12-13T14:10:48.132116123Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-c2j8c,Uid:f7c24533-7001-4917-925d-2e6ece399d60,Namespace:kube-system,Attempt:0,}" Dec 13 14:10:48.158503 containerd[1486]: time="2024-12-13T14:10:48.158428514Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:10:48.158665 containerd[1486]: time="2024-12-13T14:10:48.158479075Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:10:48.158665 containerd[1486]: time="2024-12-13T14:10:48.158489875Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:10:48.158665 containerd[1486]: time="2024-12-13T14:10:48.158601236Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:10:48.187200 systemd[1]: Started cri-containerd-cb5b60d633a6792176ca9cbedc5cfebe39669b443488152da538fa5efbadf031.scope - libcontainer container cb5b60d633a6792176ca9cbedc5cfebe39669b443488152da538fa5efbadf031. 
Dec 13 14:10:48.210593 containerd[1486]: time="2024-12-13T14:10:48.210559451Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-c2j8c,Uid:f7c24533-7001-4917-925d-2e6ece399d60,Namespace:kube-system,Attempt:0,} returns sandbox id \"cb5b60d633a6792176ca9cbedc5cfebe39669b443488152da538fa5efbadf031\"" Dec 13 14:10:48.215550 containerd[1486]: time="2024-12-13T14:10:48.215377108Z" level=info msg="CreateContainer within sandbox \"cb5b60d633a6792176ca9cbedc5cfebe39669b443488152da538fa5efbadf031\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Dec 13 14:10:48.231227 containerd[1486]: time="2024-12-13T14:10:48.231123055Z" level=info msg="CreateContainer within sandbox \"cb5b60d633a6792176ca9cbedc5cfebe39669b443488152da538fa5efbadf031\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"a2bb5a819030f2d762b2b8953246402ac935cf856a417023fd2a989cb0efe26b\"" Dec 13 14:10:48.231905 containerd[1486]: time="2024-12-13T14:10:48.231816183Z" level=info msg="StartContainer for \"a2bb5a819030f2d762b2b8953246402ac935cf856a417023fd2a989cb0efe26b\"" Dec 13 14:10:48.256047 systemd[1]: Started cri-containerd-a2bb5a819030f2d762b2b8953246402ac935cf856a417023fd2a989cb0efe26b.scope - libcontainer container a2bb5a819030f2d762b2b8953246402ac935cf856a417023fd2a989cb0efe26b. 
Dec 13 14:10:48.293728 containerd[1486]: time="2024-12-13T14:10:48.293651635Z" level=info msg="StartContainer for \"a2bb5a819030f2d762b2b8953246402ac935cf856a417023fd2a989cb0efe26b\" returns successfully" Dec 13 14:10:51.436296 kubelet[2779]: I1213 14:10:51.435516 2779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-c2j8c" podStartSLOduration=5.435499108 podStartE2EDuration="5.435499108s" podCreationTimestamp="2024-12-13 14:10:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 14:10:48.515685023 +0000 UTC m=+17.210222415" watchObservedRunningTime="2024-12-13 14:10:51.435499108 +0000 UTC m=+20.130036460" Dec 13 14:10:51.979316 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4225941209.mount: Deactivated successfully. Dec 13 14:10:53.343059 containerd[1486]: time="2024-12-13T14:10:53.342988197Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 14:10:53.344409 containerd[1486]: time="2024-12-13T14:10:53.344360775Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157651538" Dec 13 14:10:53.345997 containerd[1486]: time="2024-12-13T14:10:53.345959795Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 14:10:53.348588 containerd[1486]: time="2024-12-13T14:10:53.348544468Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest 
\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 5.434855072s" Dec 13 14:10:53.348588 containerd[1486]: time="2024-12-13T14:10:53.348587829Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Dec 13 14:10:53.350662 containerd[1486]: time="2024-12-13T14:10:53.350609255Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Dec 13 14:10:53.353731 containerd[1486]: time="2024-12-13T14:10:53.353410411Z" level=info msg="CreateContainer within sandbox \"c84a7084f61943129e5260f5e2ab388cb42a09403b967e1d0a581a809763e09c\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Dec 13 14:10:53.369091 containerd[1486]: time="2024-12-13T14:10:53.369031811Z" level=info msg="CreateContainer within sandbox \"c84a7084f61943129e5260f5e2ab388cb42a09403b967e1d0a581a809763e09c\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"bb132cd0cdf12bb790b4d4c848e5de3e53b48d31b43339d4a125a0fad0cf54b9\"" Dec 13 14:10:53.371011 containerd[1486]: time="2024-12-13T14:10:53.370341467Z" level=info msg="StartContainer for \"bb132cd0cdf12bb790b4d4c848e5de3e53b48d31b43339d4a125a0fad0cf54b9\"" Dec 13 14:10:53.405081 systemd[1]: Started cri-containerd-bb132cd0cdf12bb790b4d4c848e5de3e53b48d31b43339d4a125a0fad0cf54b9.scope - libcontainer container bb132cd0cdf12bb790b4d4c848e5de3e53b48d31b43339d4a125a0fad0cf54b9. 
Dec 13 14:10:53.439939 containerd[1486]: time="2024-12-13T14:10:53.439892319Z" level=info msg="StartContainer for \"bb132cd0cdf12bb790b4d4c848e5de3e53b48d31b43339d4a125a0fad0cf54b9\" returns successfully" Dec 13 14:10:53.475940 systemd[1]: cri-containerd-bb132cd0cdf12bb790b4d4c848e5de3e53b48d31b43339d4a125a0fad0cf54b9.scope: Deactivated successfully. Dec 13 14:10:53.611647 containerd[1486]: time="2024-12-13T14:10:53.611417157Z" level=info msg="shim disconnected" id=bb132cd0cdf12bb790b4d4c848e5de3e53b48d31b43339d4a125a0fad0cf54b9 namespace=k8s.io Dec 13 14:10:53.611647 containerd[1486]: time="2024-12-13T14:10:53.611498478Z" level=warning msg="cleaning up after shim disconnected" id=bb132cd0cdf12bb790b4d4c848e5de3e53b48d31b43339d4a125a0fad0cf54b9 namespace=k8s.io Dec 13 14:10:53.611647 containerd[1486]: time="2024-12-13T14:10:53.611519398Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 14:10:54.364293 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bb132cd0cdf12bb790b4d4c848e5de3e53b48d31b43339d4a125a0fad0cf54b9-rootfs.mount: Deactivated successfully. 
Dec 13 14:10:54.531992 containerd[1486]: time="2024-12-13T14:10:54.531880085Z" level=info msg="CreateContainer within sandbox \"c84a7084f61943129e5260f5e2ab388cb42a09403b967e1d0a581a809763e09c\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Dec 13 14:10:54.559301 containerd[1486]: time="2024-12-13T14:10:54.559201080Z" level=info msg="CreateContainer within sandbox \"c84a7084f61943129e5260f5e2ab388cb42a09403b967e1d0a581a809763e09c\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"687c6330c40d5fa770a72300c852c15b7ace2142f268b2d0648055cd04ed4698\"" Dec 13 14:10:54.562296 containerd[1486]: time="2024-12-13T14:10:54.560266534Z" level=info msg="StartContainer for \"687c6330c40d5fa770a72300c852c15b7ace2142f268b2d0648055cd04ed4698\"" Dec 13 14:10:54.597069 systemd[1]: Started cri-containerd-687c6330c40d5fa770a72300c852c15b7ace2142f268b2d0648055cd04ed4698.scope - libcontainer container 687c6330c40d5fa770a72300c852c15b7ace2142f268b2d0648055cd04ed4698. Dec 13 14:10:54.633010 containerd[1486]: time="2024-12-13T14:10:54.632780196Z" level=info msg="StartContainer for \"687c6330c40d5fa770a72300c852c15b7ace2142f268b2d0648055cd04ed4698\" returns successfully" Dec 13 14:10:54.656741 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 13 14:10:54.657199 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Dec 13 14:10:54.657279 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Dec 13 14:10:54.664490 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 13 14:10:54.665700 systemd[1]: cri-containerd-687c6330c40d5fa770a72300c852c15b7ace2142f268b2d0648055cd04ed4698.scope: Deactivated successfully. Dec 13 14:10:54.689385 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Dec 13 14:10:54.705987 containerd[1486]: time="2024-12-13T14:10:54.705887026Z" level=info msg="shim disconnected" id=687c6330c40d5fa770a72300c852c15b7ace2142f268b2d0648055cd04ed4698 namespace=k8s.io Dec 13 14:10:54.705987 containerd[1486]: time="2024-12-13T14:10:54.705969027Z" level=warning msg="cleaning up after shim disconnected" id=687c6330c40d5fa770a72300c852c15b7ace2142f268b2d0648055cd04ed4698 namespace=k8s.io Dec 13 14:10:54.705987 containerd[1486]: time="2024-12-13T14:10:54.705981027Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 14:10:55.363852 systemd[1]: run-containerd-runc-k8s.io-687c6330c40d5fa770a72300c852c15b7ace2142f268b2d0648055cd04ed4698-runc.H58xbo.mount: Deactivated successfully. Dec 13 14:10:55.364286 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-687c6330c40d5fa770a72300c852c15b7ace2142f268b2d0648055cd04ed4698-rootfs.mount: Deactivated successfully. Dec 13 14:10:55.533883 containerd[1486]: time="2024-12-13T14:10:55.533804593Z" level=info msg="CreateContainer within sandbox \"c84a7084f61943129e5260f5e2ab388cb42a09403b967e1d0a581a809763e09c\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Dec 13 14:10:55.552190 containerd[1486]: time="2024-12-13T14:10:55.552069074Z" level=info msg="CreateContainer within sandbox \"c84a7084f61943129e5260f5e2ab388cb42a09403b967e1d0a581a809763e09c\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"176397b28b37119c1c78538aced9c56022e75d0622a26f1d56d0de8ff234f718\"" Dec 13 14:10:55.554765 containerd[1486]: time="2024-12-13T14:10:55.552566120Z" level=info msg="StartContainer for \"176397b28b37119c1c78538aced9c56022e75d0622a26f1d56d0de8ff234f718\"" Dec 13 14:10:55.587034 systemd[1]: Started cri-containerd-176397b28b37119c1c78538aced9c56022e75d0622a26f1d56d0de8ff234f718.scope - libcontainer container 176397b28b37119c1c78538aced9c56022e75d0622a26f1d56d0de8ff234f718. 
Dec 13 14:10:55.615645 containerd[1486]: time="2024-12-13T14:10:55.615530989Z" level=info msg="StartContainer for \"176397b28b37119c1c78538aced9c56022e75d0622a26f1d56d0de8ff234f718\" returns successfully" Dec 13 14:10:55.630237 systemd[1]: cri-containerd-176397b28b37119c1c78538aced9c56022e75d0622a26f1d56d0de8ff234f718.scope: Deactivated successfully. Dec 13 14:10:55.654614 containerd[1486]: time="2024-12-13T14:10:55.654359980Z" level=info msg="shim disconnected" id=176397b28b37119c1c78538aced9c56022e75d0622a26f1d56d0de8ff234f718 namespace=k8s.io Dec 13 14:10:55.654614 containerd[1486]: time="2024-12-13T14:10:55.654425381Z" level=warning msg="cleaning up after shim disconnected" id=176397b28b37119c1c78538aced9c56022e75d0622a26f1d56d0de8ff234f718 namespace=k8s.io Dec 13 14:10:55.654614 containerd[1486]: time="2024-12-13T14:10:55.654434101Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 14:10:56.364544 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-176397b28b37119c1c78538aced9c56022e75d0622a26f1d56d0de8ff234f718-rootfs.mount: Deactivated successfully. 
Dec 13 14:10:56.545915 containerd[1486]: time="2024-12-13T14:10:56.545720804Z" level=info msg="CreateContainer within sandbox \"c84a7084f61943129e5260f5e2ab388cb42a09403b967e1d0a581a809763e09c\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Dec 13 14:10:56.564912 containerd[1486]: time="2024-12-13T14:10:56.563930406Z" level=info msg="CreateContainer within sandbox \"c84a7084f61943129e5260f5e2ab388cb42a09403b967e1d0a581a809763e09c\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"0ec7f1e5dadd73d2796e8270f1881657a2f9a6406dcf7988e91326751b6d2c93\"" Dec 13 14:10:56.566393 containerd[1486]: time="2024-12-13T14:10:56.565336065Z" level=info msg="StartContainer for \"0ec7f1e5dadd73d2796e8270f1881657a2f9a6406dcf7988e91326751b6d2c93\"" Dec 13 14:10:56.575759 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3084916168.mount: Deactivated successfully. Dec 13 14:10:56.600157 systemd[1]: Started cri-containerd-0ec7f1e5dadd73d2796e8270f1881657a2f9a6406dcf7988e91326751b6d2c93.scope - libcontainer container 0ec7f1e5dadd73d2796e8270f1881657a2f9a6406dcf7988e91326751b6d2c93. Dec 13 14:10:56.629235 systemd[1]: cri-containerd-0ec7f1e5dadd73d2796e8270f1881657a2f9a6406dcf7988e91326751b6d2c93.scope: Deactivated successfully. 
Dec 13 14:10:56.632966 containerd[1486]: time="2024-12-13T14:10:56.632672203Z" level=info msg="StartContainer for \"0ec7f1e5dadd73d2796e8270f1881657a2f9a6406dcf7988e91326751b6d2c93\" returns successfully" Dec 13 14:10:56.667254 containerd[1486]: time="2024-12-13T14:10:56.667154182Z" level=info msg="shim disconnected" id=0ec7f1e5dadd73d2796e8270f1881657a2f9a6406dcf7988e91326751b6d2c93 namespace=k8s.io Dec 13 14:10:56.667775 containerd[1486]: time="2024-12-13T14:10:56.667580068Z" level=warning msg="cleaning up after shim disconnected" id=0ec7f1e5dadd73d2796e8270f1881657a2f9a6406dcf7988e91326751b6d2c93 namespace=k8s.io Dec 13 14:10:56.667775 containerd[1486]: time="2024-12-13T14:10:56.667613548Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 14:10:57.363676 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0ec7f1e5dadd73d2796e8270f1881657a2f9a6406dcf7988e91326751b6d2c93-rootfs.mount: Deactivated successfully. Dec 13 14:10:57.498898 containerd[1486]: time="2024-12-13T14:10:57.498621465Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 14:10:57.499713 containerd[1486]: time="2024-12-13T14:10:57.499676040Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17138346" Dec 13 14:10:57.502910 containerd[1486]: time="2024-12-13T14:10:57.500926576Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 14:10:57.504823 containerd[1486]: time="2024-12-13T14:10:57.504784709Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id 
\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 4.154131693s" Dec 13 14:10:57.504985 containerd[1486]: time="2024-12-13T14:10:57.504959631Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Dec 13 14:10:57.511759 containerd[1486]: time="2024-12-13T14:10:57.511698522Z" level=info msg="CreateContainer within sandbox \"03518032589f30ba6739af93c9d7fcd4a280ecb30b226db7dbccc7ffc8fb7522\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Dec 13 14:10:57.533395 containerd[1486]: time="2024-12-13T14:10:57.533308053Z" level=info msg="CreateContainer within sandbox \"03518032589f30ba6739af93c9d7fcd4a280ecb30b226db7dbccc7ffc8fb7522\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"18391c66700f0495ef23e982355d06a17a1f7e04a77ab73a34c1ba58a5aef95d\"" Dec 13 14:10:57.536162 containerd[1486]: time="2024-12-13T14:10:57.536116491Z" level=info msg="StartContainer for \"18391c66700f0495ef23e982355d06a17a1f7e04a77ab73a34c1ba58a5aef95d\"" Dec 13 14:10:57.550495 containerd[1486]: time="2024-12-13T14:10:57.550449925Z" level=info msg="CreateContainer within sandbox \"c84a7084f61943129e5260f5e2ab388cb42a09403b967e1d0a581a809763e09c\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Dec 13 14:10:57.585961 containerd[1486]: time="2024-12-13T14:10:57.585833722Z" level=info msg="CreateContainer within sandbox \"c84a7084f61943129e5260f5e2ab388cb42a09403b967e1d0a581a809763e09c\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"de5b24625c8f6a721c39b48dac4eae97bac09aa964fce286db14b55f01affb7f\"" Dec 13 14:10:57.587048 containerd[1486]: 
time="2024-12-13T14:10:57.587011258Z" level=info msg="StartContainer for \"de5b24625c8f6a721c39b48dac4eae97bac09aa964fce286db14b55f01affb7f\"" Dec 13 14:10:57.588172 systemd[1]: Started cri-containerd-18391c66700f0495ef23e982355d06a17a1f7e04a77ab73a34c1ba58a5aef95d.scope - libcontainer container 18391c66700f0495ef23e982355d06a17a1f7e04a77ab73a34c1ba58a5aef95d. Dec 13 14:10:57.616157 systemd[1]: Started cri-containerd-de5b24625c8f6a721c39b48dac4eae97bac09aa964fce286db14b55f01affb7f.scope - libcontainer container de5b24625c8f6a721c39b48dac4eae97bac09aa964fce286db14b55f01affb7f. Dec 13 14:10:57.633895 containerd[1486]: time="2024-12-13T14:10:57.633824089Z" level=info msg="StartContainer for \"18391c66700f0495ef23e982355d06a17a1f7e04a77ab73a34c1ba58a5aef95d\" returns successfully" Dec 13 14:10:57.658634 containerd[1486]: time="2024-12-13T14:10:57.658532943Z" level=info msg="StartContainer for \"de5b24625c8f6a721c39b48dac4eae97bac09aa964fce286db14b55f01affb7f\" returns successfully" Dec 13 14:10:57.860509 kubelet[2779]: I1213 14:10:57.860311 2779 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Dec 13 14:10:57.889121 kubelet[2779]: I1213 14:10:57.889001 2779 topology_manager.go:215] "Topology Admit Handler" podUID="a784ab0e-da12-4f52-88b6-d2a90b2ada82" podNamespace="kube-system" podName="coredns-7db6d8ff4d-6cphk" Dec 13 14:10:57.894026 kubelet[2779]: I1213 14:10:57.893777 2779 topology_manager.go:215] "Topology Admit Handler" podUID="84545779-7034-4de1-af59-d7accca0ba55" podNamespace="kube-system" podName="coredns-7db6d8ff4d-8497p" Dec 13 14:10:57.896953 kubelet[2779]: I1213 14:10:57.895773 2779 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a784ab0e-da12-4f52-88b6-d2a90b2ada82-config-volume\") pod \"coredns-7db6d8ff4d-6cphk\" (UID: \"a784ab0e-da12-4f52-88b6-d2a90b2ada82\") " pod="kube-system/coredns-7db6d8ff4d-6cphk" Dec 13 
14:10:57.896953 kubelet[2779]: I1213 14:10:57.895839 2779 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8vx84\" (UniqueName: \"kubernetes.io/projected/a784ab0e-da12-4f52-88b6-d2a90b2ada82-kube-api-access-8vx84\") pod \"coredns-7db6d8ff4d-6cphk\" (UID: \"a784ab0e-da12-4f52-88b6-d2a90b2ada82\") " pod="kube-system/coredns-7db6d8ff4d-6cphk" Dec 13 14:10:57.903330 systemd[1]: Created slice kubepods-burstable-poda784ab0e_da12_4f52_88b6_d2a90b2ada82.slice - libcontainer container kubepods-burstable-poda784ab0e_da12_4f52_88b6_d2a90b2ada82.slice. Dec 13 14:10:57.915031 systemd[1]: Created slice kubepods-burstable-pod84545779_7034_4de1_af59_d7accca0ba55.slice - libcontainer container kubepods-burstable-pod84545779_7034_4de1_af59_d7accca0ba55.slice. Dec 13 14:10:57.996359 kubelet[2779]: I1213 14:10:57.996276 2779 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/84545779-7034-4de1-af59-d7accca0ba55-config-volume\") pod \"coredns-7db6d8ff4d-8497p\" (UID: \"84545779-7034-4de1-af59-d7accca0ba55\") " pod="kube-system/coredns-7db6d8ff4d-8497p" Dec 13 14:10:57.996359 kubelet[2779]: I1213 14:10:57.996331 2779 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d4t4v\" (UniqueName: \"kubernetes.io/projected/84545779-7034-4de1-af59-d7accca0ba55-kube-api-access-d4t4v\") pod \"coredns-7db6d8ff4d-8497p\" (UID: \"84545779-7034-4de1-af59-d7accca0ba55\") " pod="kube-system/coredns-7db6d8ff4d-8497p" Dec 13 14:10:58.211913 containerd[1486]: time="2024-12-13T14:10:58.211486835Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-6cphk,Uid:a784ab0e-da12-4f52-88b6-d2a90b2ada82,Namespace:kube-system,Attempt:0,}" Dec 13 14:10:58.222606 containerd[1486]: time="2024-12-13T14:10:58.222562946Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-7db6d8ff4d-8497p,Uid:84545779-7034-4de1-af59-d7accca0ba55,Namespace:kube-system,Attempt:0,}" Dec 13 14:10:58.717619 kubelet[2779]: I1213 14:10:58.717561 2779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-599987898-q57wl" podStartSLOduration=2.249920948 podStartE2EDuration="11.717542981s" podCreationTimestamp="2024-12-13 14:10:47 +0000 UTC" firstStartedPulling="2024-12-13 14:10:48.039309224 +0000 UTC m=+16.733846576" lastFinishedPulling="2024-12-13 14:10:57.506931257 +0000 UTC m=+26.201468609" observedRunningTime="2024-12-13 14:10:58.669954412 +0000 UTC m=+27.364491724" watchObservedRunningTime="2024-12-13 14:10:58.717542981 +0000 UTC m=+27.412080333" Dec 13 14:10:58.717945 kubelet[2779]: I1213 14:10:58.717914 2779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-zmgtb" podStartSLOduration=7.281516134 podStartE2EDuration="12.717907306s" podCreationTimestamp="2024-12-13 14:10:46 +0000 UTC" firstStartedPulling="2024-12-13 14:10:47.91323443 +0000 UTC m=+16.607771782" lastFinishedPulling="2024-12-13 14:10:53.349625602 +0000 UTC m=+22.044162954" observedRunningTime="2024-12-13 14:10:58.715469793 +0000 UTC m=+27.410007145" watchObservedRunningTime="2024-12-13 14:10:58.717907306 +0000 UTC m=+27.412444698" Dec 13 14:11:02.379289 systemd-networkd[1376]: cilium_host: Link UP Dec 13 14:11:02.380921 systemd-networkd[1376]: cilium_net: Link UP Dec 13 14:11:02.383621 systemd-networkd[1376]: cilium_net: Gained carrier Dec 13 14:11:02.383994 systemd-networkd[1376]: cilium_host: Gained carrier Dec 13 14:11:02.384271 systemd-networkd[1376]: cilium_net: Gained IPv6LL Dec 13 14:11:02.384413 systemd-networkd[1376]: cilium_host: Gained IPv6LL Dec 13 14:11:02.503362 systemd-networkd[1376]: cilium_vxlan: Link UP Dec 13 14:11:02.503524 systemd-networkd[1376]: cilium_vxlan: Gained carrier Dec 13 14:11:02.824957 kernel: NET: Registered PF_ALG protocol family Dec 
13 14:11:03.532126 systemd-networkd[1376]: lxc_health: Link UP Dec 13 14:11:03.546375 systemd-networkd[1376]: lxc_health: Gained carrier Dec 13 14:11:03.802947 systemd-networkd[1376]: lxcecd45b3e97f9: Link UP Dec 13 14:11:03.808185 kernel: eth0: renamed from tmp0a7ec Dec 13 14:11:03.816429 systemd-networkd[1376]: lxcecd45b3e97f9: Gained carrier Dec 13 14:11:03.817035 systemd-networkd[1376]: lxc24c85e2f27ae: Link UP Dec 13 14:11:03.820988 kernel: eth0: renamed from tmp2e932 Dec 13 14:11:03.832666 systemd-networkd[1376]: lxc24c85e2f27ae: Gained carrier Dec 13 14:11:04.329004 systemd-networkd[1376]: cilium_vxlan: Gained IPv6LL Dec 13 14:11:04.777059 systemd-networkd[1376]: lxc_health: Gained IPv6LL Dec 13 14:11:05.034037 systemd-networkd[1376]: lxc24c85e2f27ae: Gained IPv6LL Dec 13 14:11:05.545969 systemd-networkd[1376]: lxcecd45b3e97f9: Gained IPv6LL Dec 13 14:11:07.556986 containerd[1486]: time="2024-12-13T14:11:07.554889633Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:11:07.556986 containerd[1486]: time="2024-12-13T14:11:07.554956074Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:11:07.556986 containerd[1486]: time="2024-12-13T14:11:07.554976874Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:11:07.556986 containerd[1486]: time="2024-12-13T14:11:07.555062075Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:11:07.593081 systemd[1]: Started cri-containerd-0a7ec3d40a23f863a0af861639f72e4648ac3fea16f23bee9df9a74e3bfe9081.scope - libcontainer container 0a7ec3d40a23f863a0af861639f72e4648ac3fea16f23bee9df9a74e3bfe9081. 
Dec 13 14:11:07.595057 containerd[1486]: time="2024-12-13T14:11:07.591791341Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:11:07.595057 containerd[1486]: time="2024-12-13T14:11:07.592994359Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:11:07.595057 containerd[1486]: time="2024-12-13T14:11:07.593016599Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:11:07.595057 containerd[1486]: time="2024-12-13T14:11:07.593104560Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:11:07.630032 systemd[1]: Started cri-containerd-2e93248c710cbec790d36bcd52a564527575ba3c285ce3b2638d1947c4d5e589.scope - libcontainer container 2e93248c710cbec790d36bcd52a564527575ba3c285ce3b2638d1947c4d5e589. 
Dec 13 14:11:07.657389 containerd[1486]: time="2024-12-13T14:11:07.657354634Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-6cphk,Uid:a784ab0e-da12-4f52-88b6-d2a90b2ada82,Namespace:kube-system,Attempt:0,} returns sandbox id \"0a7ec3d40a23f863a0af861639f72e4648ac3fea16f23bee9df9a74e3bfe9081\""
Dec 13 14:11:07.664418 containerd[1486]: time="2024-12-13T14:11:07.664357778Z" level=info msg="CreateContainer within sandbox \"0a7ec3d40a23f863a0af861639f72e4648ac3fea16f23bee9df9a74e3bfe9081\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Dec 13 14:11:07.685318 containerd[1486]: time="2024-12-13T14:11:07.685265329Z" level=info msg="CreateContainer within sandbox \"0a7ec3d40a23f863a0af861639f72e4648ac3fea16f23bee9df9a74e3bfe9081\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"97294f24b81935683813069189c05e2fdb73ebd0848aa533411bb5b1ae637f9c\""
Dec 13 14:11:07.686127 containerd[1486]: time="2024-12-13T14:11:07.686087141Z" level=info msg="StartContainer for \"97294f24b81935683813069189c05e2fdb73ebd0848aa533411bb5b1ae637f9c\""
Dec 13 14:11:07.689966 containerd[1486]: time="2024-12-13T14:11:07.689926078Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-8497p,Uid:84545779-7034-4de1-af59-d7accca0ba55,Namespace:kube-system,Attempt:0,} returns sandbox id \"2e93248c710cbec790d36bcd52a564527575ba3c285ce3b2638d1947c4d5e589\""
Dec 13 14:11:07.695510 containerd[1486]: time="2024-12-13T14:11:07.695460280Z" level=info msg="CreateContainer within sandbox \"2e93248c710cbec790d36bcd52a564527575ba3c285ce3b2638d1947c4d5e589\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Dec 13 14:11:07.719309 containerd[1486]: time="2024-12-13T14:11:07.719147152Z" level=info msg="CreateContainer within sandbox \"2e93248c710cbec790d36bcd52a564527575ba3c285ce3b2638d1947c4d5e589\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"2cf46f71b9ef9df6a312dc37c762dc9a7f67d78004d60e9ac46c6fbb84bf6ca9\""
Dec 13 14:11:07.722612 containerd[1486]: time="2024-12-13T14:11:07.722106036Z" level=info msg="StartContainer for \"2cf46f71b9ef9df6a312dc37c762dc9a7f67d78004d60e9ac46c6fbb84bf6ca9\""
Dec 13 14:11:07.725977 systemd[1]: Started cri-containerd-97294f24b81935683813069189c05e2fdb73ebd0848aa533411bb5b1ae637f9c.scope - libcontainer container 97294f24b81935683813069189c05e2fdb73ebd0848aa533411bb5b1ae637f9c.
Dec 13 14:11:07.754059 systemd[1]: Started cri-containerd-2cf46f71b9ef9df6a312dc37c762dc9a7f67d78004d60e9ac46c6fbb84bf6ca9.scope - libcontainer container 2cf46f71b9ef9df6a312dc37c762dc9a7f67d78004d60e9ac46c6fbb84bf6ca9.
Dec 13 14:11:07.795237 containerd[1486]: time="2024-12-13T14:11:07.794054264Z" level=info msg="StartContainer for \"97294f24b81935683813069189c05e2fdb73ebd0848aa533411bb5b1ae637f9c\" returns successfully"
Dec 13 14:11:07.799784 containerd[1486]: time="2024-12-13T14:11:07.798051164Z" level=info msg="StartContainer for \"2cf46f71b9ef9df6a312dc37c762dc9a7f67d78004d60e9ac46c6fbb84bf6ca9\" returns successfully"
Dec 13 14:11:08.606428 kubelet[2779]: I1213 14:11:08.605932 2779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-6cphk" podStartSLOduration=21.605430422 podStartE2EDuration="21.605430422s" podCreationTimestamp="2024-12-13 14:10:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 14:11:08.603541033 +0000 UTC m=+37.298078425" watchObservedRunningTime="2024-12-13 14:11:08.605430422 +0000 UTC m=+37.299967774"
Dec 13 14:11:08.637060 kubelet[2779]: I1213 14:11:08.635914 2779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-8497p" podStartSLOduration=21.635899557 podStartE2EDuration="21.635899557s" podCreationTimestamp="2024-12-13 14:10:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 14:11:08.635255988 +0000 UTC m=+37.329793340" watchObservedRunningTime="2024-12-13 14:11:08.635899557 +0000 UTC m=+37.330436949"
Dec 13 14:15:23.997680 update_engine[1466]: I20241213 14:15:23.997365 1466 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs
Dec 13 14:15:23.997680 update_engine[1466]: I20241213 14:15:23.997466 1466 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs
Dec 13 14:15:24.000149 update_engine[1466]: I20241213 14:15:23.998498 1466 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs
Dec 13 14:15:24.001597 update_engine[1466]: I20241213 14:15:24.001545 1466 omaha_request_params.cc:62] Current group set to stable
Dec 13 14:15:24.001749 update_engine[1466]: I20241213 14:15:24.001667 1466 update_attempter.cc:499] Already updated boot flags. Skipping.
Dec 13 14:15:24.001749 update_engine[1466]: I20241213 14:15:24.001683 1466 update_attempter.cc:643] Scheduling an action processor start.
Dec 13 14:15:24.001749 update_engine[1466]: I20241213 14:15:24.001714 1466 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Dec 13 14:15:24.001833 update_engine[1466]: I20241213 14:15:24.001750 1466 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Dec 13 14:15:24.001833 update_engine[1466]: I20241213 14:15:24.001819 1466 omaha_request_action.cc:271] Posting an Omaha request to disabled Dec 13 14:15:24.001910 update_engine[1466]: I20241213 14:15:24.001832 1466 omaha_request_action.cc:272] Request: Dec 13 14:15:24.001910 update_engine[1466]: Dec 13 14:15:24.001910 update_engine[1466]: Dec 13 14:15:24.001910 update_engine[1466]: Dec 13 14:15:24.001910 update_engine[1466]: Dec 13 14:15:24.001910 update_engine[1466]: Dec 13 14:15:24.001910 update_engine[1466]: Dec 13 14:15:24.001910 update_engine[1466]: Dec 13 14:15:24.001910 update_engine[1466]: Dec 13 14:15:24.001910 update_engine[1466]: I20241213 14:15:24.001842 1466 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Dec 13 14:15:24.003392 locksmithd[1498]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Dec 13 14:15:24.004123 update_engine[1466]: I20241213 14:15:24.004096 1466 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Dec 13 14:15:24.004490 update_engine[1466]: I20241213 14:15:24.004451 1466 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Dec 13 14:15:24.005495 update_engine[1466]: E20241213 14:15:24.005461 1466 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Dec 13 14:15:24.005556 update_engine[1466]: I20241213 14:15:24.005539 1466 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Dec 13 14:15:24.437322 systemd[1]: Started sshd@7-168.119.51.76:22-139.178.68.195:51912.service - OpenSSH per-connection server daemon (139.178.68.195:51912). 
Dec 13 14:15:25.420433 sshd[4178]: Accepted publickey for core from 139.178.68.195 port 51912 ssh2: RSA SHA256:xQ9dEjBaxmyM6gqQv69t+Ql8O4jhwD/hzklEIAjQOiI Dec 13 14:15:25.423475 sshd[4178]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 14:15:25.428144 systemd-logind[1463]: New session 8 of user core. Dec 13 14:15:25.436086 systemd[1]: Started session-8.scope - Session 8 of User core. Dec 13 14:15:26.200771 sshd[4178]: pam_unix(sshd:session): session closed for user core Dec 13 14:15:26.206603 systemd[1]: sshd@7-168.119.51.76:22-139.178.68.195:51912.service: Deactivated successfully. Dec 13 14:15:26.210518 systemd[1]: session-8.scope: Deactivated successfully. Dec 13 14:15:26.212123 systemd-logind[1463]: Session 8 logged out. Waiting for processes to exit. Dec 13 14:15:26.213427 systemd-logind[1463]: Removed session 8. Dec 13 14:15:31.377343 systemd[1]: Started sshd@8-168.119.51.76:22-139.178.68.195:57060.service - OpenSSH per-connection server daemon (139.178.68.195:57060). Dec 13 14:15:32.355971 sshd[4191]: Accepted publickey for core from 139.178.68.195 port 57060 ssh2: RSA SHA256:xQ9dEjBaxmyM6gqQv69t+Ql8O4jhwD/hzklEIAjQOiI Dec 13 14:15:32.358270 sshd[4191]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 14:15:32.365569 systemd-logind[1463]: New session 9 of user core. Dec 13 14:15:32.373086 systemd[1]: Started session-9.scope - Session 9 of User core. Dec 13 14:15:33.111967 sshd[4191]: pam_unix(sshd:session): session closed for user core Dec 13 14:15:33.117403 systemd[1]: sshd@8-168.119.51.76:22-139.178.68.195:57060.service: Deactivated successfully. Dec 13 14:15:33.120161 systemd[1]: session-9.scope: Deactivated successfully. Dec 13 14:15:33.122845 systemd-logind[1463]: Session 9 logged out. Waiting for processes to exit. Dec 13 14:15:33.125497 systemd-logind[1463]: Removed session 9. 
Dec 13 14:15:33.914976 update_engine[1466]: I20241213 14:15:33.914366 1466 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Dec 13 14:15:33.914976 update_engine[1466]: I20241213 14:15:33.914664 1466 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Dec 13 14:15:33.914976 update_engine[1466]: I20241213 14:15:33.914956 1466 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Dec 13 14:15:33.915613 update_engine[1466]: E20241213 14:15:33.915557 1466 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Dec 13 14:15:33.915715 update_engine[1466]: I20241213 14:15:33.915668 1466 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Dec 13 14:15:38.292237 systemd[1]: Started sshd@9-168.119.51.76:22-139.178.68.195:57588.service - OpenSSH per-connection server daemon (139.178.68.195:57588). Dec 13 14:15:39.272931 sshd[4206]: Accepted publickey for core from 139.178.68.195 port 57588 ssh2: RSA SHA256:xQ9dEjBaxmyM6gqQv69t+Ql8O4jhwD/hzklEIAjQOiI Dec 13 14:15:39.274899 sshd[4206]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 14:15:39.283031 systemd-logind[1463]: New session 10 of user core. Dec 13 14:15:39.296206 systemd[1]: Started session-10.scope - Session 10 of User core. Dec 13 14:15:40.032012 sshd[4206]: pam_unix(sshd:session): session closed for user core Dec 13 14:15:40.038242 systemd-logind[1463]: Session 10 logged out. Waiting for processes to exit. Dec 13 14:15:40.038461 systemd[1]: sshd@9-168.119.51.76:22-139.178.68.195:57588.service: Deactivated successfully. Dec 13 14:15:40.041091 systemd[1]: session-10.scope: Deactivated successfully. Dec 13 14:15:40.043060 systemd-logind[1463]: Removed session 10. Dec 13 14:15:40.209390 systemd[1]: Started sshd@10-168.119.51.76:22-139.178.68.195:57590.service - OpenSSH per-connection server daemon (139.178.68.195:57590). 
Dec 13 14:15:41.196092 sshd[4219]: Accepted publickey for core from 139.178.68.195 port 57590 ssh2: RSA SHA256:xQ9dEjBaxmyM6gqQv69t+Ql8O4jhwD/hzklEIAjQOiI Dec 13 14:15:41.198815 sshd[4219]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 14:15:41.205047 systemd-logind[1463]: New session 11 of user core. Dec 13 14:15:41.210045 systemd[1]: Started session-11.scope - Session 11 of User core. Dec 13 14:15:41.988791 sshd[4219]: pam_unix(sshd:session): session closed for user core Dec 13 14:15:41.994190 systemd-logind[1463]: Session 11 logged out. Waiting for processes to exit. Dec 13 14:15:41.994475 systemd[1]: sshd@10-168.119.51.76:22-139.178.68.195:57590.service: Deactivated successfully. Dec 13 14:15:41.997819 systemd[1]: session-11.scope: Deactivated successfully. Dec 13 14:15:42.001508 systemd-logind[1463]: Removed session 11. Dec 13 14:15:42.163285 systemd[1]: Started sshd@11-168.119.51.76:22-139.178.68.195:57602.service - OpenSSH per-connection server daemon (139.178.68.195:57602). Dec 13 14:15:43.145464 sshd[4230]: Accepted publickey for core from 139.178.68.195 port 57602 ssh2: RSA SHA256:xQ9dEjBaxmyM6gqQv69t+Ql8O4jhwD/hzklEIAjQOiI Dec 13 14:15:43.148187 sshd[4230]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 14:15:43.156390 systemd-logind[1463]: New session 12 of user core. Dec 13 14:15:43.161042 systemd[1]: Started session-12.scope - Session 12 of User core. Dec 13 14:15:43.903114 sshd[4230]: pam_unix(sshd:session): session closed for user core Dec 13 14:15:43.908911 systemd[1]: sshd@11-168.119.51.76:22-139.178.68.195:57602.service: Deactivated successfully. Dec 13 14:15:43.912481 systemd[1]: session-12.scope: Deactivated successfully. Dec 13 14:15:43.913742 systemd-logind[1463]: Session 12 logged out. Waiting for processes to exit. 
Dec 13 14:15:43.914006 update_engine[1466]: I20241213 14:15:43.913763 1466 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Dec 13 14:15:43.914729 update_engine[1466]: I20241213 14:15:43.914436 1466 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Dec 13 14:15:43.914729 update_engine[1466]: I20241213 14:15:43.914664 1466 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Dec 13 14:15:43.915561 update_engine[1466]: E20241213 14:15:43.915460 1466 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Dec 13 14:15:43.915561 update_engine[1466]: I20241213 14:15:43.915531 1466 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Dec 13 14:15:43.916157 systemd-logind[1463]: Removed session 12. Dec 13 14:15:49.080536 systemd[1]: Started sshd@12-168.119.51.76:22-139.178.68.195:50902.service - OpenSSH per-connection server daemon (139.178.68.195:50902). Dec 13 14:15:50.072319 sshd[4245]: Accepted publickey for core from 139.178.68.195 port 50902 ssh2: RSA SHA256:xQ9dEjBaxmyM6gqQv69t+Ql8O4jhwD/hzklEIAjQOiI Dec 13 14:15:50.074528 sshd[4245]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 14:15:50.081109 systemd-logind[1463]: New session 13 of user core. Dec 13 14:15:50.090059 systemd[1]: Started session-13.scope - Session 13 of User core. Dec 13 14:15:50.832338 sshd[4245]: pam_unix(sshd:session): session closed for user core Dec 13 14:15:50.836698 systemd[1]: sshd@12-168.119.51.76:22-139.178.68.195:50902.service: Deactivated successfully. Dec 13 14:15:50.840636 systemd[1]: session-13.scope: Deactivated successfully. Dec 13 14:15:50.843048 systemd-logind[1463]: Session 13 logged out. Waiting for processes to exit. Dec 13 14:15:50.844565 systemd-logind[1463]: Removed session 13. 
Dec 13 14:15:53.916426 update_engine[1466]: I20241213 14:15:53.916281 1466 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Dec 13 14:15:53.916943 update_engine[1466]: I20241213 14:15:53.916678 1466 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Dec 13 14:15:53.917103 update_engine[1466]: I20241213 14:15:53.917037 1466 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Dec 13 14:15:53.917959 update_engine[1466]: E20241213 14:15:53.917897 1466 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Dec 13 14:15:53.918081 update_engine[1466]: I20241213 14:15:53.917976 1466 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Dec 13 14:15:53.918081 update_engine[1466]: I20241213 14:15:53.917991 1466 omaha_request_action.cc:617] Omaha request response: Dec 13 14:15:53.918163 update_engine[1466]: E20241213 14:15:53.918105 1466 omaha_request_action.cc:636] Omaha request network transfer failed. Dec 13 14:15:53.918163 update_engine[1466]: I20241213 14:15:53.918131 1466 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Dec 13 14:15:53.918163 update_engine[1466]: I20241213 14:15:53.918141 1466 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Dec 13 14:15:53.918163 update_engine[1466]: I20241213 14:15:53.918150 1466 update_attempter.cc:306] Processing Done. Dec 13 14:15:53.918325 update_engine[1466]: E20241213 14:15:53.918169 1466 update_attempter.cc:619] Update failed. 
Dec 13 14:15:53.918325 update_engine[1466]: I20241213 14:15:53.918179 1466 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Dec 13 14:15:53.918325 update_engine[1466]: I20241213 14:15:53.918186 1466 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Dec 13 14:15:53.918325 update_engine[1466]: I20241213 14:15:53.918195 1466 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. Dec 13 14:15:53.918325 update_engine[1466]: I20241213 14:15:53.918283 1466 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Dec 13 14:15:53.918325 update_engine[1466]: I20241213 14:15:53.918314 1466 omaha_request_action.cc:271] Posting an Omaha request to disabled Dec 13 14:15:53.918325 update_engine[1466]: I20241213 14:15:53.918325 1466 omaha_request_action.cc:272] Request: Dec 13 14:15:53.918325 update_engine[1466]: Dec 13 14:15:53.918325 update_engine[1466]: Dec 13 14:15:53.918325 update_engine[1466]: Dec 13 14:15:53.918325 update_engine[1466]: Dec 13 14:15:53.918325 update_engine[1466]: Dec 13 14:15:53.918325 update_engine[1466]: Dec 13 14:15:53.918760 update_engine[1466]: I20241213 14:15:53.918334 1466 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Dec 13 14:15:53.918760 update_engine[1466]: I20241213 14:15:53.918516 1466 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Dec 13 14:15:53.918760 update_engine[1466]: I20241213 14:15:53.918704 1466 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Dec 13 14:15:53.919181 locksmithd[1498]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Dec 13 14:15:53.919674 update_engine[1466]: E20241213 14:15:53.919628 1466 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Dec 13 14:15:53.919725 update_engine[1466]: I20241213 14:15:53.919693 1466 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Dec 13 14:15:53.919725 update_engine[1466]: I20241213 14:15:53.919705 1466 omaha_request_action.cc:617] Omaha request response: Dec 13 14:15:53.919725 update_engine[1466]: I20241213 14:15:53.919715 1466 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Dec 13 14:15:53.919839 update_engine[1466]: I20241213 14:15:53.919724 1466 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Dec 13 14:15:53.919839 update_engine[1466]: I20241213 14:15:53.919732 1466 update_attempter.cc:306] Processing Done. Dec 13 14:15:53.919839 update_engine[1466]: I20241213 14:15:53.919742 1466 update_attempter.cc:310] Error event sent. Dec 13 14:15:53.919839 update_engine[1466]: I20241213 14:15:53.919754 1466 update_check_scheduler.cc:74] Next update check in 43m4s Dec 13 14:15:53.920153 locksmithd[1498]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Dec 13 14:15:56.010193 systemd[1]: Started sshd@13-168.119.51.76:22-139.178.68.195:50912.service - OpenSSH per-connection server daemon (139.178.68.195:50912). Dec 13 14:15:56.987994 sshd[4257]: Accepted publickey for core from 139.178.68.195 port 50912 ssh2: RSA SHA256:xQ9dEjBaxmyM6gqQv69t+Ql8O4jhwD/hzklEIAjQOiI Dec 13 14:15:56.990209 sshd[4257]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 14:15:56.995409 systemd-logind[1463]: New session 14 of user core. 
Dec 13 14:15:57.003144 systemd[1]: Started session-14.scope - Session 14 of User core. Dec 13 14:15:57.747281 sshd[4257]: pam_unix(sshd:session): session closed for user core Dec 13 14:15:57.752497 systemd[1]: sshd@13-168.119.51.76:22-139.178.68.195:50912.service: Deactivated successfully. Dec 13 14:15:57.755714 systemd[1]: session-14.scope: Deactivated successfully. Dec 13 14:15:57.758441 systemd-logind[1463]: Session 14 logged out. Waiting for processes to exit. Dec 13 14:15:57.759801 systemd-logind[1463]: Removed session 14. Dec 13 14:15:57.925302 systemd[1]: Started sshd@14-168.119.51.76:22-139.178.68.195:43790.service - OpenSSH per-connection server daemon (139.178.68.195:43790). Dec 13 14:15:58.919795 sshd[4269]: Accepted publickey for core from 139.178.68.195 port 43790 ssh2: RSA SHA256:xQ9dEjBaxmyM6gqQv69t+Ql8O4jhwD/hzklEIAjQOiI Dec 13 14:15:58.922330 sshd[4269]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 14:15:58.928348 systemd-logind[1463]: New session 15 of user core. Dec 13 14:15:58.936093 systemd[1]: Started session-15.scope - Session 15 of User core. Dec 13 14:15:59.746224 sshd[4269]: pam_unix(sshd:session): session closed for user core Dec 13 14:15:59.751635 systemd[1]: sshd@14-168.119.51.76:22-139.178.68.195:43790.service: Deactivated successfully. Dec 13 14:15:59.755503 systemd[1]: session-15.scope: Deactivated successfully. Dec 13 14:15:59.757603 systemd-logind[1463]: Session 15 logged out. Waiting for processes to exit. Dec 13 14:15:59.759800 systemd-logind[1463]: Removed session 15. Dec 13 14:15:59.915798 systemd[1]: Started sshd@15-168.119.51.76:22-139.178.68.195:43798.service - OpenSSH per-connection server daemon (139.178.68.195:43798). 
Dec 13 14:16:00.913341 sshd[4280]: Accepted publickey for core from 139.178.68.195 port 43798 ssh2: RSA SHA256:xQ9dEjBaxmyM6gqQv69t+Ql8O4jhwD/hzklEIAjQOiI Dec 13 14:16:00.915419 sshd[4280]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 14:16:00.922077 systemd-logind[1463]: New session 16 of user core. Dec 13 14:16:00.927048 systemd[1]: Started session-16.scope - Session 16 of User core. Dec 13 14:16:03.316686 sshd[4280]: pam_unix(sshd:session): session closed for user core Dec 13 14:16:03.322413 systemd[1]: sshd@15-168.119.51.76:22-139.178.68.195:43798.service: Deactivated successfully. Dec 13 14:16:03.327474 systemd[1]: session-16.scope: Deactivated successfully. Dec 13 14:16:03.330711 systemd-logind[1463]: Session 16 logged out. Waiting for processes to exit. Dec 13 14:16:03.332910 systemd-logind[1463]: Removed session 16. Dec 13 14:16:03.501480 systemd[1]: Started sshd@16-168.119.51.76:22-139.178.68.195:43804.service - OpenSSH per-connection server daemon (139.178.68.195:43804). Dec 13 14:16:04.500664 sshd[4299]: Accepted publickey for core from 139.178.68.195 port 43804 ssh2: RSA SHA256:xQ9dEjBaxmyM6gqQv69t+Ql8O4jhwD/hzklEIAjQOiI Dec 13 14:16:04.503180 sshd[4299]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 14:16:04.511285 systemd-logind[1463]: New session 17 of user core. Dec 13 14:16:04.515516 systemd[1]: Started session-17.scope - Session 17 of User core. Dec 13 14:16:05.377273 sshd[4299]: pam_unix(sshd:session): session closed for user core Dec 13 14:16:05.382203 systemd[1]: sshd@16-168.119.51.76:22-139.178.68.195:43804.service: Deactivated successfully. Dec 13 14:16:05.385275 systemd[1]: session-17.scope: Deactivated successfully. Dec 13 14:16:05.389092 systemd-logind[1463]: Session 17 logged out. Waiting for processes to exit. Dec 13 14:16:05.390154 systemd-logind[1463]: Removed session 17. 
Dec 13 14:16:05.548242 systemd[1]: Started sshd@17-168.119.51.76:22-139.178.68.195:43810.service - OpenSSH per-connection server daemon (139.178.68.195:43810).
Dec 13 14:16:06.529032 sshd[4309]: Accepted publickey for core from 139.178.68.195 port 43810 ssh2: RSA SHA256:xQ9dEjBaxmyM6gqQv69t+Ql8O4jhwD/hzklEIAjQOiI
Dec 13 14:16:06.531815 sshd[4309]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 14:16:06.538701 systemd-logind[1463]: New session 18 of user core.
Dec 13 14:16:06.544160 systemd[1]: Started session-18.scope - Session 18 of User core.
Dec 13 14:16:07.296254 sshd[4309]: pam_unix(sshd:session): session closed for user core
Dec 13 14:16:07.301149 systemd[1]: session-18.scope: Deactivated successfully.
Dec 13 14:16:07.302343 systemd[1]: sshd@17-168.119.51.76:22-139.178.68.195:43810.service: Deactivated successfully.
Dec 13 14:16:07.306297 systemd-logind[1463]: Session 18 logged out. Waiting for processes to exit.
Dec 13 14:16:07.308040 systemd-logind[1463]: Removed session 18.
Dec 13 14:16:12.467157 systemd[1]: Started sshd@18-168.119.51.76:22-139.178.68.195:33970.service - OpenSSH per-connection server daemon (139.178.68.195:33970).
Dec 13 14:16:13.467369 sshd[4324]: Accepted publickey for core from 139.178.68.195 port 33970 ssh2: RSA SHA256:xQ9dEjBaxmyM6gqQv69t+Ql8O4jhwD/hzklEIAjQOiI
Dec 13 14:16:13.469151 sshd[4324]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 14:16:13.475747 systemd-logind[1463]: New session 19 of user core.
Dec 13 14:16:13.482072 systemd[1]: Started session-19.scope - Session 19 of User core.
Dec 13 14:16:14.221789 sshd[4324]: pam_unix(sshd:session): session closed for user core
Dec 13 14:16:14.227693 systemd[1]: sshd@18-168.119.51.76:22-139.178.68.195:33970.service: Deactivated successfully.
Dec 13 14:16:14.229722 systemd[1]: session-19.scope: Deactivated successfully.
Dec 13 14:16:14.231022 systemd-logind[1463]: Session 19 logged out. Waiting for processes to exit.
Dec 13 14:16:14.232350 systemd-logind[1463]: Removed session 19.
Dec 13 14:16:19.394237 systemd[1]: Started sshd@19-168.119.51.76:22-139.178.68.195:43678.service - OpenSSH per-connection server daemon (139.178.68.195:43678).
Dec 13 14:16:20.377287 sshd[4340]: Accepted publickey for core from 139.178.68.195 port 43678 ssh2: RSA SHA256:xQ9dEjBaxmyM6gqQv69t+Ql8O4jhwD/hzklEIAjQOiI
Dec 13 14:16:20.379710 sshd[4340]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 14:16:20.386174 systemd-logind[1463]: New session 20 of user core.
Dec 13 14:16:20.392070 systemd[1]: Started session-20.scope - Session 20 of User core.
Dec 13 14:16:21.131412 sshd[4340]: pam_unix(sshd:session): session closed for user core
Dec 13 14:16:21.136258 systemd[1]: sshd@19-168.119.51.76:22-139.178.68.195:43678.service: Deactivated successfully.
Dec 13 14:16:21.138494 systemd[1]: session-20.scope: Deactivated successfully.
Dec 13 14:16:21.140945 systemd-logind[1463]: Session 20 logged out. Waiting for processes to exit.
Dec 13 14:16:21.143152 systemd-logind[1463]: Removed session 20.
Dec 13 14:16:26.309296 systemd[1]: Started sshd@20-168.119.51.76:22-139.178.68.195:47552.service - OpenSSH per-connection server daemon (139.178.68.195:47552).
Dec 13 14:16:27.305616 sshd[4353]: Accepted publickey for core from 139.178.68.195 port 47552 ssh2: RSA SHA256:xQ9dEjBaxmyM6gqQv69t+Ql8O4jhwD/hzklEIAjQOiI
Dec 13 14:16:27.308197 sshd[4353]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 14:16:27.314567 systemd-logind[1463]: New session 21 of user core.
Dec 13 14:16:27.324952 systemd[1]: Started session-21.scope - Session 21 of User core.
Dec 13 14:16:28.056986 sshd[4353]: pam_unix(sshd:session): session closed for user core
Dec 13 14:16:28.062754 systemd[1]: sshd@20-168.119.51.76:22-139.178.68.195:47552.service: Deactivated successfully.
Dec 13 14:16:28.064992 systemd[1]: session-21.scope: Deactivated successfully. Dec 13 14:16:28.066951 systemd-logind[1463]: Session 21 logged out. Waiting for processes to exit. Dec 13 14:16:28.068717 systemd-logind[1463]: Removed session 21. Dec 13 14:16:28.234212 systemd[1]: Started sshd@21-168.119.51.76:22-139.178.68.195:47558.service - OpenSSH per-connection server daemon (139.178.68.195:47558). Dec 13 14:16:29.225127 sshd[4366]: Accepted publickey for core from 139.178.68.195 port 47558 ssh2: RSA SHA256:xQ9dEjBaxmyM6gqQv69t+Ql8O4jhwD/hzklEIAjQOiI Dec 13 14:16:29.227190 sshd[4366]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 14:16:29.234044 systemd-logind[1463]: New session 22 of user core. Dec 13 14:16:29.240308 systemd[1]: Started session-22.scope - Session 22 of User core. Dec 13 14:16:31.684586 containerd[1486]: time="2024-12-13T14:16:31.684094547Z" level=info msg="StopContainer for \"18391c66700f0495ef23e982355d06a17a1f7e04a77ab73a34c1ba58a5aef95d\" with timeout 30 (s)" Dec 13 14:16:31.689468 containerd[1486]: time="2024-12-13T14:16:31.686232865Z" level=info msg="Stop container \"18391c66700f0495ef23e982355d06a17a1f7e04a77ab73a34c1ba58a5aef95d\" with signal terminated" Dec 13 14:16:31.700624 systemd[1]: cri-containerd-18391c66700f0495ef23e982355d06a17a1f7e04a77ab73a34c1ba58a5aef95d.scope: Deactivated successfully. 
Dec 13 14:16:31.711496 containerd[1486]: time="2024-12-13T14:16:31.711441372Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 13 14:16:31.724151 containerd[1486]: time="2024-12-13T14:16:31.724090205Z" level=info msg="StopContainer for \"de5b24625c8f6a721c39b48dac4eae97bac09aa964fce286db14b55f01affb7f\" with timeout 2 (s)" Dec 13 14:16:31.724410 containerd[1486]: time="2024-12-13T14:16:31.724376599Z" level=info msg="Stop container \"de5b24625c8f6a721c39b48dac4eae97bac09aa964fce286db14b55f01affb7f\" with signal terminated" Dec 13 14:16:31.726028 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-18391c66700f0495ef23e982355d06a17a1f7e04a77ab73a34c1ba58a5aef95d-rootfs.mount: Deactivated successfully. Dec 13 14:16:31.736549 systemd-networkd[1376]: lxc_health: Link DOWN Dec 13 14:16:31.736562 systemd-networkd[1376]: lxc_health: Lost carrier Dec 13 14:16:31.740355 containerd[1486]: time="2024-12-13T14:16:31.740216450Z" level=info msg="shim disconnected" id=18391c66700f0495ef23e982355d06a17a1f7e04a77ab73a34c1ba58a5aef95d namespace=k8s.io Dec 13 14:16:31.740355 containerd[1486]: time="2024-12-13T14:16:31.740272929Z" level=warning msg="cleaning up after shim disconnected" id=18391c66700f0495ef23e982355d06a17a1f7e04a77ab73a34c1ba58a5aef95d namespace=k8s.io Dec 13 14:16:31.740355 containerd[1486]: time="2024-12-13T14:16:31.740281649Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 14:16:31.756799 systemd[1]: cri-containerd-de5b24625c8f6a721c39b48dac4eae97bac09aa964fce286db14b55f01affb7f.scope: Deactivated successfully. Dec 13 14:16:31.759133 systemd[1]: cri-containerd-de5b24625c8f6a721c39b48dac4eae97bac09aa964fce286db14b55f01affb7f.scope: Consumed 7.755s CPU time. 
Dec 13 14:16:31.766344 containerd[1486]: time="2024-12-13T14:16:31.766300100Z" level=info msg="StopContainer for \"18391c66700f0495ef23e982355d06a17a1f7e04a77ab73a34c1ba58a5aef95d\" returns successfully" Dec 13 14:16:31.766926 containerd[1486]: time="2024-12-13T14:16:31.766899168Z" level=info msg="StopPodSandbox for \"03518032589f30ba6739af93c9d7fcd4a280ecb30b226db7dbccc7ffc8fb7522\"" Dec 13 14:16:31.767091 containerd[1486]: time="2024-12-13T14:16:31.767018566Z" level=info msg="Container to stop \"18391c66700f0495ef23e982355d06a17a1f7e04a77ab73a34c1ba58a5aef95d\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 14:16:31.769683 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-03518032589f30ba6739af93c9d7fcd4a280ecb30b226db7dbccc7ffc8fb7522-shm.mount: Deactivated successfully. Dec 13 14:16:31.778282 systemd[1]: cri-containerd-03518032589f30ba6739af93c9d7fcd4a280ecb30b226db7dbccc7ffc8fb7522.scope: Deactivated successfully. Dec 13 14:16:31.798226 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-de5b24625c8f6a721c39b48dac4eae97bac09aa964fce286db14b55f01affb7f-rootfs.mount: Deactivated successfully. Dec 13 14:16:31.808784 containerd[1486]: time="2024-12-13T14:16:31.808606913Z" level=info msg="shim disconnected" id=de5b24625c8f6a721c39b48dac4eae97bac09aa964fce286db14b55f01affb7f namespace=k8s.io Dec 13 14:16:31.808784 containerd[1486]: time="2024-12-13T14:16:31.808767470Z" level=warning msg="cleaning up after shim disconnected" id=de5b24625c8f6a721c39b48dac4eae97bac09aa964fce286db14b55f01affb7f namespace=k8s.io Dec 13 14:16:31.808784 containerd[1486]: time="2024-12-13T14:16:31.808778030Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 14:16:31.816265 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-03518032589f30ba6739af93c9d7fcd4a280ecb30b226db7dbccc7ffc8fb7522-rootfs.mount: Deactivated successfully. 
Dec 13 14:16:31.818170 containerd[1486]: time="2024-12-13T14:16:31.818026609Z" level=info msg="shim disconnected" id=03518032589f30ba6739af93c9d7fcd4a280ecb30b226db7dbccc7ffc8fb7522 namespace=k8s.io
Dec 13 14:16:31.818170 containerd[1486]: time="2024-12-13T14:16:31.818079448Z" level=warning msg="cleaning up after shim disconnected" id=03518032589f30ba6739af93c9d7fcd4a280ecb30b226db7dbccc7ffc8fb7522 namespace=k8s.io
Dec 13 14:16:31.818170 containerd[1486]: time="2024-12-13T14:16:31.818100288Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 13 14:16:31.830954 containerd[1486]: time="2024-12-13T14:16:31.830751400Z" level=info msg="StopContainer for \"de5b24625c8f6a721c39b48dac4eae97bac09aa964fce286db14b55f01affb7f\" returns successfully"
Dec 13 14:16:31.831383 containerd[1486]: time="2024-12-13T14:16:31.831324389Z" level=info msg="StopPodSandbox for \"c84a7084f61943129e5260f5e2ab388cb42a09403b967e1d0a581a809763e09c\""
Dec 13 14:16:31.831436 containerd[1486]: time="2024-12-13T14:16:31.831361948Z" level=info msg="Container to stop \"687c6330c40d5fa770a72300c852c15b7ace2142f268b2d0648055cd04ed4698\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 14:16:31.831436 containerd[1486]: time="2024-12-13T14:16:31.831393588Z" level=info msg="Container to stop \"0ec7f1e5dadd73d2796e8270f1881657a2f9a6406dcf7988e91326751b6d2c93\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 14:16:31.831436 containerd[1486]: time="2024-12-13T14:16:31.831402788Z" level=info msg="Container to stop \"de5b24625c8f6a721c39b48dac4eae97bac09aa964fce286db14b55f01affb7f\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 14:16:31.831436 containerd[1486]: time="2024-12-13T14:16:31.831413187Z" level=info msg="Container to stop \"bb132cd0cdf12bb790b4d4c848e5de3e53b48d31b43339d4a125a0fad0cf54b9\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 14:16:31.831436 containerd[1486]: time="2024-12-13T14:16:31.831422547Z" level=info msg="Container to stop \"176397b28b37119c1c78538aced9c56022e75d0622a26f1d56d0de8ff234f718\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 14:16:31.832241 containerd[1486]: time="2024-12-13T14:16:31.832213172Z" level=info msg="TearDown network for sandbox \"03518032589f30ba6739af93c9d7fcd4a280ecb30b226db7dbccc7ffc8fb7522\" successfully"
Dec 13 14:16:31.832241 containerd[1486]: time="2024-12-13T14:16:31.832232691Z" level=info msg="StopPodSandbox for \"03518032589f30ba6739af93c9d7fcd4a280ecb30b226db7dbccc7ffc8fb7522\" returns successfully"
Dec 13 14:16:31.844412 systemd[1]: cri-containerd-c84a7084f61943129e5260f5e2ab388cb42a09403b967e1d0a581a809763e09c.scope: Deactivated successfully.
Dec 13 14:16:31.879641 containerd[1486]: time="2024-12-13T14:16:31.879314291Z" level=info msg="shim disconnected" id=c84a7084f61943129e5260f5e2ab388cb42a09403b967e1d0a581a809763e09c namespace=k8s.io
Dec 13 14:16:31.879641 containerd[1486]: time="2024-12-13T14:16:31.879391330Z" level=warning msg="cleaning up after shim disconnected" id=c84a7084f61943129e5260f5e2ab388cb42a09403b967e1d0a581a809763e09c namespace=k8s.io
Dec 13 14:16:31.879641 containerd[1486]: time="2024-12-13T14:16:31.879411449Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 13 14:16:31.893110 containerd[1486]: time="2024-12-13T14:16:31.893053623Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:16:31Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Dec 13 14:16:31.894591 containerd[1486]: time="2024-12-13T14:16:31.894395757Z" level=info msg="TearDown network for sandbox \"c84a7084f61943129e5260f5e2ab388cb42a09403b967e1d0a581a809763e09c\" successfully"
Dec 13 14:16:31.894591 containerd[1486]: time="2024-12-13T14:16:31.894427676Z" level=info msg="StopPodSandbox for \"c84a7084f61943129e5260f5e2ab388cb42a09403b967e1d0a581a809763e09c\" returns successfully"
Dec 13 14:16:31.961890 kubelet[2779]: I1213 14:16:31.959976 2779 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/1a43c596-cf6e-4ef0-aad5-55fc345d4d33-clustermesh-secrets\") pod \"1a43c596-cf6e-4ef0-aad5-55fc345d4d33\" (UID: \"1a43c596-cf6e-4ef0-aad5-55fc345d4d33\") "
Dec 13 14:16:31.961890 kubelet[2779]: I1213 14:16:31.960064 2779 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/1a43c596-cf6e-4ef0-aad5-55fc345d4d33-bpf-maps\") pod \"1a43c596-cf6e-4ef0-aad5-55fc345d4d33\" (UID: \"1a43c596-cf6e-4ef0-aad5-55fc345d4d33\") "
Dec 13 14:16:31.961890 kubelet[2779]: I1213 14:16:31.960121 2779 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2tgdl\" (UniqueName: \"kubernetes.io/projected/1a43c596-cf6e-4ef0-aad5-55fc345d4d33-kube-api-access-2tgdl\") pod \"1a43c596-cf6e-4ef0-aad5-55fc345d4d33\" (UID: \"1a43c596-cf6e-4ef0-aad5-55fc345d4d33\") "
Dec 13 14:16:31.961890 kubelet[2779]: I1213 14:16:31.960163 2779 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1a43c596-cf6e-4ef0-aad5-55fc345d4d33-xtables-lock\") pod \"1a43c596-cf6e-4ef0-aad5-55fc345d4d33\" (UID: \"1a43c596-cf6e-4ef0-aad5-55fc345d4d33\") "
Dec 13 14:16:31.961890 kubelet[2779]: I1213 14:16:31.960199 2779 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1a43c596-cf6e-4ef0-aad5-55fc345d4d33-etc-cni-netd\") pod \"1a43c596-cf6e-4ef0-aad5-55fc345d4d33\" (UID: \"1a43c596-cf6e-4ef0-aad5-55fc345d4d33\") "
Dec 13 14:16:31.961890 kubelet[2779]: I1213 14:16:31.960233 2779 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/1a43c596-cf6e-4ef0-aad5-55fc345d4d33-hostproc\") pod \"1a43c596-cf6e-4ef0-aad5-55fc345d4d33\" (UID: \"1a43c596-cf6e-4ef0-aad5-55fc345d4d33\") "
Dec 13 14:16:31.962849 kubelet[2779]: I1213 14:16:31.960271 2779 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/1a43c596-cf6e-4ef0-aad5-55fc345d4d33-hubble-tls\") pod \"1a43c596-cf6e-4ef0-aad5-55fc345d4d33\" (UID: \"1a43c596-cf6e-4ef0-aad5-55fc345d4d33\") "
Dec 13 14:16:31.962849 kubelet[2779]: I1213 14:16:31.960302 2779 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/1a43c596-cf6e-4ef0-aad5-55fc345d4d33-cni-path\") pod \"1a43c596-cf6e-4ef0-aad5-55fc345d4d33\" (UID: \"1a43c596-cf6e-4ef0-aad5-55fc345d4d33\") "
Dec 13 14:16:31.962849 kubelet[2779]: I1213 14:16:31.960343 2779 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1a43c596-cf6e-4ef0-aad5-55fc345d4d33-cilium-config-path\") pod \"1a43c596-cf6e-4ef0-aad5-55fc345d4d33\" (UID: \"1a43c596-cf6e-4ef0-aad5-55fc345d4d33\") "
Dec 13 14:16:31.962849 kubelet[2779]: I1213 14:16:31.960377 2779 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/1a43c596-cf6e-4ef0-aad5-55fc345d4d33-cilium-cgroup\") pod \"1a43c596-cf6e-4ef0-aad5-55fc345d4d33\" (UID: \"1a43c596-cf6e-4ef0-aad5-55fc345d4d33\") "
Dec 13 14:16:31.962849 kubelet[2779]: I1213 14:16:31.960413 2779 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/1a43c596-cf6e-4ef0-aad5-55fc345d4d33-host-proc-sys-net\") pod \"1a43c596-cf6e-4ef0-aad5-55fc345d4d33\" (UID: \"1a43c596-cf6e-4ef0-aad5-55fc345d4d33\") "
Dec 13 14:16:31.962849 kubelet[2779]: I1213 14:16:31.960444 2779 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1a43c596-cf6e-4ef0-aad5-55fc345d4d33-lib-modules\") pod \"1a43c596-cf6e-4ef0-aad5-55fc345d4d33\" (UID: \"1a43c596-cf6e-4ef0-aad5-55fc345d4d33\") "
Dec 13 14:16:31.963270 kubelet[2779]: I1213 14:16:31.960483 2779 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j6z6v\" (UniqueName: \"kubernetes.io/projected/cfd11f4c-33a7-49d9-973a-5796bd640759-kube-api-access-j6z6v\") pod \"cfd11f4c-33a7-49d9-973a-5796bd640759\" (UID: \"cfd11f4c-33a7-49d9-973a-5796bd640759\") "
Dec 13 14:16:31.963270 kubelet[2779]: I1213 14:16:31.960516 2779 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/1a43c596-cf6e-4ef0-aad5-55fc345d4d33-cilium-run\") pod \"1a43c596-cf6e-4ef0-aad5-55fc345d4d33\" (UID: \"1a43c596-cf6e-4ef0-aad5-55fc345d4d33\") "
Dec 13 14:16:31.963270 kubelet[2779]: I1213 14:16:31.960548 2779 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/1a43c596-cf6e-4ef0-aad5-55fc345d4d33-host-proc-sys-kernel\") pod \"1a43c596-cf6e-4ef0-aad5-55fc345d4d33\" (UID: \"1a43c596-cf6e-4ef0-aad5-55fc345d4d33\") "
Dec 13 14:16:31.963270 kubelet[2779]: I1213 14:16:31.960591 2779 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/cfd11f4c-33a7-49d9-973a-5796bd640759-cilium-config-path\") pod \"cfd11f4c-33a7-49d9-973a-5796bd640759\" (UID: \"cfd11f4c-33a7-49d9-973a-5796bd640759\") "
Dec 13 14:16:31.966107 kubelet[2779]: I1213 14:16:31.966043 2779 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1a43c596-cf6e-4ef0-aad5-55fc345d4d33-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "1a43c596-cf6e-4ef0-aad5-55fc345d4d33" (UID: "1a43c596-cf6e-4ef0-aad5-55fc345d4d33"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 13 14:16:31.966240 kubelet[2779]: I1213 14:16:31.966211 2779 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1a43c596-cf6e-4ef0-aad5-55fc345d4d33-cni-path" (OuterVolumeSpecName: "cni-path") pod "1a43c596-cf6e-4ef0-aad5-55fc345d4d33" (UID: "1a43c596-cf6e-4ef0-aad5-55fc345d4d33"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 14:16:31.966308 kubelet[2779]: I1213 14:16:31.966255 2779 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1a43c596-cf6e-4ef0-aad5-55fc345d4d33-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "1a43c596-cf6e-4ef0-aad5-55fc345d4d33" (UID: "1a43c596-cf6e-4ef0-aad5-55fc345d4d33"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 14:16:31.970193 kubelet[2779]: I1213 14:16:31.970120 2779 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1a43c596-cf6e-4ef0-aad5-55fc345d4d33-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "1a43c596-cf6e-4ef0-aad5-55fc345d4d33" (UID: "1a43c596-cf6e-4ef0-aad5-55fc345d4d33"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 14:16:31.970193 kubelet[2779]: I1213 14:16:31.970194 2779 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1a43c596-cf6e-4ef0-aad5-55fc345d4d33-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "1a43c596-cf6e-4ef0-aad5-55fc345d4d33" (UID: "1a43c596-cf6e-4ef0-aad5-55fc345d4d33"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 14:16:31.970520 kubelet[2779]: I1213 14:16:31.970477 2779 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1a43c596-cf6e-4ef0-aad5-55fc345d4d33-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "1a43c596-cf6e-4ef0-aad5-55fc345d4d33" (UID: "1a43c596-cf6e-4ef0-aad5-55fc345d4d33"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 14:16:31.970688 kubelet[2779]: I1213 14:16:31.970660 2779 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1a43c596-cf6e-4ef0-aad5-55fc345d4d33-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "1a43c596-cf6e-4ef0-aad5-55fc345d4d33" (UID: "1a43c596-cf6e-4ef0-aad5-55fc345d4d33"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 14:16:31.970813 kubelet[2779]: I1213 14:16:31.970664 2779 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cfd11f4c-33a7-49d9-973a-5796bd640759-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "cfd11f4c-33a7-49d9-973a-5796bd640759" (UID: "cfd11f4c-33a7-49d9-973a-5796bd640759"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 13 14:16:31.970976 kubelet[2779]: I1213 14:16:31.970692 2779 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1a43c596-cf6e-4ef0-aad5-55fc345d4d33-hostproc" (OuterVolumeSpecName: "hostproc") pod "1a43c596-cf6e-4ef0-aad5-55fc345d4d33" (UID: "1a43c596-cf6e-4ef0-aad5-55fc345d4d33"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 14:16:31.975779 kubelet[2779]: I1213 14:16:31.971119 2779 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1a43c596-cf6e-4ef0-aad5-55fc345d4d33-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "1a43c596-cf6e-4ef0-aad5-55fc345d4d33" (UID: "1a43c596-cf6e-4ef0-aad5-55fc345d4d33"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 14:16:31.975779 kubelet[2779]: I1213 14:16:31.973712 2779 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1a43c596-cf6e-4ef0-aad5-55fc345d4d33-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "1a43c596-cf6e-4ef0-aad5-55fc345d4d33" (UID: "1a43c596-cf6e-4ef0-aad5-55fc345d4d33"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 14:16:31.975779 kubelet[2779]: I1213 14:16:31.973757 2779 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1a43c596-cf6e-4ef0-aad5-55fc345d4d33-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "1a43c596-cf6e-4ef0-aad5-55fc345d4d33" (UID: "1a43c596-cf6e-4ef0-aad5-55fc345d4d33"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 14:16:31.975779 kubelet[2779]: I1213 14:16:31.975706 2779 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1a43c596-cf6e-4ef0-aad5-55fc345d4d33-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "1a43c596-cf6e-4ef0-aad5-55fc345d4d33" (UID: "1a43c596-cf6e-4ef0-aad5-55fc345d4d33"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 13 14:16:31.975980 kubelet[2779]: I1213 14:16:31.975847 2779 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1a43c596-cf6e-4ef0-aad5-55fc345d4d33-kube-api-access-2tgdl" (OuterVolumeSpecName: "kube-api-access-2tgdl") pod "1a43c596-cf6e-4ef0-aad5-55fc345d4d33" (UID: "1a43c596-cf6e-4ef0-aad5-55fc345d4d33"). InnerVolumeSpecName "kube-api-access-2tgdl". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 13 14:16:31.976161 kubelet[2779]: I1213 14:16:31.976137 2779 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cfd11f4c-33a7-49d9-973a-5796bd640759-kube-api-access-j6z6v" (OuterVolumeSpecName: "kube-api-access-j6z6v") pod "cfd11f4c-33a7-49d9-973a-5796bd640759" (UID: "cfd11f4c-33a7-49d9-973a-5796bd640759"). InnerVolumeSpecName "kube-api-access-j6z6v". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 13 14:16:31.978021 kubelet[2779]: I1213 14:16:31.977986 2779 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1a43c596-cf6e-4ef0-aad5-55fc345d4d33-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "1a43c596-cf6e-4ef0-aad5-55fc345d4d33" (UID: "1a43c596-cf6e-4ef0-aad5-55fc345d4d33"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 13 14:16:32.061549 kubelet[2779]: I1213 14:16:32.061476 2779 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/1a43c596-cf6e-4ef0-aad5-55fc345d4d33-host-proc-sys-net\") on node \"ci-4081-2-1-a-7dfc9bce8d\" DevicePath \"\""
Dec 13 14:16:32.061549 kubelet[2779]: I1213 14:16:32.061529 2779 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1a43c596-cf6e-4ef0-aad5-55fc345d4d33-lib-modules\") on node \"ci-4081-2-1-a-7dfc9bce8d\" DevicePath \"\""
Dec 13 14:16:32.061549 kubelet[2779]: I1213 14:16:32.061545 2779 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/1a43c596-cf6e-4ef0-aad5-55fc345d4d33-cilium-run\") on node \"ci-4081-2-1-a-7dfc9bce8d\" DevicePath \"\""
Dec 13 14:16:32.061549 kubelet[2779]: I1213 14:16:32.061559 2779 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/1a43c596-cf6e-4ef0-aad5-55fc345d4d33-host-proc-sys-kernel\") on node \"ci-4081-2-1-a-7dfc9bce8d\" DevicePath \"\""
Dec 13 14:16:32.061549 kubelet[2779]: I1213 14:16:32.061574 2779 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/cfd11f4c-33a7-49d9-973a-5796bd640759-cilium-config-path\") on node \"ci-4081-2-1-a-7dfc9bce8d\" DevicePath \"\""
Dec 13 14:16:32.062120 kubelet[2779]: I1213 14:16:32.061587 2779 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-j6z6v\" (UniqueName: \"kubernetes.io/projected/cfd11f4c-33a7-49d9-973a-5796bd640759-kube-api-access-j6z6v\") on node \"ci-4081-2-1-a-7dfc9bce8d\" DevicePath \"\""
Dec 13 14:16:32.062120 kubelet[2779]: I1213 14:16:32.061599 2779 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/1a43c596-cf6e-4ef0-aad5-55fc345d4d33-clustermesh-secrets\") on node \"ci-4081-2-1-a-7dfc9bce8d\" DevicePath \"\""
Dec 13 14:16:32.062120 kubelet[2779]: I1213 14:16:32.061611 2779 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/1a43c596-cf6e-4ef0-aad5-55fc345d4d33-bpf-maps\") on node \"ci-4081-2-1-a-7dfc9bce8d\" DevicePath \"\""
Dec 13 14:16:32.062120 kubelet[2779]: I1213 14:16:32.061624 2779 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-2tgdl\" (UniqueName: \"kubernetes.io/projected/1a43c596-cf6e-4ef0-aad5-55fc345d4d33-kube-api-access-2tgdl\") on node \"ci-4081-2-1-a-7dfc9bce8d\" DevicePath \"\""
Dec 13 14:16:32.062120 kubelet[2779]: I1213 14:16:32.061639 2779 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/1a43c596-cf6e-4ef0-aad5-55fc345d4d33-hostproc\") on node \"ci-4081-2-1-a-7dfc9bce8d\" DevicePath \"\""
Dec 13 14:16:32.062120 kubelet[2779]: I1213 14:16:32.061651 2779 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/1a43c596-cf6e-4ef0-aad5-55fc345d4d33-hubble-tls\") on node \"ci-4081-2-1-a-7dfc9bce8d\" DevicePath \"\""
Dec 13 14:16:32.062120 kubelet[2779]: I1213 14:16:32.061662 2779 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1a43c596-cf6e-4ef0-aad5-55fc345d4d33-xtables-lock\") on node \"ci-4081-2-1-a-7dfc9bce8d\" DevicePath \"\""
Dec 13 14:16:32.062120 kubelet[2779]: I1213 14:16:32.061674 2779 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1a43c596-cf6e-4ef0-aad5-55fc345d4d33-etc-cni-netd\") on node \"ci-4081-2-1-a-7dfc9bce8d\" DevicePath \"\""
Dec 13 14:16:32.062604 kubelet[2779]: I1213 14:16:32.061684 2779 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/1a43c596-cf6e-4ef0-aad5-55fc345d4d33-cni-path\") on node \"ci-4081-2-1-a-7dfc9bce8d\" DevicePath \"\""
Dec 13 14:16:32.062604 kubelet[2779]: I1213 14:16:32.061695 2779 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1a43c596-cf6e-4ef0-aad5-55fc345d4d33-cilium-config-path\") on node \"ci-4081-2-1-a-7dfc9bce8d\" DevicePath \"\""
Dec 13 14:16:32.062604 kubelet[2779]: I1213 14:16:32.061707 2779 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/1a43c596-cf6e-4ef0-aad5-55fc345d4d33-cilium-cgroup\") on node \"ci-4081-2-1-a-7dfc9bce8d\" DevicePath \"\""
Dec 13 14:16:32.386052 kubelet[2779]: I1213 14:16:32.385173 2779 scope.go:117] "RemoveContainer" containerID="de5b24625c8f6a721c39b48dac4eae97bac09aa964fce286db14b55f01affb7f"
Dec 13 14:16:32.390805 containerd[1486]: time="2024-12-13T14:16:32.390746558Z" level=info msg="RemoveContainer for \"de5b24625c8f6a721c39b48dac4eae97bac09aa964fce286db14b55f01affb7f\""
Dec 13 14:16:32.395968 systemd[1]: Removed slice kubepods-burstable-pod1a43c596_cf6e_4ef0_aad5_55fc345d4d33.slice - libcontainer container kubepods-burstable-pod1a43c596_cf6e_4ef0_aad5_55fc345d4d33.slice.
Dec 13 14:16:32.397041 systemd[1]: kubepods-burstable-pod1a43c596_cf6e_4ef0_aad5_55fc345d4d33.slice: Consumed 7.878s CPU time.
Dec 13 14:16:32.401504 containerd[1486]: time="2024-12-13T14:16:32.401454952Z" level=info msg="RemoveContainer for \"de5b24625c8f6a721c39b48dac4eae97bac09aa964fce286db14b55f01affb7f\" returns successfully"
Dec 13 14:16:32.402636 kubelet[2779]: I1213 14:16:32.402609 2779 scope.go:117] "RemoveContainer" containerID="0ec7f1e5dadd73d2796e8270f1881657a2f9a6406dcf7988e91326751b6d2c93"
Dec 13 14:16:32.404715 systemd[1]: Removed slice kubepods-besteffort-podcfd11f4c_33a7_49d9_973a_5796bd640759.slice - libcontainer container kubepods-besteffort-podcfd11f4c_33a7_49d9_973a_5796bd640759.slice.
Dec 13 14:16:32.412102 containerd[1486]: time="2024-12-13T14:16:32.411669555Z" level=info msg="RemoveContainer for \"0ec7f1e5dadd73d2796e8270f1881657a2f9a6406dcf7988e91326751b6d2c93\""
Dec 13 14:16:32.415211 containerd[1486]: time="2024-12-13T14:16:32.415165167Z" level=info msg="RemoveContainer for \"0ec7f1e5dadd73d2796e8270f1881657a2f9a6406dcf7988e91326751b6d2c93\" returns successfully"
Dec 13 14:16:32.415470 kubelet[2779]: I1213 14:16:32.415346 2779 scope.go:117] "RemoveContainer" containerID="176397b28b37119c1c78538aced9c56022e75d0622a26f1d56d0de8ff234f718"
Dec 13 14:16:32.419001 containerd[1486]: time="2024-12-13T14:16:32.417758797Z" level=info msg="RemoveContainer for \"176397b28b37119c1c78538aced9c56022e75d0622a26f1d56d0de8ff234f718\""
Dec 13 14:16:32.422597 containerd[1486]: time="2024-12-13T14:16:32.422225231Z" level=info msg="RemoveContainer for \"176397b28b37119c1c78538aced9c56022e75d0622a26f1d56d0de8ff234f718\" returns successfully"
Dec 13 14:16:32.422900 kubelet[2779]: I1213 14:16:32.422452 2779 scope.go:117] "RemoveContainer" containerID="687c6330c40d5fa770a72300c852c15b7ace2142f268b2d0648055cd04ed4698"
Dec 13 14:16:32.424297 containerd[1486]: time="2024-12-13T14:16:32.424267392Z" level=info msg="RemoveContainer for \"687c6330c40d5fa770a72300c852c15b7ace2142f268b2d0648055cd04ed4698\""
Dec 13 14:16:32.427302 containerd[1486]: time="2024-12-13T14:16:32.427273414Z" level=info msg="RemoveContainer for \"687c6330c40d5fa770a72300c852c15b7ace2142f268b2d0648055cd04ed4698\" returns successfully"
Dec 13 14:16:32.427701 kubelet[2779]: I1213 14:16:32.427629 2779 scope.go:117] "RemoveContainer" containerID="bb132cd0cdf12bb790b4d4c848e5de3e53b48d31b43339d4a125a0fad0cf54b9"
Dec 13 14:16:32.428879 containerd[1486]: time="2024-12-13T14:16:32.428788385Z" level=info msg="RemoveContainer for \"bb132cd0cdf12bb790b4d4c848e5de3e53b48d31b43339d4a125a0fad0cf54b9\""
Dec 13 14:16:32.432262 containerd[1486]: time="2024-12-13T14:16:32.432224078Z" level=info msg="RemoveContainer for \"bb132cd0cdf12bb790b4d4c848e5de3e53b48d31b43339d4a125a0fad0cf54b9\" returns successfully"
Dec 13 14:16:32.432519 kubelet[2779]: I1213 14:16:32.432494 2779 scope.go:117] "RemoveContainer" containerID="de5b24625c8f6a721c39b48dac4eae97bac09aa964fce286db14b55f01affb7f"
Dec 13 14:16:32.432742 containerd[1486]: time="2024-12-13T14:16:32.432673350Z" level=error msg="ContainerStatus for \"de5b24625c8f6a721c39b48dac4eae97bac09aa964fce286db14b55f01affb7f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"de5b24625c8f6a721c39b48dac4eae97bac09aa964fce286db14b55f01affb7f\": not found"
Dec 13 14:16:32.432904 kubelet[2779]: E1213 14:16:32.432845 2779 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"de5b24625c8f6a721c39b48dac4eae97bac09aa964fce286db14b55f01affb7f\": not found" containerID="de5b24625c8f6a721c39b48dac4eae97bac09aa964fce286db14b55f01affb7f"
Dec 13 14:16:32.433181 kubelet[2779]: I1213 14:16:32.432911 2779 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"de5b24625c8f6a721c39b48dac4eae97bac09aa964fce286db14b55f01affb7f"} err="failed to get container status \"de5b24625c8f6a721c39b48dac4eae97bac09aa964fce286db14b55f01affb7f\": rpc error: code = NotFound desc = an error occurred when try to find container \"de5b24625c8f6a721c39b48dac4eae97bac09aa964fce286db14b55f01affb7f\": not found"
Dec 13 14:16:32.433181 kubelet[2779]: I1213 14:16:32.433009 2779 scope.go:117] "RemoveContainer" containerID="0ec7f1e5dadd73d2796e8270f1881657a2f9a6406dcf7988e91326751b6d2c93"
Dec 13 14:16:32.433288 containerd[1486]: time="2024-12-13T14:16:32.433248739Z" level=error msg="ContainerStatus for \"0ec7f1e5dadd73d2796e8270f1881657a2f9a6406dcf7988e91326751b6d2c93\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"0ec7f1e5dadd73d2796e8270f1881657a2f9a6406dcf7988e91326751b6d2c93\": not found"
Dec 13 14:16:32.434361 kubelet[2779]: E1213 14:16:32.434244 2779 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"0ec7f1e5dadd73d2796e8270f1881657a2f9a6406dcf7988e91326751b6d2c93\": not found" containerID="0ec7f1e5dadd73d2796e8270f1881657a2f9a6406dcf7988e91326751b6d2c93"
Dec 13 14:16:32.434361 kubelet[2779]: I1213 14:16:32.434289 2779 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"0ec7f1e5dadd73d2796e8270f1881657a2f9a6406dcf7988e91326751b6d2c93"} err="failed to get container status \"0ec7f1e5dadd73d2796e8270f1881657a2f9a6406dcf7988e91326751b6d2c93\": rpc error: code = NotFound desc = an error occurred when try to find container \"0ec7f1e5dadd73d2796e8270f1881657a2f9a6406dcf7988e91326751b6d2c93\": not found"
Dec 13 14:16:32.434361 kubelet[2779]: I1213 14:16:32.434305 2779 scope.go:117] "RemoveContainer" containerID="176397b28b37119c1c78538aced9c56022e75d0622a26f1d56d0de8ff234f718"
Dec 13 14:16:32.434621 containerd[1486]: time="2024-12-13T14:16:32.434471115Z" level=error msg="ContainerStatus for \"176397b28b37119c1c78538aced9c56022e75d0622a26f1d56d0de8ff234f718\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"176397b28b37119c1c78538aced9c56022e75d0622a26f1d56d0de8ff234f718\": not found"
Dec 13 14:16:32.434837 kubelet[2779]: E1213 14:16:32.434724 2779 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"176397b28b37119c1c78538aced9c56022e75d0622a26f1d56d0de8ff234f718\": not found" containerID="176397b28b37119c1c78538aced9c56022e75d0622a26f1d56d0de8ff234f718"
Dec 13 14:16:32.434837 kubelet[2779]: I1213 14:16:32.434760 2779 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"176397b28b37119c1c78538aced9c56022e75d0622a26f1d56d0de8ff234f718"} err="failed to get container status \"176397b28b37119c1c78538aced9c56022e75d0622a26f1d56d0de8ff234f718\": rpc error: code = NotFound desc = an error occurred when try to find container \"176397b28b37119c1c78538aced9c56022e75d0622a26f1d56d0de8ff234f718\": not found"
Dec 13 14:16:32.434837 kubelet[2779]: I1213 14:16:32.434776 2779 scope.go:117] "RemoveContainer" containerID="687c6330c40d5fa770a72300c852c15b7ace2142f268b2d0648055cd04ed4698"
Dec 13 14:16:32.435363 containerd[1486]: time="2024-12-13T14:16:32.435316019Z" level=error msg="ContainerStatus for \"687c6330c40d5fa770a72300c852c15b7ace2142f268b2d0648055cd04ed4698\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"687c6330c40d5fa770a72300c852c15b7ace2142f268b2d0648055cd04ed4698\": not found"
Dec 13 14:16:32.435643 kubelet[2779]: E1213 14:16:32.435432 2779 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"687c6330c40d5fa770a72300c852c15b7ace2142f268b2d0648055cd04ed4698\": not found" containerID="687c6330c40d5fa770a72300c852c15b7ace2142f268b2d0648055cd04ed4698"
Dec 13 14:16:32.435643 kubelet[2779]: I1213 14:16:32.435450 2779 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"687c6330c40d5fa770a72300c852c15b7ace2142f268b2d0648055cd04ed4698"} err="failed to get container status \"687c6330c40d5fa770a72300c852c15b7ace2142f268b2d0648055cd04ed4698\": rpc error: code = NotFound desc = an error occurred when try to find container \"687c6330c40d5fa770a72300c852c15b7ace2142f268b2d0648055cd04ed4698\": not found"
Dec 13 14:16:32.435643 kubelet[2779]: I1213 14:16:32.435466 2779 scope.go:117] "RemoveContainer" containerID="bb132cd0cdf12bb790b4d4c848e5de3e53b48d31b43339d4a125a0fad0cf54b9"
Dec 13 14:16:32.435935 containerd[1486]: time="2024-12-13T14:16:32.435838449Z" level=error msg="ContainerStatus for \"bb132cd0cdf12bb790b4d4c848e5de3e53b48d31b43339d4a125a0fad0cf54b9\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"bb132cd0cdf12bb790b4d4c848e5de3e53b48d31b43339d4a125a0fad0cf54b9\": not found"
Dec 13 14:16:32.436022 kubelet[2779]: E1213 14:16:32.435998 2779 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"bb132cd0cdf12bb790b4d4c848e5de3e53b48d31b43339d4a125a0fad0cf54b9\": not found" containerID="bb132cd0cdf12bb790b4d4c848e5de3e53b48d31b43339d4a125a0fad0cf54b9"
Dec 13 14:16:32.436066 kubelet[2779]: I1213 14:16:32.436028 2779 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"bb132cd0cdf12bb790b4d4c848e5de3e53b48d31b43339d4a125a0fad0cf54b9"} err="failed to get container status \"bb132cd0cdf12bb790b4d4c848e5de3e53b48d31b43339d4a125a0fad0cf54b9\": rpc error: code = NotFound desc = an error occurred when try to find container \"bb132cd0cdf12bb790b4d4c848e5de3e53b48d31b43339d4a125a0fad0cf54b9\": not found"
Dec 13 14:16:32.436066 kubelet[2779]: I1213 14:16:32.436044 2779 scope.go:117] "RemoveContainer" containerID="18391c66700f0495ef23e982355d06a17a1f7e04a77ab73a34c1ba58a5aef95d"
Dec 13 14:16:32.437223 containerd[1486]: time="2024-12-13T14:16:32.437177063Z" level=info msg="RemoveContainer for \"18391c66700f0495ef23e982355d06a17a1f7e04a77ab73a34c1ba58a5aef95d\""
Dec 13 14:16:32.439823 containerd[1486]: time="2024-12-13T14:16:32.439783413Z" level=info msg="RemoveContainer for \"18391c66700f0495ef23e982355d06a17a1f7e04a77ab73a34c1ba58a5aef95d\" returns successfully"
Dec 13 14:16:32.440279 kubelet[2779]: I1213 14:16:32.440015 2779 scope.go:117] "RemoveContainer" containerID="18391c66700f0495ef23e982355d06a17a1f7e04a77ab73a34c1ba58a5aef95d"
Dec 13 14:16:32.440353 containerd[1486]: time="2024-12-13T14:16:32.440215964Z" level=error msg="ContainerStatus for \"18391c66700f0495ef23e982355d06a17a1f7e04a77ab73a34c1ba58a5aef95d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"18391c66700f0495ef23e982355d06a17a1f7e04a77ab73a34c1ba58a5aef95d\": not found"
Dec 13 14:16:32.440385 kubelet[2779]: E1213 14:16:32.440319 2779 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"18391c66700f0495ef23e982355d06a17a1f7e04a77ab73a34c1ba58a5aef95d\": not found" containerID="18391c66700f0495ef23e982355d06a17a1f7e04a77ab73a34c1ba58a5aef95d"
Dec 13 14:16:32.440385 kubelet[2779]: I1213 14:16:32.440342 2779 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"18391c66700f0495ef23e982355d06a17a1f7e04a77ab73a34c1ba58a5aef95d"} err="failed to get container status \"18391c66700f0495ef23e982355d06a17a1f7e04a77ab73a34c1ba58a5aef95d\": rpc error: code = NotFound desc = an error occurred when try to find container \"18391c66700f0495ef23e982355d06a17a1f7e04a77ab73a34c1ba58a5aef95d\": not found"
Dec 13 14:16:32.690294 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c84a7084f61943129e5260f5e2ab388cb42a09403b967e1d0a581a809763e09c-rootfs.mount: Deactivated successfully.
Dec 13 14:16:32.690496 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c84a7084f61943129e5260f5e2ab388cb42a09403b967e1d0a581a809763e09c-shm.mount: Deactivated successfully.
Dec 13 14:16:32.690675 systemd[1]: var-lib-kubelet-pods-cfd11f4c\x2d33a7\x2d49d9\x2d973a\x2d5796bd640759-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dj6z6v.mount: Deactivated successfully.
Dec 13 14:16:32.690790 systemd[1]: var-lib-kubelet-pods-1a43c596\x2dcf6e\x2d4ef0\x2daad5\x2d55fc345d4d33-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d2tgdl.mount: Deactivated successfully.
Dec 13 14:16:32.692155 systemd[1]: var-lib-kubelet-pods-1a43c596\x2dcf6e\x2d4ef0\x2daad5\x2d55fc345d4d33-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Dec 13 14:16:32.692316 systemd[1]: var-lib-kubelet-pods-1a43c596\x2dcf6e\x2d4ef0\x2daad5\x2d55fc345d4d33-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Dec 13 14:16:33.422625 kubelet[2779]: I1213 14:16:33.421117 2779 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1a43c596-cf6e-4ef0-aad5-55fc345d4d33" path="/var/lib/kubelet/pods/1a43c596-cf6e-4ef0-aad5-55fc345d4d33/volumes"
Dec 13 14:16:33.422625 kubelet[2779]: I1213 14:16:33.422111 2779 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cfd11f4c-33a7-49d9-973a-5796bd640759" path="/var/lib/kubelet/pods/cfd11f4c-33a7-49d9-973a-5796bd640759/volumes"
Dec 13 14:16:33.764215 sshd[4366]: pam_unix(sshd:session): session closed for user core
Dec 13 14:16:33.769512 systemd[1]: sshd@21-168.119.51.76:22-139.178.68.195:47558.service: Deactivated successfully.
Dec 13 14:16:33.772756 systemd[1]: session-22.scope: Deactivated successfully.
Dec 13 14:16:33.774943 systemd[1]: session-22.scope: Consumed 1.281s CPU time.
Dec 13 14:16:33.777284 systemd-logind[1463]: Session 22 logged out. Waiting for processes to exit.
Dec 13 14:16:33.778935 systemd-logind[1463]: Removed session 22.
Dec 13 14:16:33.940669 systemd[1]: Started sshd@22-168.119.51.76:22-139.178.68.195:47560.service - OpenSSH per-connection server daemon (139.178.68.195:47560).
Dec 13 14:16:34.928853 sshd[4530]: Accepted publickey for core from 139.178.68.195 port 47560 ssh2: RSA SHA256:xQ9dEjBaxmyM6gqQv69t+Ql8O4jhwD/hzklEIAjQOiI
Dec 13 14:16:34.930835 sshd[4530]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 14:16:34.937566 systemd-logind[1463]: New session 23 of user core.
Dec 13 14:16:34.944057 systemd[1]: Started session-23.scope - Session 23 of User core.
Dec 13 14:16:36.143764 kubelet[2779]: I1213 14:16:36.143572 2779 topology_manager.go:215] "Topology Admit Handler" podUID="eff9cc09-6647-4510-8196-5fed6c4a7949" podNamespace="kube-system" podName="cilium-xfbnv"
Dec 13 14:16:36.144486 kubelet[2779]: E1213 14:16:36.144047 2779 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="1a43c596-cf6e-4ef0-aad5-55fc345d4d33" containerName="mount-bpf-fs"
Dec 13 14:16:36.144486 kubelet[2779]: E1213 14:16:36.144066 2779 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="1a43c596-cf6e-4ef0-aad5-55fc345d4d33" containerName="cilium-agent"
Dec 13 14:16:36.144486 kubelet[2779]: E1213 14:16:36.144073 2779 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="1a43c596-cf6e-4ef0-aad5-55fc345d4d33" containerName="mount-cgroup"
Dec 13 14:16:36.144486 kubelet[2779]: E1213 14:16:36.144079 2779 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="1a43c596-cf6e-4ef0-aad5-55fc345d4d33" containerName="apply-sysctl-overwrites"
Dec 13 14:16:36.144486 kubelet[2779]: E1213 14:16:36.144085 2779 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="1a43c596-cf6e-4ef0-aad5-55fc345d4d33" containerName="clean-cilium-state"
Dec 13 14:16:36.144486 kubelet[2779]: E1213 14:16:36.144091 2779 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="cfd11f4c-33a7-49d9-973a-5796bd640759" containerName="cilium-operator"
Dec 13 14:16:36.147277 kubelet[2779]: I1213 14:16:36.144855 2779 memory_manager.go:354] "RemoveStaleState removing state" podUID="cfd11f4c-33a7-49d9-973a-5796bd640759" containerName="cilium-operator"
Dec 13 14:16:36.147277 kubelet[2779]: I1213 14:16:36.144898 2779 memory_manager.go:354] "RemoveStaleState removing state" podUID="1a43c596-cf6e-4ef0-aad5-55fc345d4d33" containerName="cilium-agent"
Dec 13 14:16:36.153622 systemd[1]: Created slice kubepods-burstable-podeff9cc09_6647_4510_8196_5fed6c4a7949.slice - libcontainer container kubepods-burstable-podeff9cc09_6647_4510_8196_5fed6c4a7949.slice.
Dec 13 14:16:36.192046 kubelet[2779]: I1213 14:16:36.191560 2779 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/eff9cc09-6647-4510-8196-5fed6c4a7949-xtables-lock\") pod \"cilium-xfbnv\" (UID: \"eff9cc09-6647-4510-8196-5fed6c4a7949\") " pod="kube-system/cilium-xfbnv"
Dec 13 14:16:36.192046 kubelet[2779]: I1213 14:16:36.191602 2779 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x6lkw\" (UniqueName: \"kubernetes.io/projected/eff9cc09-6647-4510-8196-5fed6c4a7949-kube-api-access-x6lkw\") pod \"cilium-xfbnv\" (UID: \"eff9cc09-6647-4510-8196-5fed6c4a7949\") " pod="kube-system/cilium-xfbnv"
Dec 13 14:16:36.192046 kubelet[2779]: I1213 14:16:36.191624 2779 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/eff9cc09-6647-4510-8196-5fed6c4a7949-hostproc\") pod \"cilium-xfbnv\" (UID: \"eff9cc09-6647-4510-8196-5fed6c4a7949\") " pod="kube-system/cilium-xfbnv"
Dec 13 14:16:36.192046 kubelet[2779]: I1213 14:16:36.191643 2779 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/eff9cc09-6647-4510-8196-5fed6c4a7949-cilium-cgroup\") pod \"cilium-xfbnv\" (UID: \"eff9cc09-6647-4510-8196-5fed6c4a7949\") " pod="kube-system/cilium-xfbnv"
Dec 13 14:16:36.192046 kubelet[2779]: I1213 14:16:36.191659 2779 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/eff9cc09-6647-4510-8196-5fed6c4a7949-clustermesh-secrets\") pod \"cilium-xfbnv\" (UID: \"eff9cc09-6647-4510-8196-5fed6c4a7949\") " pod="kube-system/cilium-xfbnv"
Dec 13 14:16:36.192046 kubelet[2779]: I1213 14:16:36.191685 2779 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/eff9cc09-6647-4510-8196-5fed6c4a7949-cilium-ipsec-secrets\") pod \"cilium-xfbnv\" (UID: \"eff9cc09-6647-4510-8196-5fed6c4a7949\") " pod="kube-system/cilium-xfbnv"
Dec 13 14:16:36.192490 kubelet[2779]: I1213 14:16:36.191699 2779 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/eff9cc09-6647-4510-8196-5fed6c4a7949-hubble-tls\") pod \"cilium-xfbnv\" (UID: \"eff9cc09-6647-4510-8196-5fed6c4a7949\") " pod="kube-system/cilium-xfbnv"
Dec 13 14:16:36.192490 kubelet[2779]: I1213 14:16:36.191713 2779 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/eff9cc09-6647-4510-8196-5fed6c4a7949-etc-cni-netd\") pod \"cilium-xfbnv\" (UID: \"eff9cc09-6647-4510-8196-5fed6c4a7949\") " pod="kube-system/cilium-xfbnv"
Dec 13 14:16:36.192490 kubelet[2779]: I1213 14:16:36.191734 2779 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/eff9cc09-6647-4510-8196-5fed6c4a7949-host-proc-sys-net\") pod \"cilium-xfbnv\" (UID: \"eff9cc09-6647-4510-8196-5fed6c4a7949\") " pod="kube-system/cilium-xfbnv"
Dec 13 14:16:36.192490 kubelet[2779]: I1213 14:16:36.191753 2779 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/eff9cc09-6647-4510-8196-5fed6c4a7949-cilium-config-path\") pod \"cilium-xfbnv\" (UID: \"eff9cc09-6647-4510-8196-5fed6c4a7949\") " pod="kube-system/cilium-xfbnv"
Dec 13 14:16:36.192490 kubelet[2779]: I1213 14:16:36.191767 2779 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/eff9cc09-6647-4510-8196-5fed6c4a7949-cni-path\") pod \"cilium-xfbnv\" (UID: \"eff9cc09-6647-4510-8196-5fed6c4a7949\") " pod="kube-system/cilium-xfbnv"
Dec 13 14:16:36.192490 kubelet[2779]: I1213 14:16:36.191784 2779 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/eff9cc09-6647-4510-8196-5fed6c4a7949-bpf-maps\") pod \"cilium-xfbnv\" (UID: \"eff9cc09-6647-4510-8196-5fed6c4a7949\") " pod="kube-system/cilium-xfbnv"
Dec 13 14:16:36.192619 kubelet[2779]: I1213 14:16:36.191799 2779 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/eff9cc09-6647-4510-8196-5fed6c4a7949-host-proc-sys-kernel\") pod \"cilium-xfbnv\" (UID: \"eff9cc09-6647-4510-8196-5fed6c4a7949\") " pod="kube-system/cilium-xfbnv"
Dec 13 14:16:36.192619 kubelet[2779]: I1213 14:16:36.191815 2779 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/eff9cc09-6647-4510-8196-5fed6c4a7949-cilium-run\") pod \"cilium-xfbnv\" (UID: \"eff9cc09-6647-4510-8196-5fed6c4a7949\") " pod="kube-system/cilium-xfbnv"
Dec 13 14:16:36.192619 kubelet[2779]: I1213 14:16:36.191829 2779 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/eff9cc09-6647-4510-8196-5fed6c4a7949-lib-modules\") pod \"cilium-xfbnv\" (UID: \"eff9cc09-6647-4510-8196-5fed6c4a7949\") " pod="kube-system/cilium-xfbnv"
Dec 13 14:16:36.322110 sshd[4530]: pam_unix(sshd:session): session closed for user core
Dec 13 14:16:36.328102 systemd[1]: sshd@22-168.119.51.76:22-139.178.68.195:47560.service: Deactivated successfully.
Dec 13 14:16:36.331286 systemd[1]: session-23.scope: Deactivated successfully.
Dec 13 14:16:36.332715 systemd-logind[1463]: Session 23 logged out. Waiting for processes to exit.
Dec 13 14:16:36.334650 systemd-logind[1463]: Removed session 23.
Dec 13 14:16:36.459414 containerd[1486]: time="2024-12-13T14:16:36.459357535Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-xfbnv,Uid:eff9cc09-6647-4510-8196-5fed6c4a7949,Namespace:kube-system,Attempt:0,}"
Dec 13 14:16:36.491089 containerd[1486]: time="2024-12-13T14:16:36.490815960Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 14:16:36.491089 containerd[1486]: time="2024-12-13T14:16:36.490966358Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 14:16:36.491089 containerd[1486]: time="2024-12-13T14:16:36.490984357Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 14:16:36.492355 containerd[1486]: time="2024-12-13T14:16:36.491929660Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 14:16:36.493523 systemd[1]: Started sshd@23-168.119.51.76:22-139.178.68.195:41126.service - OpenSSH per-connection server daemon (139.178.68.195:41126).
Dec 13 14:16:36.510033 systemd[1]: Started cri-containerd-6f5a7418902e7a9533dc224971ac93512dba0d535fbd208ff15e8b3d3bc3f48e.scope - libcontainer container 6f5a7418902e7a9533dc224971ac93512dba0d535fbd208ff15e8b3d3bc3f48e.
Dec 13 14:16:36.533317 containerd[1486]: time="2024-12-13T14:16:36.533278545Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-xfbnv,Uid:eff9cc09-6647-4510-8196-5fed6c4a7949,Namespace:kube-system,Attempt:0,} returns sandbox id \"6f5a7418902e7a9533dc224971ac93512dba0d535fbd208ff15e8b3d3bc3f48e\""
Dec 13 14:16:36.538143 containerd[1486]: time="2024-12-13T14:16:36.538104417Z" level=info msg="CreateContainer within sandbox \"6f5a7418902e7a9533dc224971ac93512dba0d535fbd208ff15e8b3d3bc3f48e\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Dec 13 14:16:36.548021 containerd[1486]: time="2024-12-13T14:16:36.547941238Z" level=info msg="CreateContainer within sandbox \"6f5a7418902e7a9533dc224971ac93512dba0d535fbd208ff15e8b3d3bc3f48e\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"b800a17e9714ea6d56866b8dba4d7c43cf3e2c5706726939196c9d6668a4b91c\""
Dec 13 14:16:36.550587 containerd[1486]: time="2024-12-13T14:16:36.549883962Z" level=info msg="StartContainer for \"b800a17e9714ea6d56866b8dba4d7c43cf3e2c5706726939196c9d6668a4b91c\""
Dec 13 14:16:36.576054 systemd[1]: Started cri-containerd-b800a17e9714ea6d56866b8dba4d7c43cf3e2c5706726939196c9d6668a4b91c.scope - libcontainer container b800a17e9714ea6d56866b8dba4d7c43cf3e2c5706726939196c9d6668a4b91c.
Dec 13 14:16:36.600108 kubelet[2779]: E1213 14:16:36.600059 2779 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Dec 13 14:16:36.605296 containerd[1486]: time="2024-12-13T14:16:36.605157393Z" level=info msg="StartContainer for \"b800a17e9714ea6d56866b8dba4d7c43cf3e2c5706726939196c9d6668a4b91c\" returns successfully"
Dec 13 14:16:36.651313 systemd[1]: cri-containerd-b800a17e9714ea6d56866b8dba4d7c43cf3e2c5706726939196c9d6668a4b91c.scope: Deactivated successfully.
Dec 13 14:16:36.683980 containerd[1486]: time="2024-12-13T14:16:36.683895356Z" level=info msg="shim disconnected" id=b800a17e9714ea6d56866b8dba4d7c43cf3e2c5706726939196c9d6668a4b91c namespace=k8s.io
Dec 13 14:16:36.684357 containerd[1486]: time="2024-12-13T14:16:36.684323468Z" level=warning msg="cleaning up after shim disconnected" id=b800a17e9714ea6d56866b8dba4d7c43cf3e2c5706726939196c9d6668a4b91c namespace=k8s.io
Dec 13 14:16:36.684476 containerd[1486]: time="2024-12-13T14:16:36.684447626Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 13 14:16:37.420828 containerd[1486]: time="2024-12-13T14:16:37.420203142Z" level=info msg="CreateContainer within sandbox \"6f5a7418902e7a9533dc224971ac93512dba0d535fbd208ff15e8b3d3bc3f48e\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Dec 13 14:16:37.434690 containerd[1486]: time="2024-12-13T14:16:37.434642202Z" level=info msg="CreateContainer within sandbox \"6f5a7418902e7a9533dc224971ac93512dba0d535fbd208ff15e8b3d3bc3f48e\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"f29e23dc5e814f347890a2ea6ba98657f4cb6c7085d2c110eca44357765ba9d6\""
Dec 13 14:16:37.435452 containerd[1486]: time="2024-12-13T14:16:37.435329710Z" level=info msg="StartContainer for \"f29e23dc5e814f347890a2ea6ba98657f4cb6c7085d2c110eca44357765ba9d6\""
Dec 13 14:16:37.473038 systemd[1]: Started cri-containerd-f29e23dc5e814f347890a2ea6ba98657f4cb6c7085d2c110eca44357765ba9d6.scope - libcontainer container f29e23dc5e814f347890a2ea6ba98657f4cb6c7085d2c110eca44357765ba9d6.
Dec 13 14:16:37.479630 sshd[4560]: Accepted publickey for core from 139.178.68.195 port 41126 ssh2: RSA SHA256:xQ9dEjBaxmyM6gqQv69t+Ql8O4jhwD/hzklEIAjQOiI
Dec 13 14:16:37.481200 sshd[4560]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 14:16:37.490376 systemd-logind[1463]: New session 24 of user core.
Dec 13 14:16:37.495061 systemd[1]: Started session-24.scope - Session 24 of User core.
Dec 13 14:16:37.508085 containerd[1486]: time="2024-12-13T14:16:37.507255855Z" level=info msg="StartContainer for \"f29e23dc5e814f347890a2ea6ba98657f4cb6c7085d2c110eca44357765ba9d6\" returns successfully"
Dec 13 14:16:37.560490 systemd[1]: cri-containerd-f29e23dc5e814f347890a2ea6ba98657f4cb6c7085d2c110eca44357765ba9d6.scope: Deactivated successfully.
Dec 13 14:16:37.584309 containerd[1486]: time="2024-12-13T14:16:37.584191550Z" level=info msg="shim disconnected" id=f29e23dc5e814f347890a2ea6ba98657f4cb6c7085d2c110eca44357765ba9d6 namespace=k8s.io
Dec 13 14:16:37.584309 containerd[1486]: time="2024-12-13T14:16:37.584258829Z" level=warning msg="cleaning up after shim disconnected" id=f29e23dc5e814f347890a2ea6ba98657f4cb6c7085d2c110eca44357765ba9d6 namespace=k8s.io
Dec 13 14:16:37.584309 containerd[1486]: time="2024-12-13T14:16:37.584269389Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 13 14:16:38.156608 sshd[4560]: pam_unix(sshd:session): session closed for user core
Dec 13 14:16:38.161525 systemd[1]: sshd@23-168.119.51.76:22-139.178.68.195:41126.service: Deactivated successfully.
Dec 13 14:16:38.164698 systemd[1]: session-24.scope: Deactivated successfully.
Dec 13 14:16:38.167957 systemd-logind[1463]: Session 24 logged out. Waiting for processes to exit.
Dec 13 14:16:38.169103 systemd-logind[1463]: Removed session 24.
Dec 13 14:16:38.303976 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f29e23dc5e814f347890a2ea6ba98657f4cb6c7085d2c110eca44357765ba9d6-rootfs.mount: Deactivated successfully.
Dec 13 14:16:38.333232 systemd[1]: Started sshd@24-168.119.51.76:22-139.178.68.195:41142.service - OpenSSH per-connection server daemon (139.178.68.195:41142).
Dec 13 14:16:38.362052 kubelet[2779]: I1213 14:16:38.361594 2779 setters.go:580] "Node became not ready" node="ci-4081-2-1-a-7dfc9bce8d" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-12-13T14:16:38Z","lastTransitionTime":"2024-12-13T14:16:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Dec 13 14:16:38.431146 containerd[1486]: time="2024-12-13T14:16:38.430485824Z" level=info msg="CreateContainer within sandbox \"6f5a7418902e7a9533dc224971ac93512dba0d535fbd208ff15e8b3d3bc3f48e\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Dec 13 14:16:38.453464 containerd[1486]: time="2024-12-13T14:16:38.453388497Z" level=info msg="CreateContainer within sandbox \"6f5a7418902e7a9533dc224971ac93512dba0d535fbd208ff15e8b3d3bc3f48e\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"2cfdfe136b15ac3ba033977fe7b258d53bc249c01de1d9c82aa77783756afd41\""
Dec 13 14:16:38.454187 containerd[1486]: time="2024-12-13T14:16:38.454146124Z" level=info msg="StartContainer for \"2cfdfe136b15ac3ba033977fe7b258d53bc249c01de1d9c82aa77783756afd41\""
Dec 13 14:16:38.499195 systemd[1]: Started cri-containerd-2cfdfe136b15ac3ba033977fe7b258d53bc249c01de1d9c82aa77783756afd41.scope - libcontainer container 2cfdfe136b15ac3ba033977fe7b258d53bc249c01de1d9c82aa77783756afd41.
Dec 13 14:16:38.535167 systemd[1]: cri-containerd-2cfdfe136b15ac3ba033977fe7b258d53bc249c01de1d9c82aa77783756afd41.scope: Deactivated successfully.
Dec 13 14:16:38.538634 containerd[1486]: time="2024-12-13T14:16:38.538574545Z" level=info msg="StartContainer for \"2cfdfe136b15ac3ba033977fe7b258d53bc249c01de1d9c82aa77783756afd41\" returns successfully"
Dec 13 14:16:38.568725 containerd[1486]: time="2024-12-13T14:16:38.568521854Z" level=info msg="shim disconnected" id=2cfdfe136b15ac3ba033977fe7b258d53bc249c01de1d9c82aa77783756afd41 namespace=k8s.io
Dec 13 14:16:38.568725 containerd[1486]: time="2024-12-13T14:16:38.568579213Z" level=warning msg="cleaning up after shim disconnected" id=2cfdfe136b15ac3ba033977fe7b258d53bc249c01de1d9c82aa77783756afd41 namespace=k8s.io
Dec 13 14:16:38.568725 containerd[1486]: time="2024-12-13T14:16:38.568586933Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 13 14:16:39.302644 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2cfdfe136b15ac3ba033977fe7b258d53bc249c01de1d9c82aa77783756afd41-rootfs.mount: Deactivated successfully.
Dec 13 14:16:39.329179 sshd[4719]: Accepted publickey for core from 139.178.68.195 port 41142 ssh2: RSA SHA256:xQ9dEjBaxmyM6gqQv69t+Ql8O4jhwD/hzklEIAjQOiI
Dec 13 14:16:39.331259 sshd[4719]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 14:16:39.337063 systemd-logind[1463]: New session 25 of user core.
Dec 13 14:16:39.344033 systemd[1]: Started session-25.scope - Session 25 of User core.
Dec 13 14:16:39.434429 containerd[1486]: time="2024-12-13T14:16:39.434375632Z" level=info msg="CreateContainer within sandbox \"6f5a7418902e7a9533dc224971ac93512dba0d535fbd208ff15e8b3d3bc3f48e\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Dec 13 14:16:39.452066 containerd[1486]: time="2024-12-13T14:16:39.452014283Z" level=info msg="CreateContainer within sandbox \"6f5a7418902e7a9533dc224971ac93512dba0d535fbd208ff15e8b3d3bc3f48e\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"6790d1069ef20f813d224e3be84327238df6080547cd47f40695cd230aaf9ffd\""
Dec 13 14:16:39.452951 containerd[1486]: time="2024-12-13T14:16:39.452667232Z" level=info msg="StartContainer for \"6790d1069ef20f813d224e3be84327238df6080547cd47f40695cd230aaf9ffd\""
Dec 13 14:16:39.490029 systemd[1]: Started cri-containerd-6790d1069ef20f813d224e3be84327238df6080547cd47f40695cd230aaf9ffd.scope - libcontainer container 6790d1069ef20f813d224e3be84327238df6080547cd47f40695cd230aaf9ffd.
Dec 13 14:16:39.516694 systemd[1]: cri-containerd-6790d1069ef20f813d224e3be84327238df6080547cd47f40695cd230aaf9ffd.scope: Deactivated successfully.
Dec 13 14:16:39.519635 containerd[1486]: time="2024-12-13T14:16:39.518667477Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podeff9cc09_6647_4510_8196_5fed6c4a7949.slice/cri-containerd-6790d1069ef20f813d224e3be84327238df6080547cd47f40695cd230aaf9ffd.scope/memory.events\": no such file or directory"
Dec 13 14:16:39.521807 containerd[1486]: time="2024-12-13T14:16:39.521677264Z" level=info msg="StartContainer for \"6790d1069ef20f813d224e3be84327238df6080547cd47f40695cd230aaf9ffd\" returns successfully"
Dec 13 14:16:39.543795 containerd[1486]: time="2024-12-13T14:16:39.543731438Z" level=info msg="shim disconnected" id=6790d1069ef20f813d224e3be84327238df6080547cd47f40695cd230aaf9ffd namespace=k8s.io
Dec 13 14:16:39.543795 containerd[1486]: time="2024-12-13T14:16:39.543789637Z" level=warning msg="cleaning up after shim disconnected" id=6790d1069ef20f813d224e3be84327238df6080547cd47f40695cd230aaf9ffd namespace=k8s.io
Dec 13 14:16:39.543795 containerd[1486]: time="2024-12-13T14:16:39.543800077Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 13 14:16:40.302164 systemd[1]: run-containerd-runc-k8s.io-6790d1069ef20f813d224e3be84327238df6080547cd47f40695cd230aaf9ffd-runc.l8ZoCq.mount: Deactivated successfully.
Dec 13 14:16:40.302268 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6790d1069ef20f813d224e3be84327238df6080547cd47f40695cd230aaf9ffd-rootfs.mount: Deactivated successfully.
Dec 13 14:16:40.445431 containerd[1486]: time="2024-12-13T14:16:40.444561740Z" level=info msg="CreateContainer within sandbox \"6f5a7418902e7a9533dc224971ac93512dba0d535fbd208ff15e8b3d3bc3f48e\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Dec 13 14:16:40.465053 containerd[1486]: time="2024-12-13T14:16:40.465003748Z" level=info msg="CreateContainer within sandbox \"6f5a7418902e7a9533dc224971ac93512dba0d535fbd208ff15e8b3d3bc3f48e\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"c3eb12a616ad762bd6fbe92c6cf5f06db9d6593379f663f97b262262bf844728\""
Dec 13 14:16:40.466835 containerd[1486]: time="2024-12-13T14:16:40.466807996Z" level=info msg="StartContainer for \"c3eb12a616ad762bd6fbe92c6cf5f06db9d6593379f663f97b262262bf844728\""
Dec 13 14:16:40.505020 systemd[1]: Started cri-containerd-c3eb12a616ad762bd6fbe92c6cf5f06db9d6593379f663f97b262262bf844728.scope - libcontainer container c3eb12a616ad762bd6fbe92c6cf5f06db9d6593379f663f97b262262bf844728.
Dec 13 14:16:40.539728 containerd[1486]: time="2024-12-13T14:16:40.539477142Z" level=info msg="StartContainer for \"c3eb12a616ad762bd6fbe92c6cf5f06db9d6593379f663f97b262262bf844728\" returns successfully"
Dec 13 14:16:40.834934 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce))
Dec 13 14:16:43.692089 systemd-networkd[1376]: lxc_health: Link UP
Dec 13 14:16:43.700291 systemd-networkd[1376]: lxc_health: Gained carrier
Dec 13 14:16:44.143758 systemd[1]: run-containerd-runc-k8s.io-c3eb12a616ad762bd6fbe92c6cf5f06db9d6593379f663f97b262262bf844728-runc.3YPxhY.mount: Deactivated successfully.
Dec 13 14:16:44.478515 kubelet[2779]: I1213 14:16:44.478449 2779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-xfbnv" podStartSLOduration=8.47843453 podStartE2EDuration="8.47843453s" podCreationTimestamp="2024-12-13 14:16:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 14:16:41.484268757 +0000 UTC m=+370.178806069" watchObservedRunningTime="2024-12-13 14:16:44.47843453 +0000 UTC m=+373.172971882"
Dec 13 14:16:44.873002 systemd-networkd[1376]: lxc_health: Gained IPv6LL
Dec 13 14:16:50.685138 kubelet[2779]: E1213 14:16:50.685097 2779 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:34936->127.0.0.1:33155: write tcp 127.0.0.1:34936->127.0.0.1:33155: write: broken pipe
Dec 13 14:16:50.847028 sshd[4719]: pam_unix(sshd:session): session closed for user core
Dec 13 14:16:50.852439 systemd[1]: sshd@24-168.119.51.76:22-139.178.68.195:41142.service: Deactivated successfully.
Dec 13 14:16:50.856108 systemd[1]: session-25.scope: Deactivated successfully.
Dec 13 14:16:50.861706 systemd-logind[1463]: Session 25 logged out. Waiting for processes to exit.
Dec 13 14:16:50.863963 systemd-logind[1463]: Removed session 25.
Dec 13 14:17:06.701226 kubelet[2779]: E1213 14:17:06.701056 2779 controller.go:195] "Failed to update lease" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:38830->10.0.0.2:2379: read: connection timed out"
Dec 13 14:17:06.708515 systemd[1]: cri-containerd-6616603ac89c6a54f5d272c5f0c3e13b61230967729c902d7d759fbff1681af6.scope: Deactivated successfully.
Dec 13 14:17:06.709513 systemd[1]: cri-containerd-6616603ac89c6a54f5d272c5f0c3e13b61230967729c902d7d759fbff1681af6.scope: Consumed 3.069s CPU time, 17.6M memory peak, 0B memory swap peak.
Dec 13 14:17:06.739045 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6616603ac89c6a54f5d272c5f0c3e13b61230967729c902d7d759fbff1681af6-rootfs.mount: Deactivated successfully.
Dec 13 14:17:06.744652 containerd[1486]: time="2024-12-13T14:17:06.744585119Z" level=info msg="shim disconnected" id=6616603ac89c6a54f5d272c5f0c3e13b61230967729c902d7d759fbff1681af6 namespace=k8s.io
Dec 13 14:17:06.744652 containerd[1486]: time="2024-12-13T14:17:06.744655838Z" level=warning msg="cleaning up after shim disconnected" id=6616603ac89c6a54f5d272c5f0c3e13b61230967729c902d7d759fbff1681af6 namespace=k8s.io
Dec 13 14:17:06.745228 containerd[1486]: time="2024-12-13T14:17:06.744666918Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 13 14:17:07.524594 kubelet[2779]: I1213 14:17:07.524356 2779 scope.go:117] "RemoveContainer" containerID="6616603ac89c6a54f5d272c5f0c3e13b61230967729c902d7d759fbff1681af6"
Dec 13 14:17:07.529943 containerd[1486]: time="2024-12-13T14:17:07.529689965Z" level=info msg="CreateContainer within sandbox \"10ed1f064b7d0d4202bdf8f0221d9d291ac305ed2712be2e500944a6caf4c385\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Dec 13 14:17:07.544762 containerd[1486]: time="2024-12-13T14:17:07.544718756Z" level=info msg="CreateContainer within sandbox \"10ed1f064b7d0d4202bdf8f0221d9d291ac305ed2712be2e500944a6caf4c385\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"312df8aeb655bcceca329ede07ef2b600211d3b0ca2dabc8adc710e892b1c949\""
Dec 13 14:17:07.545260 containerd[1486]: time="2024-12-13T14:17:07.545237990Z" level=info msg="StartContainer for \"312df8aeb655bcceca329ede07ef2b600211d3b0ca2dabc8adc710e892b1c949\""
Dec 13 14:17:07.581120 systemd[1]: Started cri-containerd-312df8aeb655bcceca329ede07ef2b600211d3b0ca2dabc8adc710e892b1c949.scope - libcontainer container 312df8aeb655bcceca329ede07ef2b600211d3b0ca2dabc8adc710e892b1c949.
Dec 13 14:17:07.619693 containerd[1486]: time="2024-12-13T14:17:07.619634190Z" level=info msg="StartContainer for \"312df8aeb655bcceca329ede07ef2b600211d3b0ca2dabc8adc710e892b1c949\" returns successfully"
Dec 13 14:17:07.739979 systemd[1]: run-containerd-runc-k8s.io-312df8aeb655bcceca329ede07ef2b600211d3b0ca2dabc8adc710e892b1c949-runc.305CMD.mount: Deactivated successfully.
Dec 13 14:17:07.877568 systemd[1]: cri-containerd-f0f8b0e6f308cb7f556771917f71cdc3ac8ea44a4df13c86326e34ea8eb9d6e0.scope: Deactivated successfully.
Dec 13 14:17:07.878967 systemd[1]: cri-containerd-f0f8b0e6f308cb7f556771917f71cdc3ac8ea44a4df13c86326e34ea8eb9d6e0.scope: Consumed 6.000s CPU time, 20.3M memory peak, 0B memory swap peak.
Dec 13 14:17:07.907562 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f0f8b0e6f308cb7f556771917f71cdc3ac8ea44a4df13c86326e34ea8eb9d6e0-rootfs.mount: Deactivated successfully.
Dec 13 14:17:07.915044 containerd[1486]: time="2024-12-13T14:17:07.914640060Z" level=info msg="shim disconnected" id=f0f8b0e6f308cb7f556771917f71cdc3ac8ea44a4df13c86326e34ea8eb9d6e0 namespace=k8s.io
Dec 13 14:17:07.915044 containerd[1486]: time="2024-12-13T14:17:07.914694179Z" level=warning msg="cleaning up after shim disconnected" id=f0f8b0e6f308cb7f556771917f71cdc3ac8ea44a4df13c86326e34ea8eb9d6e0 namespace=k8s.io
Dec 13 14:17:07.915044 containerd[1486]: time="2024-12-13T14:17:07.914748458Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 13 14:17:08.529961 kubelet[2779]: I1213 14:17:08.529732 2779 scope.go:117] "RemoveContainer" containerID="f0f8b0e6f308cb7f556771917f71cdc3ac8ea44a4df13c86326e34ea8eb9d6e0"
Dec 13 14:17:08.537545 containerd[1486]: time="2024-12-13T14:17:08.535601276Z" level=info msg="CreateContainer within sandbox \"a01a5252e40c756724d3ffcb7995c8bb8ad13de11510b02e9691bf96bd84d3d2\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Dec 13 14:17:08.560123 containerd[1486]: time="2024-12-13T14:17:08.560084484Z" level=info msg="CreateContainer within sandbox \"a01a5252e40c756724d3ffcb7995c8bb8ad13de11510b02e9691bf96bd84d3d2\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"e455512dad11d108deadb01de4b4ddd9314a2c0ee5e08ae4b781bf4d52efdf4c\""
Dec 13 14:17:08.561046 containerd[1486]: time="2024-12-13T14:17:08.561024154Z" level=info msg="StartContainer for \"e455512dad11d108deadb01de4b4ddd9314a2c0ee5e08ae4b781bf4d52efdf4c\""
Dec 13 14:17:08.598054 systemd[1]: Started cri-containerd-e455512dad11d108deadb01de4b4ddd9314a2c0ee5e08ae4b781bf4d52efdf4c.scope - libcontainer container e455512dad11d108deadb01de4b4ddd9314a2c0ee5e08ae4b781bf4d52efdf4c.
Dec 13 14:17:08.639435 containerd[1486]: time="2024-12-13T14:17:08.639027889Z" level=info msg="StartContainer for \"e455512dad11d108deadb01de4b4ddd9314a2c0ee5e08ae4b781bf4d52efdf4c\" returns successfully"
Dec 13 14:17:11.882569 kubelet[2779]: E1213 14:17:11.882395 2779 event.go:359] "Server rejected event (will not retry!)" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:38608->10.0.0.2:2379: read: connection timed out" event="&Event{ObjectMeta:{kube-apiserver-ci-4081-2-1-a-7dfc9bce8d.1810c23c751c03da kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-ci-4081-2-1-a-7dfc9bce8d,UID:de35788c7594c955510759696d9a1ada,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Readiness probe failed: HTTP probe failed with statuscode: 500,Source:EventSource{Component:kubelet,Host:ci-4081-2-1-a-7dfc9bce8d,},FirstTimestamp:2024-12-13 14:17:01.45623753 +0000 UTC m=+390.150774922,LastTimestamp:2024-12-13 14:17:01.45623753 +0000 UTC m=+390.150774922,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-2-1-a-7dfc9bce8d,}"