Nov 12 17:49:08.906275 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] Nov 12 17:49:08.906296 kernel: Linux version 6.6.60-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Tue Nov 12 16:24:35 -00 2024 Nov 12 17:49:08.906305 kernel: KASLR enabled Nov 12 17:49:08.906311 kernel: efi: EFI v2.7 by EDK II Nov 12 17:49:08.906317 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdba86018 ACPI 2.0=0xd9710018 RNG=0xd971e498 MEMRESERVE=0xd9b43d18 Nov 12 17:49:08.906323 kernel: random: crng init done Nov 12 17:49:08.906330 kernel: ACPI: Early table checksum verification disabled Nov 12 17:49:08.906335 kernel: ACPI: RSDP 0x00000000D9710018 000024 (v02 BOCHS ) Nov 12 17:49:08.906342 kernel: ACPI: XSDT 0x00000000D971FE98 000064 (v01 BOCHS BXPC 00000001 01000013) Nov 12 17:49:08.906349 kernel: ACPI: FACP 0x00000000D971FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001) Nov 12 17:49:08.906355 kernel: ACPI: DSDT 0x00000000D9717518 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001) Nov 12 17:49:08.906361 kernel: ACPI: APIC 0x00000000D971FC18 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001) Nov 12 17:49:08.906367 kernel: ACPI: PPTT 0x00000000D971D898 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001) Nov 12 17:49:08.906372 kernel: ACPI: GTDT 0x00000000D971E818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001) Nov 12 17:49:08.906380 kernel: ACPI: MCFG 0x00000000D971E918 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Nov 12 17:49:08.906388 kernel: ACPI: SPCR 0x00000000D971FF98 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001) Nov 12 17:49:08.906394 kernel: ACPI: DBG2 0x00000000D971E418 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001) Nov 12 17:49:08.906400 kernel: ACPI: IORT 0x00000000D971E718 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001) Nov 12 17:49:08.906407 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600 Nov 12 17:49:08.906413 kernel: NUMA: Failed to initialise from firmware Nov 12 17:49:08.906419 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff] Nov 12 17:49:08.906425 kernel: NUMA: NODE_DATA [mem 0xdc958800-0xdc95dfff] Nov 12 17:49:08.906432 kernel: Zone ranges: Nov 12 17:49:08.906438 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff] Nov 12 17:49:08.906444 kernel: DMA32 empty Nov 12 17:49:08.906452 kernel: Normal empty Nov 12 17:49:08.906458 kernel: Movable zone start for each node Nov 12 17:49:08.906464 kernel: Early memory node ranges Nov 12 17:49:08.906470 kernel: node 0: [mem 0x0000000040000000-0x00000000d976ffff] Nov 12 17:49:08.906477 kernel: node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff] Nov 12 17:49:08.906483 kernel: node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff] Nov 12 17:49:08.906489 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff] Nov 12 17:49:08.906495 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff] Nov 12 17:49:08.906502 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff] Nov 12 17:49:08.906508 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff] Nov 12 17:49:08.906514 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff] Nov 12 17:49:08.906521 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges Nov 12 17:49:08.906528 kernel: psci: probing for conduit method from ACPI. Nov 12 17:49:08.906535 kernel: psci: PSCIv1.1 detected in firmware. 
Nov 12 17:49:08.906548 kernel: psci: Using standard PSCI v0.2 function IDs Nov 12 17:49:08.906559 kernel: psci: Trusted OS migration not required Nov 12 17:49:08.906566 kernel: psci: SMC Calling Convention v1.1 Nov 12 17:49:08.906573 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003) Nov 12 17:49:08.906581 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976 Nov 12 17:49:08.906588 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096 Nov 12 17:49:08.906595 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3 Nov 12 17:49:08.906602 kernel: Detected PIPT I-cache on CPU0 Nov 12 17:49:08.906608 kernel: CPU features: detected: GIC system register CPU interface Nov 12 17:49:08.906615 kernel: CPU features: detected: Hardware dirty bit management Nov 12 17:49:08.906621 kernel: CPU features: detected: Spectre-v4 Nov 12 17:49:08.906628 kernel: CPU features: detected: Spectre-BHB Nov 12 17:49:08.906635 kernel: CPU features: kernel page table isolation forced ON by KASLR Nov 12 17:49:08.906641 kernel: CPU features: detected: Kernel page table isolation (KPTI) Nov 12 17:49:08.906649 kernel: CPU features: detected: ARM erratum 1418040 Nov 12 17:49:08.906656 kernel: CPU features: detected: SSBS not fully self-synchronizing Nov 12 17:49:08.906663 kernel: alternatives: applying boot alternatives Nov 12 17:49:08.906670 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=8c276c03cfeb31103ba0b5f1af613bdc698463ad3d29e6750e34154929bf187e Nov 12 17:49:08.906677 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Nov 12 17:49:08.906684 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Nov 12 17:49:08.906691 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Nov 12 17:49:08.906697 kernel: Fallback order for Node 0: 0 Nov 12 17:49:08.906704 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024 Nov 12 17:49:08.906711 kernel: Policy zone: DMA Nov 12 17:49:08.906717 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Nov 12 17:49:08.906725 kernel: software IO TLB: area num 4. Nov 12 17:49:08.906732 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB) Nov 12 17:49:08.906739 kernel: Memory: 2386532K/2572288K available (10240K kernel code, 2184K rwdata, 8096K rodata, 39360K init, 897K bss, 185756K reserved, 0K cma-reserved) Nov 12 17:49:08.906746 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Nov 12 17:49:08.906752 kernel: trace event string verifier disabled Nov 12 17:49:08.906759 kernel: rcu: Preemptible hierarchical RCU implementation. Nov 12 17:49:08.906766 kernel: rcu: RCU event tracing is enabled. Nov 12 17:49:08.906773 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Nov 12 17:49:08.906843 kernel: Trampoline variant of Tasks RCU enabled. Nov 12 17:49:08.906850 kernel: Tracing variant of Tasks RCU enabled. Nov 12 17:49:08.906857 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Nov 12 17:49:08.906864 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Nov 12 17:49:08.906873 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Nov 12 17:49:08.906880 kernel: GICv3: 256 SPIs implemented Nov 12 17:49:08.906886 kernel: GICv3: 0 Extended SPIs implemented Nov 12 17:49:08.906893 kernel: Root IRQ handler: gic_handle_irq Nov 12 17:49:08.906900 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI Nov 12 17:49:08.906906 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000 Nov 12 17:49:08.906913 kernel: ITS [mem 0x08080000-0x0809ffff] Nov 12 17:49:08.906920 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400c0000 (indirect, esz 8, psz 64K, shr 1) Nov 12 17:49:08.906927 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400d0000 (flat, esz 8, psz 64K, shr 1) Nov 12 17:49:08.906934 kernel: GICv3: using LPI property table @0x00000000400f0000 Nov 12 17:49:08.906941 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000 Nov 12 17:49:08.906949 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Nov 12 17:49:08.906955 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Nov 12 17:49:08.906962 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). Nov 12 17:49:08.906969 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns Nov 12 17:49:08.906976 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns Nov 12 17:49:08.906983 kernel: arm-pv: using stolen time PV Nov 12 17:49:08.906990 kernel: Console: colour dummy device 80x25 Nov 12 17:49:08.906997 kernel: ACPI: Core revision 20230628 Nov 12 17:49:08.907004 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) Nov 12 17:49:08.907011 kernel: pid_max: default: 32768 minimum: 301 Nov 12 17:49:08.907019 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Nov 12 17:49:08.907026 kernel: landlock: Up and running. Nov 12 17:49:08.907033 kernel: SELinux: Initializing. Nov 12 17:49:08.907040 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Nov 12 17:49:08.907047 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Nov 12 17:49:08.907054 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Nov 12 17:49:08.907061 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Nov 12 17:49:08.907068 kernel: rcu: Hierarchical SRCU implementation. Nov 12 17:49:08.907075 kernel: rcu: Max phase no-delay instances is 400. Nov 12 17:49:08.907083 kernel: Platform MSI: ITS@0x8080000 domain created Nov 12 17:49:08.907090 kernel: PCI/MSI: ITS@0x8080000 domain created Nov 12 17:49:08.907096 kernel: Remapping and enabling EFI services. Nov 12 17:49:08.907103 kernel: smp: Bringing up secondary CPUs ... 
Nov 12 17:49:08.907110 kernel: Detected PIPT I-cache on CPU1 Nov 12 17:49:08.907117 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000 Nov 12 17:49:08.907124 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000 Nov 12 17:49:08.907131 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Nov 12 17:49:08.907138 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] Nov 12 17:49:08.907145 kernel: Detected PIPT I-cache on CPU2 Nov 12 17:49:08.907153 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000 Nov 12 17:49:08.907161 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000 Nov 12 17:49:08.907173 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Nov 12 17:49:08.907181 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1] Nov 12 17:49:08.907189 kernel: Detected PIPT I-cache on CPU3 Nov 12 17:49:08.907196 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000 Nov 12 17:49:08.907203 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000 Nov 12 17:49:08.907211 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Nov 12 17:49:08.907218 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1] Nov 12 17:49:08.907226 kernel: smp: Brought up 1 node, 4 CPUs Nov 12 17:49:08.907234 kernel: SMP: Total of 4 processors activated. Nov 12 17:49:08.907241 kernel: CPU features: detected: 32-bit EL0 Support Nov 12 17:49:08.907249 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Nov 12 17:49:08.907256 kernel: CPU features: detected: Common not Private translations Nov 12 17:49:08.907263 kernel: CPU features: detected: CRC32 instructions Nov 12 17:49:08.907271 kernel: CPU features: detected: Enhanced Virtualization Traps Nov 12 17:49:08.907278 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Nov 12 17:49:08.907286 kernel: CPU features: detected: LSE atomic instructions Nov 12 17:49:08.907294 kernel: CPU features: detected: Privileged Access Never Nov 12 17:49:08.907301 kernel: CPU features: detected: RAS Extension Support Nov 12 17:49:08.907308 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS) Nov 12 17:49:08.907315 kernel: CPU: All CPU(s) started at EL1 Nov 12 17:49:08.907323 kernel: alternatives: applying system-wide alternatives Nov 12 17:49:08.907330 kernel: devtmpfs: initialized Nov 12 17:49:08.907337 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Nov 12 17:49:08.907345 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Nov 12 17:49:08.907354 kernel: pinctrl core: initialized pinctrl subsystem Nov 12 17:49:08.907361 kernel: SMBIOS 3.0.0 present. 
Nov 12 17:49:08.907368 kernel: DMI: QEMU KVM Virtual Machine, BIOS edk2-20230524-3.fc38 05/24/2023 Nov 12 17:49:08.907375 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Nov 12 17:49:08.907383 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Nov 12 17:49:08.907390 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Nov 12 17:49:08.907397 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Nov 12 17:49:08.907405 kernel: audit: initializing netlink subsys (disabled) Nov 12 17:49:08.907412 kernel: audit: type=2000 audit(0.023:1): state=initialized audit_enabled=0 res=1 Nov 12 17:49:08.907420 kernel: thermal_sys: Registered thermal governor 'step_wise' Nov 12 17:49:08.907427 kernel: cpuidle: using governor menu Nov 12 17:49:08.907435 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. Nov 12 17:49:08.907442 kernel: ASID allocator initialised with 32768 entries Nov 12 17:49:08.907449 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Nov 12 17:49:08.907456 kernel: Serial: AMBA PL011 UART driver Nov 12 17:49:08.907464 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL Nov 12 17:49:08.907471 kernel: Modules: 0 pages in range for non-PLT usage Nov 12 17:49:08.907478 kernel: Modules: 509040 pages in range for PLT usage Nov 12 17:49:08.907486 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Nov 12 17:49:08.907494 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Nov 12 17:49:08.907501 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Nov 12 17:49:08.907508 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Nov 12 17:49:08.907516 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Nov 12 17:49:08.907523 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Nov 12 17:49:08.907530 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Nov 12 17:49:08.907537 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Nov 12 17:49:08.907550 kernel: ACPI: Added _OSI(Module Device) Nov 12 17:49:08.907559 kernel: ACPI: Added _OSI(Processor Device) Nov 12 17:49:08.907566 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Nov 12 17:49:08.907573 kernel: ACPI: Added _OSI(Processor Aggregator Device) Nov 12 17:49:08.907581 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Nov 12 17:49:08.907588 kernel: ACPI: Interpreter enabled Nov 12 17:49:08.907595 kernel: ACPI: Using GIC for interrupt routing Nov 12 17:49:08.907602 kernel: ACPI: MCFG table detected, 1 entries Nov 12 17:49:08.907609 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA Nov 12 17:49:08.907616 kernel: printk: console [ttyAMA0] enabled Nov 12 17:49:08.907625 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Nov 12 17:49:08.907755 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Nov 12 17:49:08.907843 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] Nov 12 17:49:08.907911 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Nov 12 17:49:08.907975 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00 Nov 12 17:49:08.908038 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff] Nov 12 17:49:08.908047 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Nov 12 17:49:08.908057 kernel: PCI host bridge to bus 0000:00 Nov 12 17:49:08.908127 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window] Nov 12 17:49:08.908185 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window] Nov 12 17:49:08.908243 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window] Nov 12 17:49:08.908300 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Nov 12 17:49:08.908379 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 Nov 12 17:49:08.908453 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 Nov 12 17:49:08.908521 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f] Nov 12 17:49:08.908598 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff] Nov 12 17:49:08.908663 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref] Nov 12 17:49:08.908727 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref] Nov 12 17:49:08.908800 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff] Nov 12 17:49:08.908867 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f] Nov 12 17:49:08.908926 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window] Nov 12 17:49:08.908985 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] Nov 12 17:49:08.909041 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window] Nov 12 17:49:08.909051 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 Nov 12 17:49:08.909059 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 Nov 12 17:49:08.909066 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 Nov 12 17:49:08.909073 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 Nov 12 17:49:08.909080 kernel: iommu: Default domain type: Translated Nov 12 17:49:08.909088 kernel: iommu: DMA domain TLB invalidation policy: strict mode Nov 12 17:49:08.909097 kernel: efivars: Registered efivars operations Nov 12 17:49:08.909104 kernel: vgaarb: loaded Nov 12 17:49:08.909112 kernel: clocksource: Switched to clocksource arch_sys_counter Nov 12 17:49:08.909119 kernel: VFS: Disk quotas dquot_6.6.0 Nov 12 17:49:08.909126 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Nov 12 17:49:08.909134 kernel: pnp: PnP ACPI init Nov 12 17:49:08.909207 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved Nov 12 17:49:08.909218 kernel: pnp: PnP ACPI: found 1 devices Nov 12 17:49:08.909226 kernel: NET: Registered PF_INET protocol family Nov 12 17:49:08.909234 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Nov 12 17:49:08.909241 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Nov 12 17:49:08.909249 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Nov 12 17:49:08.909256 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Nov 12 17:49:08.909264 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Nov 12 17:49:08.909271 kernel: TCP: Hash tables configured (established 32768 bind 32768) Nov 12 17:49:08.909278 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Nov 12 17:49:08.909285 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Nov 12 17:49:08.909294 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Nov 12 17:49:08.909301 kernel: PCI: CLS 0 bytes, default 64
Nov 12 17:49:08.909308 kernel: kvm [1]: HYP mode not available Nov 12 17:49:08.909316 kernel: Initialise system trusted keyrings Nov 12 17:49:08.909323 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Nov 12 17:49:08.909330 kernel: Key type asymmetric registered Nov 12 17:49:08.909337 kernel: Asymmetric key parser 'x509' registered Nov 12 17:49:08.909345 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Nov 12 17:49:08.909352 kernel: io scheduler mq-deadline registered Nov 12 17:49:08.909360 kernel: io scheduler kyber registered Nov 12 17:49:08.909367 kernel: io scheduler bfq registered Nov 12 17:49:08.909375 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Nov 12 17:49:08.909382 kernel: ACPI: button: Power Button [PWRB] Nov 12 17:49:08.909390 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Nov 12 17:49:08.909454 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007) Nov 12 17:49:08.909464 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Nov 12 17:49:08.909471 kernel: thunder_xcv, ver 1.0 Nov 12 17:49:08.909478 kernel: thunder_bgx, ver 1.0 Nov 12 17:49:08.909487 kernel: nicpf, ver 1.0 Nov 12 17:49:08.909494 kernel: nicvf, ver 1.0 Nov 12 17:49:08.909575 kernel: rtc-efi rtc-efi.0: registered as rtc0 Nov 12 17:49:08.909638 kernel: rtc-efi rtc-efi.0: setting system clock to 2024-11-12T17:49:08 UTC (1731433748) Nov 12 17:49:08.909647 kernel: hid: raw HID events driver (C) Jiri Kosina Nov 12 17:49:08.909655 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available Nov 12 17:49:08.909662 kernel: watchdog: Delayed init of the lockup detector failed: -19 Nov 12 17:49:08.909670 kernel: watchdog: Hard watchdog permanently disabled Nov 12 17:49:08.909679 kernel: NET: Registered PF_INET6 protocol family Nov 12 17:49:08.909686 kernel: Segment Routing with IPv6 Nov 12 17:49:08.909693 kernel: In-situ OAM (IOAM) with IPv6 Nov 12 17:49:08.909700 kernel: NET: Registered PF_PACKET protocol family Nov 12 17:49:08.909708 kernel: Key type dns_resolver registered Nov 12 17:49:08.909715 kernel: registered taskstats version 1 Nov 12 17:49:08.909722 kernel: Loading compiled-in X.509 certificates Nov 12 17:49:08.909729 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.60-flatcar: 277bea35d8d47c9841f307ab609d4271c3622dcb' Nov 12 17:49:08.909737 kernel: Key type .fscrypt registered Nov 12 17:49:08.909745 kernel: Key type fscrypt-provisioning registered Nov 12 17:49:08.909752 kernel: ima: No TPM chip found, activating TPM-bypass!
Nov 12 17:49:08.909760 kernel: ima: Allocated hash algorithm: sha1 Nov 12 17:49:08.909767 kernel: ima: No architecture policies found Nov 12 17:49:08.909774 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Nov 12 17:49:08.909790 kernel: clk: Disabling unused clocks Nov 12 17:49:08.909797 kernel: Freeing unused kernel memory: 39360K Nov 12 17:49:08.909804 kernel: Run /init as init process Nov 12 17:49:08.909811 kernel: with arguments: Nov 12 17:49:08.909820 kernel: /init Nov 12 17:49:08.909827 kernel: with environment: Nov 12 17:49:08.909834 kernel: HOME=/ Nov 12 17:49:08.909842 kernel: TERM=linux Nov 12 17:49:08.909849 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Nov 12 17:49:08.909858 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Nov 12 17:49:08.909867 systemd[1]: Detected virtualization kvm. Nov 12 17:49:08.909875 systemd[1]: Detected architecture arm64. Nov 12 17:49:08.909884 systemd[1]: Running in initrd. Nov 12 17:49:08.909891 systemd[1]: No hostname configured, using default hostname. Nov 12 17:49:08.909899 systemd[1]: Hostname set to . Nov 12 17:49:08.909907 systemd[1]: Initializing machine ID from VM UUID. Nov 12 17:49:08.909914 systemd[1]: Queued start job for default target initrd.target. Nov 12 17:49:08.909922 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 12 17:49:08.909930 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 12 17:49:08.909939 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Nov 12 17:49:08.909948 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Nov 12 17:49:08.909956 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Nov 12 17:49:08.909964 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Nov 12 17:49:08.909973 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Nov 12 17:49:08.909981 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Nov 12 17:49:08.909989 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 12 17:49:08.909999 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Nov 12 17:49:08.910006 systemd[1]: Reached target paths.target - Path Units. Nov 12 17:49:08.910014 systemd[1]: Reached target slices.target - Slice Units. Nov 12 17:49:08.910022 systemd[1]: Reached target swap.target - Swaps. Nov 12 17:49:08.910030 systemd[1]: Reached target timers.target - Timer Units. Nov 12 17:49:08.910037 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Nov 12 17:49:08.910045 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Nov 12 17:49:08.910053 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Nov 12 17:49:08.910061 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Nov 12 17:49:08.910070 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. 
Nov 12 17:49:08.910078 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Nov 12 17:49:08.910086 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Nov 12 17:49:08.910094 systemd[1]: Reached target sockets.target - Socket Units. Nov 12 17:49:08.910102 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Nov 12 17:49:08.910110 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Nov 12 17:49:08.910118 systemd[1]: Finished network-cleanup.service - Network Cleanup. Nov 12 17:49:08.910125 systemd[1]: Starting systemd-fsck-usr.service... Nov 12 17:49:08.910133 systemd[1]: Starting systemd-journald.service - Journal Service... Nov 12 17:49:08.910142 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Nov 12 17:49:08.910150 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 12 17:49:08.910158 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Nov 12 17:49:08.910166 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Nov 12 17:49:08.910174 systemd[1]: Finished systemd-fsck-usr.service. Nov 12 17:49:08.910182 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Nov 12 17:49:08.910192 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Nov 12 17:49:08.910214 systemd-journald[237]: Collecting audit messages is disabled. Nov 12 17:49:08.910234 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 12 17:49:08.910242 systemd-journald[237]: Journal started Nov 12 17:49:08.910261 systemd-journald[237]: Runtime Journal (/run/log/journal/b7238c766ec447ccb9547348335f69bb) is 5.9M, max 47.3M, 41.4M free. Nov 12 17:49:08.901668 systemd-modules-load[238]: Inserted module 'overlay' Nov 12 17:49:08.913806 systemd[1]: Started systemd-journald.service - Journal Service. Nov 12 17:49:08.913830 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Nov 12 17:49:08.916999 systemd-modules-load[238]: Inserted module 'br_netfilter' Nov 12 17:49:08.917858 kernel: Bridge firewalling registered Nov 12 17:49:08.917695 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Nov 12 17:49:08.926902 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Nov 12 17:49:08.928996 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Nov 12 17:49:08.931948 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Nov 12 17:49:08.935458 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Nov 12 17:49:08.943501 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 12 17:49:08.944923 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Nov 12 17:49:08.947012 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 12 17:49:08.949033 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 12 17:49:08.952614 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Nov 12 17:49:08.954967 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... 
Nov 12 17:49:08.967031 dracut-cmdline[277]: dracut-dracut-053 Nov 12 17:49:08.969450 dracut-cmdline[277]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=8c276c03cfeb31103ba0b5f1af613bdc698463ad3d29e6750e34154929bf187e Nov 12 17:49:08.984399 systemd-resolved[278]: Positive Trust Anchors: Nov 12 17:49:08.984418 systemd-resolved[278]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Nov 12 17:49:08.984449 systemd-resolved[278]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Nov 12 17:49:08.989266 systemd-resolved[278]: Defaulting to hostname 'linux'. Nov 12 17:49:08.990227 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Nov 12 17:49:08.993677 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Nov 12 17:49:09.035792 kernel: SCSI subsystem initialized Nov 12 17:49:09.040806 kernel: Loading iSCSI transport class v2.0-870. Nov 12 17:49:09.048814 kernel: iscsi: registered transport (tcp) Nov 12 17:49:09.061132 kernel: iscsi: registered transport (qla4xxx) Nov 12 17:49:09.061156 kernel: QLogic iSCSI HBA Driver Nov 12 17:49:09.101620 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Nov 12 17:49:09.109987 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Nov 12 17:49:09.126867 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Nov 12 17:49:09.126900 kernel: device-mapper: uevent: version 1.0.3 Nov 12 17:49:09.127941 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Nov 12 17:49:09.174815 kernel: raid6: neonx8 gen() 15687 MB/s Nov 12 17:49:09.191819 kernel: raid6: neonx4 gen() 15563 MB/s Nov 12 17:49:09.208811 kernel: raid6: neonx2 gen() 13122 MB/s Nov 12 17:49:09.225812 kernel: raid6: neonx1 gen() 10388 MB/s Nov 12 17:49:09.242811 kernel: raid6: int64x8 gen() 6899 MB/s Nov 12 17:49:09.259812 kernel: raid6: int64x4 gen() 7297 MB/s Nov 12 17:49:09.276812 kernel: raid6: int64x2 gen() 6101 MB/s Nov 12 17:49:09.293882 kernel: raid6: int64x1 gen() 5031 MB/s Nov 12 17:49:09.293908 kernel: raid6: using algorithm neonx8 gen() 15687 MB/s Nov 12 17:49:09.311872 kernel: raid6: .... xor() 11870 MB/s, rmw enabled Nov 12 17:49:09.311904 kernel: raid6: using neon recovery algorithm Nov 12 17:49:09.317264 kernel: xor: measuring software checksum speed Nov 12 17:49:09.317277 kernel: 8regs : 19276 MB/sec Nov 12 17:49:09.317937 kernel: 32regs : 19664 MB/sec Nov 12 17:49:09.319160 kernel: arm64_neon : 26734 MB/sec Nov 12 17:49:09.319171 kernel: xor: using function: arm64_neon (26734 MB/sec) Nov 12 17:49:09.370808 kernel: Btrfs loaded, zoned=no, fsverity=no Nov 12 17:49:09.381396 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. 
Nov 12 17:49:09.400929 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 12 17:49:09.412961 systemd-udevd[463]: Using default interface naming scheme 'v255'. Nov 12 17:49:09.416170 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 12 17:49:09.422919 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Nov 12 17:49:09.434258 dracut-pre-trigger[471]: rd.md=0: removing MD RAID activation Nov 12 17:49:09.460459 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Nov 12 17:49:09.472915 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Nov 12 17:49:09.512841 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Nov 12 17:49:09.522962 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Nov 12 17:49:09.532960 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Nov 12 17:49:09.534605 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Nov 12 17:49:09.536294 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 12 17:49:09.538857 systemd[1]: Reached target remote-fs.target - Remote File Systems. Nov 12 17:49:09.546917 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Nov 12 17:49:09.557025 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Nov 12 17:49:09.562984 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Nov 12 17:49:09.563174 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 12 17:49:09.568115 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues Nov 12 17:49:09.570693 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Nov 12 17:49:09.570797 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Nov 12 17:49:09.570809 kernel: GPT:9289727 != 19775487 Nov 12 17:49:09.570818 kernel: GPT:Alternate GPT header not at the end of the disk. Nov 12 17:49:09.570827 kernel: GPT:9289727 != 19775487 Nov 12 17:49:09.570836 kernel: GPT: Use GNU Parted to correct GPT errors. Nov 12 17:49:09.570849 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Nov 12 17:49:09.567238 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Nov 12 17:49:09.570012 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 12 17:49:09.570207 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 12 17:49:09.575875 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Nov 12 17:49:09.588027 kernel: BTRFS: device fsid 93a9d474-e751-47b7-a65f-e39ca9abd47a devid 1 transid 40 /dev/vda3 scanned by (udev-worker) (519) Nov 12 17:49:09.588081 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (509) Nov 12 17:49:09.590023 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 12 17:49:09.602216 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Nov 12 17:49:09.607821 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 12 17:49:09.612709 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. 
Nov 12 17:49:09.619429 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Nov 12 17:49:09.620686 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Nov 12 17:49:09.626369 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Nov 12 17:49:09.641934 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Nov 12 17:49:09.643747 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Nov 12 17:49:09.648467 disk-uuid[551]: Primary Header is updated. Nov 12 17:49:09.648467 disk-uuid[551]: Secondary Entries is updated. Nov 12 17:49:09.648467 disk-uuid[551]: Secondary Header is updated. Nov 12 17:49:09.657881 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Nov 12 17:49:09.660602 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 12 17:49:09.663577 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Nov 12 17:49:09.666794 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Nov 12 17:49:10.668837 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Nov 12 17:49:10.668907 disk-uuid[552]: The operation has completed successfully. Nov 12 17:49:10.690893 systemd[1]: disk-uuid.service: Deactivated successfully. Nov 12 17:49:10.691014 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Nov 12 17:49:10.711972 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Nov 12 17:49:10.714860 sh[575]: Success Nov 12 17:49:10.725815 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Nov 12 17:49:10.755615 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Nov 12 17:49:10.771156 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Nov 12 17:49:10.774534 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Nov 12 17:49:10.782086 kernel: BTRFS info (device dm-0): first mount of filesystem 93a9d474-e751-47b7-a65f-e39ca9abd47a Nov 12 17:49:10.782118 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Nov 12 17:49:10.782128 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Nov 12 17:49:10.783888 kernel: BTRFS info (device dm-0): disabling log replay at mount time Nov 12 17:49:10.784788 kernel: BTRFS info (device dm-0): using free space tree Nov 12 17:49:10.787585 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Nov 12 17:49:10.788939 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Nov 12 17:49:10.804912 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Nov 12 17:49:10.807044 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Nov 12 17:49:10.813335 kernel: BTRFS info (device vda6): first mount of filesystem 936a2172-6c61-4af6-a047-e38e0a3ff18b Nov 12 17:49:10.813391 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Nov 12 17:49:10.813403 kernel: BTRFS info (device vda6): using free space tree Nov 12 17:49:10.816800 kernel: BTRFS info (device vda6): auto enabling async discard Nov 12 17:49:10.823075 systemd[1]: mnt-oem.mount: Deactivated successfully. 
Nov 12 17:49:10.824809 kernel: BTRFS info (device vda6): last unmount of filesystem 936a2172-6c61-4af6-a047-e38e0a3ff18b Nov 12 17:49:10.829343 systemd[1]: Finished ignition-setup.service - Ignition (setup). Nov 12 17:49:10.835954 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Nov 12 17:49:10.901106 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Nov 12 17:49:10.912931 systemd[1]: Starting systemd-networkd.service - Network Configuration... Nov 12 17:49:10.935271 ignition[664]: Ignition 2.19.0 Nov 12 17:49:10.935281 ignition[664]: Stage: fetch-offline Nov 12 17:49:10.935322 ignition[664]: no configs at "/usr/lib/ignition/base.d" Nov 12 17:49:10.936572 systemd-networkd[767]: lo: Link UP Nov 12 17:49:10.935330 ignition[664]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Nov 12 17:49:10.936576 systemd-networkd[767]: lo: Gained carrier Nov 12 17:49:10.935480 ignition[664]: parsed url from cmdline: "" Nov 12 17:49:10.937463 systemd-networkd[767]: Enumeration completed Nov 12 17:49:10.935483 ignition[664]: no config URL provided Nov 12 17:49:10.937662 systemd[1]: Started systemd-networkd.service - Network Configuration. Nov 12 17:49:10.935488 ignition[664]: reading system config file "/usr/lib/ignition/user.ign" Nov 12 17:49:10.937906 systemd-networkd[767]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 12 17:49:10.935494 ignition[664]: no config at "/usr/lib/ignition/user.ign" Nov 12 17:49:10.937909 systemd-networkd[767]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Nov 12 17:49:10.935515 ignition[664]: op(1): [started] loading QEMU firmware config module Nov 12 17:49:10.939471 systemd-networkd[767]: eth0: Link UP Nov 12 17:49:10.935519 ignition[664]: op(1): executing: "modprobe" "qemu_fw_cfg" Nov 12 17:49:10.939474 systemd-networkd[767]: eth0: Gained carrier Nov 12 17:49:10.943925 ignition[664]: op(1): [finished] loading QEMU firmware config module Nov 12 17:49:10.939482 systemd-networkd[767]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 12 17:49:10.939841 systemd[1]: Reached target network.target - Network. Nov 12 17:49:10.958820 systemd-networkd[767]: eth0: DHCPv4 address 10.0.0.80/16, gateway 10.0.0.1 acquired from 10.0.0.1 Nov 12 17:49:10.991440 ignition[664]: parsing config with SHA512: 74aa72563f2cc49fae760d35fad767c63eeb32dcc26df44cd710b7194c673dd03e64dc647389dae1d5326bf54b450a4ca51160f58a1d56301ec10f19a2003910 Nov 12 17:49:10.995453 unknown[664]: fetched base config from "system" Nov 12 17:49:10.995462 unknown[664]: fetched user config from "qemu" Nov 12 17:49:10.996228 ignition[664]: fetch-offline: fetch-offline passed Nov 12 17:49:10.999151 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Nov 12 17:49:10.996308 ignition[664]: Ignition finished successfully Nov 12 17:49:11.000466 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Nov 12 17:49:11.010988 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
Nov 12 17:49:11.021014 ignition[774]: Ignition 2.19.0 Nov 12 17:49:11.021023 ignition[774]: Stage: kargs Nov 12 17:49:11.021177 ignition[774]: no configs at "/usr/lib/ignition/base.d" Nov 12 17:49:11.021186 ignition[774]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Nov 12 17:49:11.022047 ignition[774]: kargs: kargs passed Nov 12 17:49:11.025857 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Nov 12 17:49:11.022095 ignition[774]: Ignition finished successfully Nov 12 17:49:11.036960 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Nov 12 17:49:11.046140 ignition[783]: Ignition 2.19.0 Nov 12 17:49:11.046150 ignition[783]: Stage: disks Nov 12 17:49:11.046301 ignition[783]: no configs at "/usr/lib/ignition/base.d" Nov 12 17:49:11.049118 systemd[1]: Finished ignition-disks.service - Ignition (disks). Nov 12 17:49:11.046309 ignition[783]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Nov 12 17:49:11.050285 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Nov 12 17:49:11.047231 ignition[783]: disks: disks passed Nov 12 17:49:11.052010 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Nov 12 17:49:11.047276 ignition[783]: Ignition finished successfully Nov 12 17:49:11.053157 systemd[1]: Reached target local-fs.target - Local File Systems. Nov 12 17:49:11.054920 systemd[1]: Reached target sysinit.target - System Initialization. Nov 12 17:49:11.056673 systemd[1]: Reached target basic.target - Basic System. Nov 12 17:49:11.071918 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Nov 12 17:49:11.081255 systemd-fsck[794]: ROOT: clean, 14/553520 files, 52654/553472 blocks Nov 12 17:49:11.085679 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Nov 12 17:49:11.093891 systemd[1]: Mounting sysroot.mount - /sysroot... Nov 12 17:49:11.137804 kernel: EXT4-fs (vda9): mounted filesystem b3af0fd7-3c7c-4cdc-9b88-dae3d10ea922 r/w with ordered data mode. Quota mode: none. Nov 12 17:49:11.137889 systemd[1]: Mounted sysroot.mount - /sysroot. Nov 12 17:49:11.139128 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Nov 12 17:49:11.151862 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Nov 12 17:49:11.154148 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Nov 12 17:49:11.155164 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Nov 12 17:49:11.155202 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Nov 12 17:49:11.155223 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Nov 12 17:49:11.161053 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Nov 12 17:49:11.163401 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Nov 12 17:49:11.168933 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (802) Nov 12 17:49:11.168958 kernel: BTRFS info (device vda6): first mount of filesystem 936a2172-6c61-4af6-a047-e38e0a3ff18b Nov 12 17:49:11.169000 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Nov 12 17:49:11.169012 kernel: BTRFS info (device vda6): using free space tree Nov 12 17:49:11.170796 kernel: BTRFS info (device vda6): auto enabling async discard Nov 12 17:49:11.172035 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Nov 12 17:49:11.207362 initrd-setup-root[826]: cut: /sysroot/etc/passwd: No such file or directory Nov 12 17:49:11.211743 initrd-setup-root[833]: cut: /sysroot/etc/group: No such file or directory Nov 12 17:49:11.215821 initrd-setup-root[840]: cut: /sysroot/etc/shadow: No such file or directory Nov 12 17:49:11.219810 initrd-setup-root[847]: cut: /sysroot/etc/gshadow: No such file or directory Nov 12 17:49:11.283146 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Nov 12 17:49:11.293867 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Nov 12 17:49:11.296193 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Nov 12 17:49:11.301811 kernel: BTRFS info (device vda6): last unmount of filesystem 936a2172-6c61-4af6-a047-e38e0a3ff18b Nov 12 17:49:11.315880 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Nov 12 17:49:11.317803 ignition[916]: INFO : Ignition 2.19.0 Nov 12 17:49:11.317803 ignition[916]: INFO : Stage: mount Nov 12 17:49:11.319338 ignition[916]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 12 17:49:11.319338 ignition[916]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Nov 12 17:49:11.319338 ignition[916]: INFO : mount: mount passed Nov 12 17:49:11.319338 ignition[916]: INFO : Ignition finished successfully Nov 12 17:49:11.321838 systemd[1]: Finished ignition-mount.service - Ignition (mount). Nov 12 17:49:11.332879 systemd[1]: Starting ignition-files.service - Ignition (files)... Nov 12 17:49:11.781035 systemd[1]: sysroot-oem.mount: Deactivated successfully. Nov 12 17:49:11.790955 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Nov 12 17:49:11.797672 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (930) Nov 12 17:49:11.797706 kernel: BTRFS info (device vda6): first mount of filesystem 936a2172-6c61-4af6-a047-e38e0a3ff18b Nov 12 17:49:11.797717 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Nov 12 17:49:11.799339 kernel: BTRFS info (device vda6): using free space tree Nov 12 17:49:11.801804 kernel: BTRFS info (device vda6): auto enabling async discard Nov 12 17:49:11.802882 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Nov 12 17:49:11.822950 ignition[947]: INFO : Ignition 2.19.0 Nov 12 17:49:11.822950 ignition[947]: INFO : Stage: files Nov 12 17:49:11.824498 ignition[947]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 12 17:49:11.824498 ignition[947]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Nov 12 17:49:11.824498 ignition[947]: DEBUG : files: compiled without relabeling support, skipping Nov 12 17:49:11.827862 ignition[947]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Nov 12 17:49:11.827862 ignition[947]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Nov 12 17:49:11.827862 ignition[947]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Nov 12 17:49:11.827862 ignition[947]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Nov 12 17:49:11.827862 ignition[947]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Nov 12 17:49:11.827862 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Nov 12 17:49:11.827862 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1 Nov 12 17:49:11.826742 unknown[947]: wrote ssh authorized keys file for user: core Nov 12 17:49:11.933127 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Nov 12 17:49:12.194146 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Nov 12 17:49:12.194146 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Nov 12 17:49:12.197764 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1 Nov 12 17:49:12.509991 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Nov 12 17:49:12.569264 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Nov 12 17:49:12.571223 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Nov 12 17:49:12.571223 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Nov 12 17:49:12.571223 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Nov 12 17:49:12.571223 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Nov 12 17:49:12.571223 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Nov 12 17:49:12.571223 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Nov 12 17:49:12.571223 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Nov 12 17:49:12.571223 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Nov 12 17:49:12.571223 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Nov 12 17:49:12.571223 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Nov 12 17:49:12.571223 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw" Nov 12 17:49:12.571223 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw" Nov 12 17:49:12.571223 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw" Nov 12 17:49:12.571223 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-arm64.raw: attempt #1 Nov 12 17:49:12.684162 systemd-networkd[767]: eth0: Gained IPv6LL Nov 12 17:49:12.806884 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Nov 12 17:49:13.081437 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw" Nov 12 17:49:13.083853 ignition[947]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Nov 12 17:49:13.083853 ignition[947]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Nov 12 17:49:13.083853 ignition[947]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Nov 12 17:49:13.083853 ignition[947]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Nov 12 17:49:13.083853 ignition[947]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" Nov 12 17:49:13.083853 ignition[947]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Nov 12 17:49:13.083853 ignition[947]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Nov 12 17:49:13.083853 ignition[947]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" Nov 12 17:49:13.083853 ignition[947]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service" Nov 12 17:49:13.104514 ignition[947]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service" Nov 12 17:49:13.108365 ignition[947]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service" Nov 12 17:49:13.111114 ignition[947]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service" Nov 12 17:49:13.111114 ignition[947]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service" Nov 12 17:49:13.111114 ignition[947]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service" Nov 12 17:49:13.111114 ignition[947]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
Nov 12 17:49:13.111114 ignition[947]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json" Nov 12 17:49:13.111114 ignition[947]: INFO : files: files passed Nov 12 17:49:13.111114 ignition[947]: INFO : Ignition finished successfully Nov 12 17:49:13.111459 systemd[1]: Finished ignition-files.service - Ignition (files). Nov 12 17:49:13.126215 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Nov 12 17:49:13.128947 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Nov 12 17:49:13.130232 systemd[1]: ignition-quench.service: Deactivated successfully. Nov 12 17:49:13.130312 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Nov 12 17:49:13.136130 initrd-setup-root-after-ignition[976]: grep: /sysroot/oem/oem-release: No such file or directory Nov 12 17:49:13.138953 initrd-setup-root-after-ignition[978]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Nov 12 17:49:13.138953 initrd-setup-root-after-ignition[978]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Nov 12 17:49:13.142662 initrd-setup-root-after-ignition[982]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Nov 12 17:49:13.143320 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Nov 12 17:49:13.145397 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Nov 12 17:49:13.168927 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Nov 12 17:49:13.186370 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Nov 12 17:49:13.186467 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Nov 12 17:49:13.188701 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Nov 12 17:49:13.190569 systemd[1]: Reached target initrd.target - Initrd Default Target. Nov 12 17:49:13.192331 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Nov 12 17:49:13.193048 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Nov 12 17:49:13.208008 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Nov 12 17:49:13.210355 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Nov 12 17:49:13.220579 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Nov 12 17:49:13.221818 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 12 17:49:13.223858 systemd[1]: Stopped target timers.target - Timer Units. Nov 12 17:49:13.225611 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Nov 12 17:49:13.225725 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Nov 12 17:49:13.228167 systemd[1]: Stopped target initrd.target - Initrd Default Target. Nov 12 17:49:13.230136 systemd[1]: Stopped target basic.target - Basic System. Nov 12 17:49:13.231707 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Nov 12 17:49:13.233402 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Nov 12 17:49:13.235295 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Nov 12 17:49:13.237201 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Nov 12 17:49:13.238978 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Nov 12 17:49:13.240882 systemd[1]: Stopped target sysinit.target - System Initialization. Nov 12 17:49:13.242862 systemd[1]: Stopped target local-fs.target - Local File Systems. Nov 12 17:49:13.244608 systemd[1]: Stopped target swap.target - Swaps. Nov 12 17:49:13.246101 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Nov 12 17:49:13.246216 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Nov 12 17:49:13.248590 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Nov 12 17:49:13.250514 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 12 17:49:13.252388 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Nov 12 17:49:13.252488 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 12 17:49:13.254432 systemd[1]: dracut-initqueue.service: Deactivated successfully. Nov 12 17:49:13.254558 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Nov 12 17:49:13.257283 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Nov 12 17:49:13.257403 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Nov 12 17:49:13.259320 systemd[1]: Stopped target paths.target - Path Units. Nov 12 17:49:13.260859 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Nov 12 17:49:13.260950 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 12 17:49:13.262871 systemd[1]: Stopped target slices.target - Slice Units. Nov 12 17:49:13.264623 systemd[1]: Stopped target sockets.target - Socket Units. Nov 12 17:49:13.266233 systemd[1]: iscsid.socket: Deactivated successfully. Nov 12 17:49:13.266320 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Nov 12 17:49:13.267960 systemd[1]: iscsiuio.socket: Deactivated successfully. Nov 12 17:49:13.268040 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Nov 12 17:49:13.270069 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Nov 12 17:49:13.270175 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Nov 12 17:49:13.271873 systemd[1]: ignition-files.service: Deactivated successfully. Nov 12 17:49:13.271972 systemd[1]: Stopped ignition-files.service - Ignition (files). Nov 12 17:49:13.283949 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Nov 12 17:49:13.285474 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Nov 12 17:49:13.286445 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Nov 12 17:49:13.286595 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Nov 12 17:49:13.288497 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Nov 12 17:49:13.288615 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Nov 12 17:49:13.295305 systemd[1]: initrd-cleanup.service: Deactivated successfully. Nov 12 17:49:13.296089 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. 
Nov 12 17:49:13.298651 ignition[1002]: INFO : Ignition 2.19.0 Nov 12 17:49:13.298651 ignition[1002]: INFO : Stage: umount Nov 12 17:49:13.298651 ignition[1002]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 12 17:49:13.298651 ignition[1002]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Nov 12 17:49:13.298651 ignition[1002]: INFO : umount: umount passed Nov 12 17:49:13.298651 ignition[1002]: INFO : Ignition finished successfully Nov 12 17:49:13.299839 systemd[1]: sysroot-boot.mount: Deactivated successfully. Nov 12 17:49:13.300263 systemd[1]: ignition-mount.service: Deactivated successfully. Nov 12 17:49:13.300341 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Nov 12 17:49:13.302694 systemd[1]: Stopped target network.target - Network. Nov 12 17:49:13.304164 systemd[1]: ignition-disks.service: Deactivated successfully. Nov 12 17:49:13.304236 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Nov 12 17:49:13.305888 systemd[1]: ignition-kargs.service: Deactivated successfully. Nov 12 17:49:13.305937 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Nov 12 17:49:13.307547 systemd[1]: ignition-setup.service: Deactivated successfully. Nov 12 17:49:13.307597 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Nov 12 17:49:13.309237 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Nov 12 17:49:13.309287 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Nov 12 17:49:13.311166 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Nov 12 17:49:13.312900 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Nov 12 17:49:13.314750 systemd[1]: sysroot-boot.service: Deactivated successfully. Nov 12 17:49:13.314862 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Nov 12 17:49:13.316587 systemd[1]: initrd-setup-root.service: Deactivated successfully. Nov 12 17:49:13.316674 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Nov 12 17:49:13.324009 systemd-networkd[767]: eth0: DHCPv6 lease lost Nov 12 17:49:13.325913 systemd[1]: systemd-resolved.service: Deactivated successfully. Nov 12 17:49:13.326049 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Nov 12 17:49:13.328637 systemd[1]: systemd-networkd.service: Deactivated successfully. Nov 12 17:49:13.328765 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Nov 12 17:49:13.331219 systemd[1]: systemd-networkd.socket: Deactivated successfully. Nov 12 17:49:13.331290 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Nov 12 17:49:13.337903 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Nov 12 17:49:13.338757 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Nov 12 17:49:13.338847 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Nov 12 17:49:13.340927 systemd[1]: systemd-sysctl.service: Deactivated successfully. Nov 12 17:49:13.340974 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Nov 12 17:49:13.342750 systemd[1]: systemd-modules-load.service: Deactivated successfully. Nov 12 17:49:13.342830 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Nov 12 17:49:13.344595 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Nov 12 17:49:13.344641 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. 
Nov 12 17:49:13.347128 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 12 17:49:13.358965 systemd[1]: network-cleanup.service: Deactivated successfully. Nov 12 17:49:13.360006 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Nov 12 17:49:13.368481 systemd[1]: systemd-udevd.service: Deactivated successfully. Nov 12 17:49:13.369575 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 12 17:49:13.371070 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Nov 12 17:49:13.371108 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Nov 12 17:49:13.373008 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Nov 12 17:49:13.373039 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Nov 12 17:49:13.374761 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Nov 12 17:49:13.374823 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Nov 12 17:49:13.377465 systemd[1]: dracut-cmdline.service: Deactivated successfully. Nov 12 17:49:13.377510 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Nov 12 17:49:13.380197 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Nov 12 17:49:13.380243 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 12 17:49:13.395982 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Nov 12 17:49:13.397005 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Nov 12 17:49:13.397066 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 12 17:49:13.399137 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Nov 12 17:49:13.399182 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Nov 12 17:49:13.401112 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Nov 12 17:49:13.401156 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Nov 12 17:49:13.403257 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 12 17:49:13.403303 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 12 17:49:13.405422 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Nov 12 17:49:13.405501 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Nov 12 17:49:13.407852 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Nov 12 17:49:13.409916 systemd[1]: Starting initrd-switch-root.service - Switch Root... Nov 12 17:49:13.418819 systemd[1]: Switching root. Nov 12 17:49:13.442670 systemd-journald[237]: Journal stopped Nov 12 17:49:14.145311 systemd-journald[237]: Received SIGTERM from PID 1 (systemd). 
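[Annotation] At this point the initrd hands control to the real root filesystem ("Switching root") and journald is stopped and restarted by the new PID 1. Once the boot has completed, the time split between the initrd and userspace phases can be broken down with systemd-analyze; a sketch:

  # Overall time spent in firmware, loader, kernel, initrd and userspace
  systemd-analyze

  # Units on the critical path of this boot
  systemd-analyze critical-chain

  # Per-unit startup times, slowest first
  systemd-analyze blame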
Nov 12 17:49:14.145365 kernel: SELinux: policy capability network_peer_controls=1 Nov 12 17:49:14.145381 kernel: SELinux: policy capability open_perms=1 Nov 12 17:49:14.145390 kernel: SELinux: policy capability extended_socket_class=1 Nov 12 17:49:14.145400 kernel: SELinux: policy capability always_check_network=0 Nov 12 17:49:14.145410 kernel: SELinux: policy capability cgroup_seclabel=1 Nov 12 17:49:14.145422 kernel: SELinux: policy capability nnp_nosuid_transition=1 Nov 12 17:49:14.145432 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Nov 12 17:49:14.145443 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Nov 12 17:49:14.145452 kernel: audit: type=1403 audit(1731433753.599:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Nov 12 17:49:14.145463 systemd[1]: Successfully loaded SELinux policy in 30.750ms. Nov 12 17:49:14.145483 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 9.189ms. Nov 12 17:49:14.145496 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Nov 12 17:49:14.145507 systemd[1]: Detected virtualization kvm. Nov 12 17:49:14.145518 systemd[1]: Detected architecture arm64. Nov 12 17:49:14.145539 systemd[1]: Detected first boot. Nov 12 17:49:14.145553 systemd[1]: Initializing machine ID from VM UUID. Nov 12 17:49:14.145563 zram_generator::config[1047]: No configuration found. Nov 12 17:49:14.145575 systemd[1]: Populated /etc with preset unit settings. Nov 12 17:49:14.145588 systemd[1]: initrd-switch-root.service: Deactivated successfully. Nov 12 17:49:14.145599 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Nov 12 17:49:14.145610 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Nov 12 17:49:14.145621 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Nov 12 17:49:14.145631 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Nov 12 17:49:14.145642 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Nov 12 17:49:14.145652 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Nov 12 17:49:14.145663 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Nov 12 17:49:14.145676 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Nov 12 17:49:14.145688 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Nov 12 17:49:14.145699 systemd[1]: Created slice user.slice - User and Session Slice. Nov 12 17:49:14.145748 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 12 17:49:14.145767 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 12 17:49:14.145800 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Nov 12 17:49:14.145813 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Nov 12 17:49:14.145824 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. 
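[Annotation] The SELinux policy load, the systemd 255 feature string, and the "Detected first boot" / "Initializing machine ID from VM UUID" lines above can be cross-checked after boot. A small sketch, assuming the usual SELinux userspace tools are available on the image:

  # SELinux mode and loaded policy
  getenforce
  sestatus

  # systemd build flags as printed in the log (255, +SELINUX, ...)
  systemctl --version

  # Machine ID derived from the VM UUID on first boot
  cat /etc/machine-id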
Nov 12 17:49:14.145836 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Nov 12 17:49:14.145850 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Nov 12 17:49:14.145862 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 12 17:49:14.145873 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Nov 12 17:49:14.145883 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Nov 12 17:49:14.145895 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Nov 12 17:49:14.145906 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Nov 12 17:49:14.145917 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 12 17:49:14.145928 systemd[1]: Reached target remote-fs.target - Remote File Systems. Nov 12 17:49:14.145940 systemd[1]: Reached target slices.target - Slice Units. Nov 12 17:49:14.145951 systemd[1]: Reached target swap.target - Swaps. Nov 12 17:49:14.145962 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Nov 12 17:49:14.145973 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Nov 12 17:49:14.145986 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Nov 12 17:49:14.145997 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Nov 12 17:49:14.146008 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Nov 12 17:49:14.146018 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Nov 12 17:49:14.146029 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Nov 12 17:49:14.146042 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Nov 12 17:49:14.146053 systemd[1]: Mounting media.mount - External Media Directory... Nov 12 17:49:14.146064 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Nov 12 17:49:14.146075 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Nov 12 17:49:14.146085 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Nov 12 17:49:14.146096 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Nov 12 17:49:14.146107 systemd[1]: Reached target machines.target - Containers. Nov 12 17:49:14.146118 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Nov 12 17:49:14.146129 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 12 17:49:14.146141 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Nov 12 17:49:14.146152 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Nov 12 17:49:14.146163 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 12 17:49:14.146174 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Nov 12 17:49:14.146184 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 12 17:49:14.146195 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Nov 12 17:49:14.146206 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... 
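[Annotation] The modprobe@*.service entries below are instances of a single template unit that loads one kernel module per instance name, and the early mounts (hugepages, mqueue, debugfs, tracefs) are the standard kernel API filesystems. A sketch for inspecting both:

  # The template behind modprobe@dm_mod.service, modprobe@drm.service, ...
  systemctl cat modprobe@.service

  # Kernel API filesystems mounted during early boot
  findmnt -t hugetlbfs,mqueue,debugfs,tracefs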
Nov 12 17:49:14.146218 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Nov 12 17:49:14.146231 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Nov 12 17:49:14.146241 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Nov 12 17:49:14.146252 kernel: fuse: init (API version 7.39) Nov 12 17:49:14.146262 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Nov 12 17:49:14.146273 kernel: loop: module loaded Nov 12 17:49:14.146283 systemd[1]: Stopped systemd-fsck-usr.service. Nov 12 17:49:14.146294 systemd[1]: Starting systemd-journald.service - Journal Service... Nov 12 17:49:14.146304 kernel: ACPI: bus type drm_connector registered Nov 12 17:49:14.146314 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Nov 12 17:49:14.146327 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Nov 12 17:49:14.146338 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Nov 12 17:49:14.146348 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Nov 12 17:49:14.146359 systemd[1]: verity-setup.service: Deactivated successfully. Nov 12 17:49:14.146370 systemd[1]: Stopped verity-setup.service. Nov 12 17:49:14.146401 systemd-journald[1115]: Collecting audit messages is disabled. Nov 12 17:49:14.146423 systemd-journald[1115]: Journal started Nov 12 17:49:14.146446 systemd-journald[1115]: Runtime Journal (/run/log/journal/b7238c766ec447ccb9547348335f69bb) is 5.9M, max 47.3M, 41.4M free. Nov 12 17:49:13.940910 systemd[1]: Queued start job for default target multi-user.target. Nov 12 17:49:13.955195 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Nov 12 17:49:13.955523 systemd[1]: systemd-journald.service: Deactivated successfully. Nov 12 17:49:14.148408 systemd[1]: Started systemd-journald.service - Journal Service. Nov 12 17:49:14.149143 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Nov 12 17:49:14.150300 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Nov 12 17:49:14.151502 systemd[1]: Mounted media.mount - External Media Directory. Nov 12 17:49:14.152616 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Nov 12 17:49:14.153860 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Nov 12 17:49:14.155085 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Nov 12 17:49:14.157841 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Nov 12 17:49:14.159188 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Nov 12 17:49:14.160656 systemd[1]: modprobe@configfs.service: Deactivated successfully. Nov 12 17:49:14.160808 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Nov 12 17:49:14.162243 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 12 17:49:14.162384 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 12 17:49:14.163741 systemd[1]: modprobe@drm.service: Deactivated successfully. Nov 12 17:49:14.163900 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Nov 12 17:49:14.165288 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 12 17:49:14.165419 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. 
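[Annotation] The journal started above is still the volatile runtime journal under /run (5.9M of a 47.3M cap); it is flushed to persistent storage by systemd-journal-flush.service a few entries further down. A sketch for checking journal placement and size:

  # Current journal disk usage (runtime and persistent)
  journalctl --disk-usage

  # Request the flush from /run/log/journal to /var/log/journal
  journalctl --flush

  # Show where the active journal files live
  ls /run/log/journal /var/log/journal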
Nov 12 17:49:14.166885 systemd[1]: modprobe@fuse.service: Deactivated successfully. Nov 12 17:49:14.167019 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Nov 12 17:49:14.168295 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 12 17:49:14.168425 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 12 17:49:14.170283 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Nov 12 17:49:14.171614 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Nov 12 17:49:14.173210 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Nov 12 17:49:14.184741 systemd[1]: Reached target network-pre.target - Preparation for Network. Nov 12 17:49:14.198974 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Nov 12 17:49:14.201047 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Nov 12 17:49:14.202117 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Nov 12 17:49:14.202156 systemd[1]: Reached target local-fs.target - Local File Systems. Nov 12 17:49:14.204046 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Nov 12 17:49:14.206231 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Nov 12 17:49:14.208283 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Nov 12 17:49:14.209400 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 12 17:49:14.210716 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Nov 12 17:49:14.214728 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Nov 12 17:49:14.215939 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 12 17:49:14.219757 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Nov 12 17:49:14.220340 systemd-journald[1115]: Time spent on flushing to /var/log/journal/b7238c766ec447ccb9547348335f69bb is 24.834ms for 859 entries. Nov 12 17:49:14.220340 systemd-journald[1115]: System Journal (/var/log/journal/b7238c766ec447ccb9547348335f69bb) is 8.0M, max 195.6M, 187.6M free. Nov 12 17:49:14.262594 systemd-journald[1115]: Received client request to flush runtime journal. Nov 12 17:49:14.262647 kernel: loop0: detected capacity change from 0 to 194512 Nov 12 17:49:14.222188 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Nov 12 17:49:14.223261 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Nov 12 17:49:14.225378 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Nov 12 17:49:14.227749 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Nov 12 17:49:14.231312 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Nov 12 17:49:14.232760 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Nov 12 17:49:14.236168 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. 
Nov 12 17:49:14.238823 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Nov 12 17:49:14.242242 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Nov 12 17:49:14.246172 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Nov 12 17:49:14.259657 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Nov 12 17:49:14.270992 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Nov 12 17:49:14.263693 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Nov 12 17:49:14.268210 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Nov 12 17:49:14.269864 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Nov 12 17:49:14.277410 udevadm[1169]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Nov 12 17:49:14.282932 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Nov 12 17:49:14.285792 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Nov 12 17:49:14.292345 systemd-tmpfiles[1159]: ACLs are not supported, ignoring. Nov 12 17:49:14.292363 systemd-tmpfiles[1159]: ACLs are not supported, ignoring. Nov 12 17:49:14.296461 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Nov 12 17:49:14.300886 kernel: loop1: detected capacity change from 0 to 114328 Nov 12 17:49:14.305023 systemd[1]: Starting systemd-sysusers.service - Create System Users... Nov 12 17:49:14.331819 systemd[1]: Finished systemd-sysusers.service - Create System Users. Nov 12 17:49:14.336634 kernel: loop2: detected capacity change from 0 to 114432 Nov 12 17:49:14.340957 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Nov 12 17:49:14.353937 systemd-tmpfiles[1182]: ACLs are not supported, ignoring. Nov 12 17:49:14.353955 systemd-tmpfiles[1182]: ACLs are not supported, ignoring. Nov 12 17:49:14.357702 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 12 17:49:14.386808 kernel: loop3: detected capacity change from 0 to 194512 Nov 12 17:49:14.392827 kernel: loop4: detected capacity change from 0 to 114328 Nov 12 17:49:14.398794 kernel: loop5: detected capacity change from 0 to 114432 Nov 12 17:49:14.401852 (sd-merge)[1187]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Nov 12 17:49:14.402277 (sd-merge)[1187]: Merged extensions into '/usr'. Nov 12 17:49:14.405583 systemd[1]: Reloading requested from client PID 1158 ('systemd-sysext') (unit systemd-sysext.service)... Nov 12 17:49:14.405602 systemd[1]: Reloading... Nov 12 17:49:14.454817 zram_generator::config[1211]: No configuration found. Nov 12 17:49:14.504320 ldconfig[1153]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Nov 12 17:49:14.547947 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 12 17:49:14.583454 systemd[1]: Reloading finished in 177 ms. Nov 12 17:49:14.610085 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. 
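[Annotation] The (sd-merge) lines above show systemd-sysext overlaying the containerd-flatcar, docker-flatcar and kubernetes extension images onto /usr; the kubernetes image is the one Ignition linked into /etc/extensions earlier. A sketch for inspecting the merge:

  # Hierarchies with merged extensions and the images backing them
  systemd-sysext status

  # Extension images picked up from /etc/extensions
  ls -l /etc/extensions

  # Re-merge after adding or removing an extension image
  systemd-sysext refresh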
Nov 12 17:49:14.611496 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Nov 12 17:49:14.629095 systemd[1]: Starting ensure-sysext.service... Nov 12 17:49:14.630912 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Nov 12 17:49:14.639736 systemd[1]: Reloading requested from client PID 1247 ('systemctl') (unit ensure-sysext.service)... Nov 12 17:49:14.639753 systemd[1]: Reloading... Nov 12 17:49:14.647702 systemd-tmpfiles[1248]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Nov 12 17:49:14.648055 systemd-tmpfiles[1248]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Nov 12 17:49:14.648715 systemd-tmpfiles[1248]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Nov 12 17:49:14.648951 systemd-tmpfiles[1248]: ACLs are not supported, ignoring. Nov 12 17:49:14.649006 systemd-tmpfiles[1248]: ACLs are not supported, ignoring. Nov 12 17:49:14.652244 systemd-tmpfiles[1248]: Detected autofs mount point /boot during canonicalization of boot. Nov 12 17:49:14.652257 systemd-tmpfiles[1248]: Skipping /boot Nov 12 17:49:14.659223 systemd-tmpfiles[1248]: Detected autofs mount point /boot during canonicalization of boot. Nov 12 17:49:14.659237 systemd-tmpfiles[1248]: Skipping /boot Nov 12 17:49:14.685807 zram_generator::config[1275]: No configuration found. Nov 12 17:49:14.764304 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 12 17:49:14.799194 systemd[1]: Reloading finished in 159 ms. Nov 12 17:49:14.812643 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Nov 12 17:49:14.826177 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 12 17:49:14.834078 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Nov 12 17:49:14.837148 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Nov 12 17:49:14.839303 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Nov 12 17:49:14.846027 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Nov 12 17:49:14.851133 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 12 17:49:14.855982 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Nov 12 17:49:14.859403 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 12 17:49:14.860566 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 12 17:49:14.862634 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 12 17:49:14.867757 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 12 17:49:14.869024 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 12 17:49:14.873673 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Nov 12 17:49:14.876817 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Nov 12 17:49:14.878470 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. 
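[Annotation] The "Duplicate line for path ..., ignoring" messages below come from overlapping tmpfiles.d entries shipped by different packages; they are harmless, and the competing definitions can be traced back to their source files. A sketch:

  # Dump the merged tmpfiles.d configuration with per-file headers
  systemd-tmpfiles --cat-config | less

  # Show only the entries competing for one of the flagged paths
  systemd-tmpfiles --cat-config | grep -n '/var/log/journal'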
Nov 12 17:49:14.878602 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 12 17:49:14.880370 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 12 17:49:14.880495 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 12 17:49:14.889171 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 12 17:49:14.891401 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 12 17:49:14.893359 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 12 17:49:14.902000 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 12 17:49:14.902573 systemd-udevd[1322]: Using default interface naming scheme 'v255'. Nov 12 17:49:14.907015 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 12 17:49:14.908241 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 12 17:49:14.911077 systemd[1]: Starting systemd-update-done.service - Update is Completed... Nov 12 17:49:14.912880 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Nov 12 17:49:14.916379 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Nov 12 17:49:14.920155 augenrules[1343]: No rules Nov 12 17:49:14.920204 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 12 17:49:14.920324 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 12 17:49:14.922210 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Nov 12 17:49:14.923869 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 12 17:49:14.923989 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 12 17:49:14.925538 systemd[1]: Finished systemd-update-done.service - Update is Completed. Nov 12 17:49:14.929710 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 12 17:49:14.932853 systemd[1]: Started systemd-userdbd.service - User Database Manager. Nov 12 17:49:14.950110 systemd[1]: Finished ensure-sysext.service. Nov 12 17:49:14.958231 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 12 17:49:14.967935 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 12 17:49:14.973807 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1372) Nov 12 17:49:14.973866 kernel: BTRFS info: devid 1 device path /dev/dm-0 changed to /dev/mapper/usr scanned by (udev-worker) (1372) Nov 12 17:49:14.971843 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Nov 12 17:49:14.976689 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 12 17:49:14.979250 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 12 17:49:14.981041 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 12 17:49:14.982853 systemd[1]: Starting systemd-networkd.service - Network Configuration... Nov 12 17:49:14.985999 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... 
Nov 12 17:49:14.987091 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Nov 12 17:49:14.987579 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 12 17:49:14.989405 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 12 17:49:14.990815 systemd[1]: modprobe@drm.service: Deactivated successfully. Nov 12 17:49:14.990939 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Nov 12 17:49:14.992249 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 12 17:49:14.992379 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 12 17:49:14.994283 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 12 17:49:14.995931 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 12 17:49:15.001477 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Nov 12 17:49:15.002872 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 12 17:49:15.002932 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Nov 12 17:49:15.007674 systemd-resolved[1316]: Positive Trust Anchors: Nov 12 17:49:15.009887 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (1365) Nov 12 17:49:15.007855 systemd-resolved[1316]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Nov 12 17:49:15.007888 systemd-resolved[1316]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Nov 12 17:49:15.015605 systemd-resolved[1316]: Defaulting to hostname 'linux'. Nov 12 17:49:15.021114 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Nov 12 17:49:15.022609 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Nov 12 17:49:15.055304 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Nov 12 17:49:15.059918 systemd-networkd[1384]: lo: Link UP Nov 12 17:49:15.059930 systemd-networkd[1384]: lo: Gained carrier Nov 12 17:49:15.060814 systemd-networkd[1384]: Enumeration completed Nov 12 17:49:15.061956 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Nov 12 17:49:15.062342 systemd-networkd[1384]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 12 17:49:15.062350 systemd-networkd[1384]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Nov 12 17:49:15.063571 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. 
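[Annotation] systemd-resolved falls back to the hostname 'linux' here because nothing has set one yet, and systemd-networkd matches eth0 against the stock zz-default.network file named in the log. A sketch for inspecting both services (resolvectl and networkctl ship with systemd):

  # Global and per-link DNS configuration held by systemd-resolved
  resolvectl status

  # Links known to systemd-networkd and the .network file each one matched
  networkctl list
  networkctl status eth0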
Nov 12 17:49:15.065015 systemd-networkd[1384]: eth0: Link UP Nov 12 17:49:15.065024 systemd-networkd[1384]: eth0: Gained carrier Nov 12 17:49:15.065037 systemd-networkd[1384]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 12 17:49:15.065170 systemd[1]: Started systemd-networkd.service - Network Configuration. Nov 12 17:49:15.066401 systemd[1]: Reached target network.target - Network. Nov 12 17:49:15.067402 systemd[1]: Reached target time-set.target - System Time Set. Nov 12 17:49:15.069685 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Nov 12 17:49:15.081734 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Nov 12 17:49:15.086000 systemd-networkd[1384]: eth0: DHCPv4 address 10.0.0.80/16, gateway 10.0.0.1 acquired from 10.0.0.1 Nov 12 17:49:15.086697 systemd-timesyncd[1385]: Network configuration changed, trying to establish connection. Nov 12 17:49:15.087313 systemd-timesyncd[1385]: Contacted time server 10.0.0.1:123 (10.0.0.1). Nov 12 17:49:15.087373 systemd-timesyncd[1385]: Initial clock synchronization to Tue 2024-11-12 17:49:15.371194 UTC. Nov 12 17:49:15.109028 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 12 17:49:15.116936 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Nov 12 17:49:15.121200 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Nov 12 17:49:15.144390 lvm[1407]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Nov 12 17:49:15.161849 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 12 17:49:15.185692 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Nov 12 17:49:15.188324 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Nov 12 17:49:15.189473 systemd[1]: Reached target sysinit.target - System Initialization. Nov 12 17:49:15.190625 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Nov 12 17:49:15.191840 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Nov 12 17:49:15.193173 systemd[1]: Started logrotate.timer - Daily rotation of log files. Nov 12 17:49:15.194447 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Nov 12 17:49:15.195655 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Nov 12 17:49:15.196858 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Nov 12 17:49:15.196894 systemd[1]: Reached target paths.target - Path Units. Nov 12 17:49:15.197749 systemd[1]: Reached target timers.target - Timer Units. Nov 12 17:49:15.199404 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Nov 12 17:49:15.201657 systemd[1]: Starting docker.socket - Docker Socket for the API... Nov 12 17:49:15.206471 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Nov 12 17:49:15.208734 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Nov 12 17:49:15.210345 systemd[1]: Listening on docker.socket - Docker Socket for the API. 
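[Annotation] The DHCPv4 lease (10.0.0.80/16) and the NTP handshake with 10.0.0.1 logged above can be confirmed with timedatectl, and the deprecation warning about systemd-udev-settle.service points at the lvm2 units that still pull it in. A sketch:

  # NTP server, poll interval and current offset as seen by systemd-timesyncd
  timedatectl timesync-status

  # Address and lease state on eth0
  ip -4 addr show dev eth0

  # Units that still depend on the deprecated systemd-udev-settle.service
  systemctl list-dependencies --reverse systemd-udev-settle.service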
Nov 12 17:49:15.211486 systemd[1]: Reached target sockets.target - Socket Units. Nov 12 17:49:15.212429 systemd[1]: Reached target basic.target - Basic System. Nov 12 17:49:15.213366 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Nov 12 17:49:15.213400 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Nov 12 17:49:15.214288 systemd[1]: Starting containerd.service - containerd container runtime... Nov 12 17:49:15.216309 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Nov 12 17:49:15.218929 lvm[1414]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Nov 12 17:49:15.220863 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Nov 12 17:49:15.225029 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Nov 12 17:49:15.227491 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Nov 12 17:49:15.228556 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Nov 12 17:49:15.230950 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Nov 12 17:49:15.233750 jq[1417]: false Nov 12 17:49:15.235897 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Nov 12 17:49:15.238372 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Nov 12 17:49:15.242653 systemd[1]: Starting systemd-logind.service - User Login Management... Nov 12 17:49:15.246506 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Nov 12 17:49:15.246964 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Nov 12 17:49:15.250878 systemd[1]: Starting update-engine.service - Update Engine... Nov 12 17:49:15.253886 dbus-daemon[1416]: [system] SELinux support is enabled Nov 12 17:49:15.257203 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Nov 12 17:49:15.259267 extend-filesystems[1418]: Found loop3 Nov 12 17:49:15.259976 systemd[1]: Started dbus.service - D-Bus System Message Bus. Nov 12 17:49:15.260311 extend-filesystems[1418]: Found loop4 Nov 12 17:49:15.262371 extend-filesystems[1418]: Found loop5 Nov 12 17:49:15.262371 extend-filesystems[1418]: Found vda Nov 12 17:49:15.262371 extend-filesystems[1418]: Found vda1 Nov 12 17:49:15.262371 extend-filesystems[1418]: Found vda2 Nov 12 17:49:15.262371 extend-filesystems[1418]: Found vda3 Nov 12 17:49:15.262371 extend-filesystems[1418]: Found usr Nov 12 17:49:15.262371 extend-filesystems[1418]: Found vda4 Nov 12 17:49:15.262371 extend-filesystems[1418]: Found vda6 Nov 12 17:49:15.262371 extend-filesystems[1418]: Found vda7 Nov 12 17:49:15.262371 extend-filesystems[1418]: Found vda9 Nov 12 17:49:15.262371 extend-filesystems[1418]: Checking size of /dev/vda9 Nov 12 17:49:15.263409 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Nov 12 17:49:15.275734 jq[1432]: true Nov 12 17:49:15.267499 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Nov 12 17:49:15.270884 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. 
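[Annotation] extend-filesystems is walking the block devices here before growing the root filesystem (the actual online resize is logged just below). The same layout can be viewed directly; a sketch:

  # Partitions, filesystem labels and mount points on the boot disk
  lsblk -f /dev/vda

  # The root filesystem that extend-filesystems is about to grow
  findmnt /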
Nov 12 17:49:15.271200 systemd[1]: motdgen.service: Deactivated successfully. Nov 12 17:49:15.271343 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Nov 12 17:49:15.273943 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Nov 12 17:49:15.274086 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Nov 12 17:49:15.288967 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Nov 12 17:49:15.289016 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Nov 12 17:49:15.292080 extend-filesystems[1418]: Resized partition /dev/vda9 Nov 12 17:49:15.292132 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Nov 12 17:49:15.292162 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Nov 12 17:49:15.300764 update_engine[1430]: I20241112 17:49:15.299513 1430 main.cc:92] Flatcar Update Engine starting Nov 12 17:49:15.310164 tar[1437]: linux-arm64/helm Nov 12 17:49:15.310380 extend-filesystems[1451]: resize2fs 1.47.1 (20-May-2024) Nov 12 17:49:15.312538 jq[1438]: true Nov 12 17:49:15.312618 update_engine[1430]: I20241112 17:49:15.310143 1430 update_check_scheduler.cc:74] Next update check in 5m1s Nov 12 17:49:15.312020 (ntainerd)[1440]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Nov 12 17:49:15.312920 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (1358) Nov 12 17:49:15.312942 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Nov 12 17:49:15.312088 systemd-logind[1426]: Watching system buttons on /dev/input/event0 (Power Button) Nov 12 17:49:15.314054 systemd[1]: Started update-engine.service - Update Engine. Nov 12 17:49:15.314292 systemd-logind[1426]: New seat seat0. Nov 12 17:49:15.315995 systemd[1]: Started systemd-logind.service - User Login Management. Nov 12 17:49:15.329113 systemd[1]: Started locksmithd.service - Cluster reboot manager. Nov 12 17:49:15.347798 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Nov 12 17:49:15.362796 extend-filesystems[1451]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Nov 12 17:49:15.362796 extend-filesystems[1451]: old_desc_blocks = 1, new_desc_blocks = 1 Nov 12 17:49:15.362796 extend-filesystems[1451]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Nov 12 17:49:15.369636 extend-filesystems[1418]: Resized filesystem in /dev/vda9 Nov 12 17:49:15.371072 systemd[1]: extend-filesystems.service: Deactivated successfully. Nov 12 17:49:15.371337 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Nov 12 17:49:15.392312 locksmithd[1456]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Nov 12 17:49:15.397232 bash[1470]: Updated "/home/core/.ssh/authorized_keys" Nov 12 17:49:15.399229 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Nov 12 17:49:15.401236 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. 
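[Annotation] update-engine schedules its first update check about five minutes in, and locksmithd coordinates the reboot an update would trigger (strategy "reboot" here). Assuming the usual Flatcar client tools are present on the PATH, both can be queried; a sketch:

  # Current update-engine state (idle, downloading, waiting for reboot, ...)
  update_engine_client -status

  # Reboot strategy and any held reboot locks
  locksmithctl status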
Nov 12 17:49:15.517030 containerd[1440]: time="2024-11-12T17:49:15.516946360Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Nov 12 17:49:15.551175 containerd[1440]: time="2024-11-12T17:49:15.551076040Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Nov 12 17:49:15.553396 containerd[1440]: time="2024-11-12T17:49:15.553243000Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.60-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Nov 12 17:49:15.553396 containerd[1440]: time="2024-11-12T17:49:15.553274880Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Nov 12 17:49:15.553396 containerd[1440]: time="2024-11-12T17:49:15.553290040Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Nov 12 17:49:15.553507 containerd[1440]: time="2024-11-12T17:49:15.553440600Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Nov 12 17:49:15.553507 containerd[1440]: time="2024-11-12T17:49:15.553458080Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Nov 12 17:49:15.553564 containerd[1440]: time="2024-11-12T17:49:15.553508840Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Nov 12 17:49:15.553564 containerd[1440]: time="2024-11-12T17:49:15.553532040Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Nov 12 17:49:15.553757 containerd[1440]: time="2024-11-12T17:49:15.553707200Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Nov 12 17:49:15.553757 containerd[1440]: time="2024-11-12T17:49:15.553726680Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Nov 12 17:49:15.553757 containerd[1440]: time="2024-11-12T17:49:15.553745280Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Nov 12 17:49:15.553757 containerd[1440]: time="2024-11-12T17:49:15.553754480Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Nov 12 17:49:15.553878 containerd[1440]: time="2024-11-12T17:49:15.553853000Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Nov 12 17:49:15.554063 containerd[1440]: time="2024-11-12T17:49:15.554042880Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Nov 12 17:49:15.554165 containerd[1440]: time="2024-11-12T17:49:15.554147040Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Nov 12 17:49:15.554165 containerd[1440]: time="2024-11-12T17:49:15.554164520Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Nov 12 17:49:15.554249 containerd[1440]: time="2024-11-12T17:49:15.554235440Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Nov 12 17:49:15.554303 containerd[1440]: time="2024-11-12T17:49:15.554287160Z" level=info msg="metadata content store policy set" policy=shared Nov 12 17:49:15.558321 containerd[1440]: time="2024-11-12T17:49:15.558292320Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Nov 12 17:49:15.558385 containerd[1440]: time="2024-11-12T17:49:15.558345920Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Nov 12 17:49:15.558385 containerd[1440]: time="2024-11-12T17:49:15.558360720Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Nov 12 17:49:15.558385 containerd[1440]: time="2024-11-12T17:49:15.558377440Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Nov 12 17:49:15.558440 containerd[1440]: time="2024-11-12T17:49:15.558392760Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Nov 12 17:49:15.558574 containerd[1440]: time="2024-11-12T17:49:15.558541960Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Nov 12 17:49:15.558896 containerd[1440]: time="2024-11-12T17:49:15.558864960Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Nov 12 17:49:15.561286 containerd[1440]: time="2024-11-12T17:49:15.559135200Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Nov 12 17:49:15.561286 containerd[1440]: time="2024-11-12T17:49:15.559159800Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Nov 12 17:49:15.561286 containerd[1440]: time="2024-11-12T17:49:15.559173000Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Nov 12 17:49:15.561286 containerd[1440]: time="2024-11-12T17:49:15.559187200Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Nov 12 17:49:15.561286 containerd[1440]: time="2024-11-12T17:49:15.559199720Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Nov 12 17:49:15.561286 containerd[1440]: time="2024-11-12T17:49:15.559213760Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Nov 12 17:49:15.561286 containerd[1440]: time="2024-11-12T17:49:15.559232280Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Nov 12 17:49:15.561286 containerd[1440]: time="2024-11-12T17:49:15.559248320Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." 
type=io.containerd.service.v1 Nov 12 17:49:15.561286 containerd[1440]: time="2024-11-12T17:49:15.559261240Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Nov 12 17:49:15.561286 containerd[1440]: time="2024-11-12T17:49:15.559283520Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Nov 12 17:49:15.561286 containerd[1440]: time="2024-11-12T17:49:15.559296880Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Nov 12 17:49:15.561286 containerd[1440]: time="2024-11-12T17:49:15.559317320Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Nov 12 17:49:15.561286 containerd[1440]: time="2024-11-12T17:49:15.559330920Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Nov 12 17:49:15.561286 containerd[1440]: time="2024-11-12T17:49:15.559343280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Nov 12 17:49:15.561588 containerd[1440]: time="2024-11-12T17:49:15.559355840Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Nov 12 17:49:15.561588 containerd[1440]: time="2024-11-12T17:49:15.559368200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Nov 12 17:49:15.561588 containerd[1440]: time="2024-11-12T17:49:15.559380440Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Nov 12 17:49:15.561588 containerd[1440]: time="2024-11-12T17:49:15.559391800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Nov 12 17:49:15.561588 containerd[1440]: time="2024-11-12T17:49:15.559404400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Nov 12 17:49:15.561588 containerd[1440]: time="2024-11-12T17:49:15.559416440Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Nov 12 17:49:15.561588 containerd[1440]: time="2024-11-12T17:49:15.559430680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Nov 12 17:49:15.561588 containerd[1440]: time="2024-11-12T17:49:15.559442080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Nov 12 17:49:15.561588 containerd[1440]: time="2024-11-12T17:49:15.559453000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Nov 12 17:49:15.561588 containerd[1440]: time="2024-11-12T17:49:15.559464960Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Nov 12 17:49:15.561588 containerd[1440]: time="2024-11-12T17:49:15.559479800Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Nov 12 17:49:15.561588 containerd[1440]: time="2024-11-12T17:49:15.559501200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Nov 12 17:49:15.561588 containerd[1440]: time="2024-11-12T17:49:15.559512520Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." 
type=io.containerd.grpc.v1 Nov 12 17:49:15.561588 containerd[1440]: time="2024-11-12T17:49:15.559532000Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Nov 12 17:49:15.561839 containerd[1440]: time="2024-11-12T17:49:15.559640600Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Nov 12 17:49:15.561839 containerd[1440]: time="2024-11-12T17:49:15.559656040Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Nov 12 17:49:15.561839 containerd[1440]: time="2024-11-12T17:49:15.559666920Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Nov 12 17:49:15.561839 containerd[1440]: time="2024-11-12T17:49:15.559678400Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Nov 12 17:49:15.561839 containerd[1440]: time="2024-11-12T17:49:15.559687960Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Nov 12 17:49:15.561839 containerd[1440]: time="2024-11-12T17:49:15.559701960Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Nov 12 17:49:15.561839 containerd[1440]: time="2024-11-12T17:49:15.559711360Z" level=info msg="NRI interface is disabled by configuration." Nov 12 17:49:15.561839 containerd[1440]: time="2024-11-12T17:49:15.559721000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Nov 12 17:49:15.561975 containerd[1440]: time="2024-11-12T17:49:15.560055280Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false 
X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Nov 12 17:49:15.561975 containerd[1440]: time="2024-11-12T17:49:15.560110600Z" level=info msg="Connect containerd service" Nov 12 17:49:15.561975 containerd[1440]: time="2024-11-12T17:49:15.560212440Z" level=info msg="using legacy CRI server" Nov 12 17:49:15.561975 containerd[1440]: time="2024-11-12T17:49:15.560220280Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Nov 12 17:49:15.561975 containerd[1440]: time="2024-11-12T17:49:15.560299000Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Nov 12 17:49:15.561975 containerd[1440]: time="2024-11-12T17:49:15.560962800Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Nov 12 17:49:15.562331 containerd[1440]: time="2024-11-12T17:49:15.562291360Z" level=info msg="Start subscribing containerd event" Nov 12 17:49:15.562410 containerd[1440]: time="2024-11-12T17:49:15.562396600Z" level=info msg="Start recovering state" Nov 12 17:49:15.562515 containerd[1440]: time="2024-11-12T17:49:15.562500280Z" level=info msg="Start event monitor" Nov 12 17:49:15.562590 containerd[1440]: time="2024-11-12T17:49:15.562576280Z" level=info msg="Start snapshots syncer" Nov 12 17:49:15.562639 containerd[1440]: time="2024-11-12T17:49:15.562627880Z" level=info msg="Start cni network conf syncer for default" Nov 12 17:49:15.562686 containerd[1440]: time="2024-11-12T17:49:15.562674320Z" level=info msg="Start streaming server" Nov 12 17:49:15.563849 containerd[1440]: time="2024-11-12T17:49:15.563822680Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Nov 12 17:49:15.563998 containerd[1440]: time="2024-11-12T17:49:15.563983320Z" level=info msg=serving... address=/run/containerd/containerd.sock Nov 12 17:49:15.564145 containerd[1440]: time="2024-11-12T17:49:15.564131240Z" level=info msg="containerd successfully booted in 0.048764s" Nov 12 17:49:15.564223 systemd[1]: Started containerd.service - containerd container runtime. Nov 12 17:49:15.677921 tar[1437]: linux-arm64/LICENSE Nov 12 17:49:15.678018 tar[1437]: linux-arm64/README.md Nov 12 17:49:15.684059 sshd_keygen[1439]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Nov 12 17:49:15.689902 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Nov 12 17:49:15.703393 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Nov 12 17:49:15.713010 systemd[1]: Starting issuegen.service - Generate /run/issue... Nov 12 17:49:15.718160 systemd[1]: issuegen.service: Deactivated successfully. 
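[Editor's note] The containerd entries above dump the effective CRI plugin configuration (overlayfs snapshotter, runc via io.containerd.runc.v2 with SystemdCgroup enabled, sandbox image registry.k8s.io/pause:3.8, CNI binaries under /opt/cni/bin and CNI configs under /etc/cni/net.d); the "failed to load cni during init" error only means /etc/cni/net.d is still empty at this point and clears once a network add-on installs a conflist there. Purely as an illustration, and not the configuration file actually present on this host, the same values expressed as a containerd v2 config.toml fragment would look roughly like:

    # Illustrative /etc/containerd/config.toml fragment matching the values logged above;
    # not the file shipped on this host.
    version = 2
    [plugins."io.containerd.grpc.v1.cri"]
      sandbox_image = "registry.k8s.io/pause:3.8"
      [plugins."io.containerd.grpc.v1.cri".containerd]
        snapshotter = "overlayfs"
        default_runtime_name = "runc"
        [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
          runtime_type = "io.containerd.runc.v2"
          [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
            SystemdCgroup = true
      [plugins."io.containerd.grpc.v1.cri".cni]
        bin_dir = "/opt/cni/bin"
        conf_dir = "/etc/cni/net.d"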
Nov 12 17:49:15.718344 systemd[1]: Finished issuegen.service - Generate /run/issue. Nov 12 17:49:15.721955 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Nov 12 17:49:15.733896 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Nov 12 17:49:15.738052 systemd[1]: Started getty@tty1.service - Getty on tty1. Nov 12 17:49:15.741014 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Nov 12 17:49:15.742320 systemd[1]: Reached target getty.target - Login Prompts. Nov 12 17:49:16.140256 systemd-networkd[1384]: eth0: Gained IPv6LL Nov 12 17:49:16.142918 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Nov 12 17:49:16.144737 systemd[1]: Reached target network-online.target - Network is Online. Nov 12 17:49:16.157315 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Nov 12 17:49:16.159664 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 12 17:49:16.161758 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Nov 12 17:49:16.176714 systemd[1]: coreos-metadata.service: Deactivated successfully. Nov 12 17:49:16.176912 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Nov 12 17:49:16.178552 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Nov 12 17:49:16.189882 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Nov 12 17:49:16.765807 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 12 17:49:16.767357 systemd[1]: Reached target multi-user.target - Multi-User System. Nov 12 17:49:16.770457 (kubelet)[1529]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 12 17:49:16.772931 systemd[1]: Startup finished in 561ms (kernel) + 4.895s (initrd) + 3.207s (userspace) = 8.664s. Nov 12 17:49:17.306976 kubelet[1529]: E1112 17:49:17.306886 1529 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 12 17:49:17.309673 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 12 17:49:17.309854 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 12 17:49:21.680475 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Nov 12 17:49:21.681583 systemd[1]: Started sshd@0-10.0.0.80:22-10.0.0.1:55890.service - OpenSSH per-connection server daemon (10.0.0.1:55890). Nov 12 17:49:21.734809 sshd[1543]: Accepted publickey for core from 10.0.0.1 port 55890 ssh2: RSA SHA256:0/Njp3Vk1MHv0WcCO9/UA+beq4MlL3BRl9mBP4xwGAg Nov 12 17:49:21.736303 sshd[1543]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 17:49:21.751148 systemd-logind[1426]: New session 1 of user core. Nov 12 17:49:21.752239 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Nov 12 17:49:21.763064 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Nov 12 17:49:21.771239 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Nov 12 17:49:21.774307 systemd[1]: Starting user@500.service - User Manager for UID 500... 
Nov 12 17:49:21.780315 (systemd)[1547]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Nov 12 17:49:21.849609 systemd[1547]: Queued start job for default target default.target. Nov 12 17:49:21.860713 systemd[1547]: Created slice app.slice - User Application Slice. Nov 12 17:49:21.860754 systemd[1547]: Reached target paths.target - Paths. Nov 12 17:49:21.860765 systemd[1547]: Reached target timers.target - Timers. Nov 12 17:49:21.861953 systemd[1547]: Starting dbus.socket - D-Bus User Message Bus Socket... Nov 12 17:49:21.870387 systemd[1547]: Listening on dbus.socket - D-Bus User Message Bus Socket. Nov 12 17:49:21.870446 systemd[1547]: Reached target sockets.target - Sockets. Nov 12 17:49:21.870458 systemd[1547]: Reached target basic.target - Basic System. Nov 12 17:49:21.870492 systemd[1547]: Reached target default.target - Main User Target. Nov 12 17:49:21.870515 systemd[1547]: Startup finished in 85ms. Nov 12 17:49:21.870750 systemd[1]: Started user@500.service - User Manager for UID 500. Nov 12 17:49:21.872454 systemd[1]: Started session-1.scope - Session 1 of User core. Nov 12 17:49:21.938958 systemd[1]: Started sshd@1-10.0.0.80:22-10.0.0.1:55898.service - OpenSSH per-connection server daemon (10.0.0.1:55898). Nov 12 17:49:21.970461 sshd[1558]: Accepted publickey for core from 10.0.0.1 port 55898 ssh2: RSA SHA256:0/Njp3Vk1MHv0WcCO9/UA+beq4MlL3BRl9mBP4xwGAg Nov 12 17:49:21.971642 sshd[1558]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 17:49:21.975316 systemd-logind[1426]: New session 2 of user core. Nov 12 17:49:21.987960 systemd[1]: Started session-2.scope - Session 2 of User core. Nov 12 17:49:22.038949 sshd[1558]: pam_unix(sshd:session): session closed for user core Nov 12 17:49:22.049051 systemd[1]: sshd@1-10.0.0.80:22-10.0.0.1:55898.service: Deactivated successfully. Nov 12 17:49:22.050451 systemd[1]: session-2.scope: Deactivated successfully. Nov 12 17:49:22.051720 systemd-logind[1426]: Session 2 logged out. Waiting for processes to exit. Nov 12 17:49:22.052841 systemd[1]: Started sshd@2-10.0.0.80:22-10.0.0.1:55904.service - OpenSSH per-connection server daemon (10.0.0.1:55904). Nov 12 17:49:22.053632 systemd-logind[1426]: Removed session 2. Nov 12 17:49:22.084249 sshd[1565]: Accepted publickey for core from 10.0.0.1 port 55904 ssh2: RSA SHA256:0/Njp3Vk1MHv0WcCO9/UA+beq4MlL3BRl9mBP4xwGAg Nov 12 17:49:22.085387 sshd[1565]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 17:49:22.089292 systemd-logind[1426]: New session 3 of user core. Nov 12 17:49:22.102921 systemd[1]: Started session-3.scope - Session 3 of User core. Nov 12 17:49:22.151455 sshd[1565]: pam_unix(sshd:session): session closed for user core Nov 12 17:49:22.165045 systemd[1]: sshd@2-10.0.0.80:22-10.0.0.1:55904.service: Deactivated successfully. Nov 12 17:49:22.166367 systemd[1]: session-3.scope: Deactivated successfully. Nov 12 17:49:22.168880 systemd-logind[1426]: Session 3 logged out. Waiting for processes to exit. Nov 12 17:49:22.169977 systemd[1]: Started sshd@3-10.0.0.80:22-10.0.0.1:55912.service - OpenSSH per-connection server daemon (10.0.0.1:55912). Nov 12 17:49:22.170696 systemd-logind[1426]: Removed session 3. 
Nov 12 17:49:22.200679 sshd[1572]: Accepted publickey for core from 10.0.0.1 port 55912 ssh2: RSA SHA256:0/Njp3Vk1MHv0WcCO9/UA+beq4MlL3BRl9mBP4xwGAg Nov 12 17:49:22.202043 sshd[1572]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 17:49:22.205827 systemd-logind[1426]: New session 4 of user core. Nov 12 17:49:22.215005 systemd[1]: Started session-4.scope - Session 4 of User core. Nov 12 17:49:22.266187 sshd[1572]: pam_unix(sshd:session): session closed for user core Nov 12 17:49:22.277053 systemd[1]: sshd@3-10.0.0.80:22-10.0.0.1:55912.service: Deactivated successfully. Nov 12 17:49:22.278386 systemd[1]: session-4.scope: Deactivated successfully. Nov 12 17:49:22.280854 systemd-logind[1426]: Session 4 logged out. Waiting for processes to exit. Nov 12 17:49:22.289145 systemd[1]: Started sshd@4-10.0.0.80:22-10.0.0.1:55924.service - OpenSSH per-connection server daemon (10.0.0.1:55924). Nov 12 17:49:22.292005 systemd-logind[1426]: Removed session 4. Nov 12 17:49:22.317927 sshd[1579]: Accepted publickey for core from 10.0.0.1 port 55924 ssh2: RSA SHA256:0/Njp3Vk1MHv0WcCO9/UA+beq4MlL3BRl9mBP4xwGAg Nov 12 17:49:22.319112 sshd[1579]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 17:49:22.324177 systemd-logind[1426]: New session 5 of user core. Nov 12 17:49:22.332958 systemd[1]: Started session-5.scope - Session 5 of User core. Nov 12 17:49:22.400638 sudo[1582]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Nov 12 17:49:22.400944 sudo[1582]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 12 17:49:22.412625 sudo[1582]: pam_unix(sudo:session): session closed for user root Nov 12 17:49:22.414364 sshd[1579]: pam_unix(sshd:session): session closed for user core Nov 12 17:49:22.429199 systemd[1]: sshd@4-10.0.0.80:22-10.0.0.1:55924.service: Deactivated successfully. Nov 12 17:49:22.430697 systemd[1]: session-5.scope: Deactivated successfully. Nov 12 17:49:22.433940 systemd-logind[1426]: Session 5 logged out. Waiting for processes to exit. Nov 12 17:49:22.435133 systemd[1]: Started sshd@5-10.0.0.80:22-10.0.0.1:40614.service - OpenSSH per-connection server daemon (10.0.0.1:40614). Nov 12 17:49:22.436159 systemd-logind[1426]: Removed session 5. Nov 12 17:49:22.467360 sshd[1587]: Accepted publickey for core from 10.0.0.1 port 40614 ssh2: RSA SHA256:0/Njp3Vk1MHv0WcCO9/UA+beq4MlL3BRl9mBP4xwGAg Nov 12 17:49:22.469050 sshd[1587]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 17:49:22.473916 systemd-logind[1426]: New session 6 of user core. Nov 12 17:49:22.482047 systemd[1]: Started session-6.scope - Session 6 of User core. Nov 12 17:49:22.532686 sudo[1591]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Nov 12 17:49:22.533312 sudo[1591]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 12 17:49:22.539605 sudo[1591]: pam_unix(sudo:session): session closed for user root Nov 12 17:49:22.544070 sudo[1590]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Nov 12 17:49:22.544312 sudo[1590]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 12 17:49:22.563059 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Nov 12 17:49:22.564527 auditctl[1594]: No rules Nov 12 17:49:22.564693 systemd[1]: audit-rules.service: Deactivated successfully. 
Nov 12 17:49:22.564880 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Nov 12 17:49:22.566826 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Nov 12 17:49:22.590179 augenrules[1612]: No rules Nov 12 17:49:22.591228 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Nov 12 17:49:22.593229 sudo[1590]: pam_unix(sudo:session): session closed for user root Nov 12 17:49:22.594643 sshd[1587]: pam_unix(sshd:session): session closed for user core Nov 12 17:49:22.607948 systemd[1]: sshd@5-10.0.0.80:22-10.0.0.1:40614.service: Deactivated successfully. Nov 12 17:49:22.609297 systemd[1]: session-6.scope: Deactivated successfully. Nov 12 17:49:22.612122 systemd-logind[1426]: Session 6 logged out. Waiting for processes to exit. Nov 12 17:49:22.613464 systemd-logind[1426]: Removed session 6. Nov 12 17:49:22.614994 systemd[1]: Started sshd@6-10.0.0.80:22-10.0.0.1:40620.service - OpenSSH per-connection server daemon (10.0.0.1:40620). Nov 12 17:49:22.650970 sshd[1620]: Accepted publickey for core from 10.0.0.1 port 40620 ssh2: RSA SHA256:0/Njp3Vk1MHv0WcCO9/UA+beq4MlL3BRl9mBP4xwGAg Nov 12 17:49:22.651404 sshd[1620]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 17:49:22.655179 systemd-logind[1426]: New session 7 of user core. Nov 12 17:49:22.668936 systemd[1]: Started session-7.scope - Session 7 of User core. Nov 12 17:49:22.719372 sudo[1623]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Nov 12 17:49:22.722282 sudo[1623]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 12 17:49:23.080128 systemd[1]: Starting docker.service - Docker Application Container Engine... Nov 12 17:49:23.080165 (dockerd)[1642]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Nov 12 17:49:23.352465 dockerd[1642]: time="2024-11-12T17:49:23.352112224Z" level=info msg="Starting up" Nov 12 17:49:23.441381 dockerd[1642]: time="2024-11-12T17:49:23.441343892Z" level=info msg="Loading containers: start." Nov 12 17:49:23.555833 kernel: Initializing XFRM netlink socket Nov 12 17:49:23.622391 systemd-networkd[1384]: docker0: Link UP Nov 12 17:49:23.642230 dockerd[1642]: time="2024-11-12T17:49:23.642140606Z" level=info msg="Loading containers: done." Nov 12 17:49:23.657100 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2461182226-merged.mount: Deactivated successfully. Nov 12 17:49:23.660001 dockerd[1642]: time="2024-11-12T17:49:23.659909345Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Nov 12 17:49:23.660449 dockerd[1642]: time="2024-11-12T17:49:23.660120326Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Nov 12 17:49:23.660449 dockerd[1642]: time="2024-11-12T17:49:23.660264549Z" level=info msg="Daemon has completed initialization" Nov 12 17:49:23.690805 dockerd[1642]: time="2024-11-12T17:49:23.690664958Z" level=info msg="API listen on /run/docker.sock" Nov 12 17:49:23.690905 systemd[1]: Started docker.service - Docker Application Container Engine. 
Nov 12 17:49:24.307990 containerd[1440]: time="2024-11-12T17:49:24.307950629Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.10\"" Nov 12 17:49:25.026593 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount82600382.mount: Deactivated successfully. Nov 12 17:49:26.311763 containerd[1440]: time="2024-11-12T17:49:26.311074404Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.29.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 17:49:26.311763 containerd[1440]: time="2024-11-12T17:49:26.311730757Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.29.10: active requests=0, bytes read=32201617" Nov 12 17:49:26.312892 containerd[1440]: time="2024-11-12T17:49:26.312858108Z" level=info msg="ImageCreate event name:\"sha256:001ac07c2bb7d0e08d405a19d935c926c393c971a2801756755b8958a7306ca0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 17:49:26.316161 containerd[1440]: time="2024-11-12T17:49:26.316121503Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:b4362c227fb9a8e1961e17bc5cb55e3fea4414da9936d71663d223d7eda23669\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 17:49:26.317886 containerd[1440]: time="2024-11-12T17:49:26.317851752Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.29.10\" with image id \"sha256:001ac07c2bb7d0e08d405a19d935c926c393c971a2801756755b8958a7306ca0\", repo tag \"registry.k8s.io/kube-apiserver:v1.29.10\", repo digest \"registry.k8s.io/kube-apiserver@sha256:b4362c227fb9a8e1961e17bc5cb55e3fea4414da9936d71663d223d7eda23669\", size \"32198415\" in 2.009849594s" Nov 12 17:49:26.317965 containerd[1440]: time="2024-11-12T17:49:26.317889744Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.10\" returns image reference \"sha256:001ac07c2bb7d0e08d405a19d935c926c393c971a2801756755b8958a7306ca0\"" Nov 12 17:49:26.336911 containerd[1440]: time="2024-11-12T17:49:26.336712301Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.10\"" Nov 12 17:49:27.560253 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Nov 12 17:49:27.569959 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 12 17:49:27.654712 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 12 17:49:27.658131 (kubelet)[1865]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 12 17:49:27.704862 kubelet[1865]: E1112 17:49:27.704779 1865 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 12 17:49:27.707749 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 12 17:49:27.707900 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
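[Editor's note] The kubelet failures above (and the later ones) are all the same condition: kubelet.service starts before /var/lib/kubelet/config.yaml exists, exits with status 1, and systemd schedules another restart. On kubeadm-provisioned hosts this file is normally written by kubeadm init or kubeadm join, after which the restarts stop. As an illustration only (not the file kubeadm eventually writes on this host), a minimal KubeletConfiguration consistent with settings visible later in this log (systemd cgroup driver, static pods under /etc/kubernetes/manifests) would be:

    # Illustrative /var/lib/kubelet/config.yaml; the real file is generated by kubeadm.
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd
    staticPodPath: /etc/kubernetes/manifests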
Nov 12 17:49:28.299434 containerd[1440]: time="2024-11-12T17:49:28.299386272Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.29.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 17:49:28.300029 containerd[1440]: time="2024-11-12T17:49:28.299999022Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.29.10: active requests=0, bytes read=29381046" Nov 12 17:49:28.302664 containerd[1440]: time="2024-11-12T17:49:28.301003916Z" level=info msg="ImageCreate event name:\"sha256:27bef186b28e50ade2a010ef9201877431fb732ef6e370cb79149e8bd65220d7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 17:49:28.305293 containerd[1440]: time="2024-11-12T17:49:28.305250303Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:d74524a4d9d071510c5abb6404bf4daf2609510d8d5f0683e1efd83d69176647\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 17:49:28.306406 containerd[1440]: time="2024-11-12T17:49:28.306280578Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.29.10\" with image id \"sha256:27bef186b28e50ade2a010ef9201877431fb732ef6e370cb79149e8bd65220d7\", repo tag \"registry.k8s.io/kube-controller-manager:v1.29.10\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:d74524a4d9d071510c5abb6404bf4daf2609510d8d5f0683e1efd83d69176647\", size \"30783669\" in 1.969522077s" Nov 12 17:49:28.306406 containerd[1440]: time="2024-11-12T17:49:28.306317721Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.10\" returns image reference \"sha256:27bef186b28e50ade2a010ef9201877431fb732ef6e370cb79149e8bd65220d7\"" Nov 12 17:49:28.324564 containerd[1440]: time="2024-11-12T17:49:28.324535457Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.10\"" Nov 12 17:49:29.348792 containerd[1440]: time="2024-11-12T17:49:29.348725986Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.29.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 17:49:29.349236 containerd[1440]: time="2024-11-12T17:49:29.349188099Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.29.10: active requests=0, bytes read=15770290" Nov 12 17:49:29.350100 containerd[1440]: time="2024-11-12T17:49:29.350037098Z" level=info msg="ImageCreate event name:\"sha256:a8e5012443313f8a99b528b68845e2bcb151785ed5c057613dad7ca5b03c7e60\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 17:49:29.353060 containerd[1440]: time="2024-11-12T17:49:29.353025438Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:41f2fb005da3fa5512bfc7f267a6f08aaea27c9f7c6d9a93c7ee28607c1f2f77\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 17:49:29.354309 containerd[1440]: time="2024-11-12T17:49:29.354270580Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.29.10\" with image id \"sha256:a8e5012443313f8a99b528b68845e2bcb151785ed5c057613dad7ca5b03c7e60\", repo tag \"registry.k8s.io/kube-scheduler:v1.29.10\", repo digest \"registry.k8s.io/kube-scheduler@sha256:41f2fb005da3fa5512bfc7f267a6f08aaea27c9f7c6d9a93c7ee28607c1f2f77\", size \"17172931\" in 1.029699478s" Nov 12 17:49:29.354352 containerd[1440]: time="2024-11-12T17:49:29.354310911Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.10\" returns image reference \"sha256:a8e5012443313f8a99b528b68845e2bcb151785ed5c057613dad7ca5b03c7e60\"" Nov 12 17:49:29.373701 
containerd[1440]: time="2024-11-12T17:49:29.373623668Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.10\"" Nov 12 17:49:30.472440 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount657025214.mount: Deactivated successfully. Nov 12 17:49:30.814702 containerd[1440]: time="2024-11-12T17:49:30.814561070Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.29.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 17:49:30.815331 containerd[1440]: time="2024-11-12T17:49:30.815279663Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.29.10: active requests=0, bytes read=25272231" Nov 12 17:49:30.816305 containerd[1440]: time="2024-11-12T17:49:30.816270980Z" level=info msg="ImageCreate event name:\"sha256:4e66440765478454d48b169d648b000501e24066c0bad7c378bd9e8506bb919f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 17:49:30.818879 containerd[1440]: time="2024-11-12T17:49:30.818836106Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:3c5ceb7942f21793d4cb5880bc0ed7ca7d7f93318fc3f0830816593b86aa19d8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 17:49:30.819489 containerd[1440]: time="2024-11-12T17:49:30.819447475Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.29.10\" with image id \"sha256:4e66440765478454d48b169d648b000501e24066c0bad7c378bd9e8506bb919f\", repo tag \"registry.k8s.io/kube-proxy:v1.29.10\", repo digest \"registry.k8s.io/kube-proxy@sha256:3c5ceb7942f21793d4cb5880bc0ed7ca7d7f93318fc3f0830816593b86aa19d8\", size \"25271248\" in 1.445631783s" Nov 12 17:49:30.819519 containerd[1440]: time="2024-11-12T17:49:30.819486125Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.10\" returns image reference \"sha256:4e66440765478454d48b169d648b000501e24066c0bad7c378bd9e8506bb919f\"" Nov 12 17:49:30.837821 containerd[1440]: time="2024-11-12T17:49:30.837787247Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Nov 12 17:49:31.439237 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount652119641.mount: Deactivated successfully. 
Nov 12 17:49:32.013321 containerd[1440]: time="2024-11-12T17:49:32.013258757Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 17:49:32.014047 containerd[1440]: time="2024-11-12T17:49:32.014017484Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485383" Nov 12 17:49:32.014773 containerd[1440]: time="2024-11-12T17:49:32.014722507Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 17:49:32.018398 containerd[1440]: time="2024-11-12T17:49:32.018358402Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 17:49:32.020619 containerd[1440]: time="2024-11-12T17:49:32.020572122Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.182746317s" Nov 12 17:49:32.020619 containerd[1440]: time="2024-11-12T17:49:32.020613294Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\"" Nov 12 17:49:32.039355 containerd[1440]: time="2024-11-12T17:49:32.039321217Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Nov 12 17:49:32.438245 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1110416388.mount: Deactivated successfully. 
Nov 12 17:49:32.442817 containerd[1440]: time="2024-11-12T17:49:32.442253462Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 17:49:32.443013 containerd[1440]: time="2024-11-12T17:49:32.442957159Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=268823" Nov 12 17:49:32.443849 containerd[1440]: time="2024-11-12T17:49:32.443807950Z" level=info msg="ImageCreate event name:\"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 17:49:32.448684 containerd[1440]: time="2024-11-12T17:49:32.448640721Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 17:49:32.450286 containerd[1440]: time="2024-11-12T17:49:32.450242768Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"268051\" in 410.883231ms" Nov 12 17:49:32.450336 containerd[1440]: time="2024-11-12T17:49:32.450283337Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\"" Nov 12 17:49:32.467678 containerd[1440]: time="2024-11-12T17:49:32.467642471Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\"" Nov 12 17:49:33.063525 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2251990562.mount: Deactivated successfully. Nov 12 17:49:35.084888 containerd[1440]: time="2024-11-12T17:49:35.084835094Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 17:49:35.085418 containerd[1440]: time="2024-11-12T17:49:35.085382427Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=65200788" Nov 12 17:49:35.086218 containerd[1440]: time="2024-11-12T17:49:35.086186037Z" level=info msg="ImageCreate event name:\"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 17:49:35.090226 containerd[1440]: time="2024-11-12T17:49:35.090191572Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 17:49:35.091488 containerd[1440]: time="2024-11-12T17:49:35.091364617Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest \"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"65198393\" in 2.623691589s" Nov 12 17:49:35.091488 containerd[1440]: time="2024-11-12T17:49:35.091397669Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\"" Nov 12 17:49:37.959013 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. 
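[Editor's note] The pull sequence above (the v1.29.10 control-plane images plus coredns v1.11.1, pause and etcd 3.5.10-0) goes through containerd's CRI image service rather than Docker. Assuming crictl is available and pointed at the socket logged earlier (/run/containerd/containerd.sock), the same pulls could be reproduced or inspected by hand, for example:

    # Illustrative commands; crictl does not appear in this log and may need to be installed separately.
    crictl --runtime-endpoint unix:///run/containerd/containerd.sock pull registry.k8s.io/kube-apiserver:v1.29.10
    crictl --runtime-endpoint unix:///run/containerd/containerd.sock images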
Nov 12 17:49:37.973112 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 12 17:49:38.078899 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 12 17:49:38.082310 (kubelet)[2097]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 12 17:49:38.120564 kubelet[2097]: E1112 17:49:38.120472 2097 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 12 17:49:38.123536 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 12 17:49:38.123772 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 12 17:49:39.939288 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 12 17:49:39.949231 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 12 17:49:39.963565 systemd[1]: Reloading requested from client PID 2113 ('systemctl') (unit session-7.scope)... Nov 12 17:49:39.963579 systemd[1]: Reloading... Nov 12 17:49:40.032910 zram_generator::config[2155]: No configuration found. Nov 12 17:49:40.182492 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 12 17:49:40.233748 systemd[1]: Reloading finished in 269 ms. Nov 12 17:49:40.280344 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Nov 12 17:49:40.283538 systemd[1]: kubelet.service: Deactivated successfully. Nov 12 17:49:40.283735 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 12 17:49:40.285077 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 12 17:49:40.420604 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 12 17:49:40.424619 (kubelet)[2199]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 12 17:49:40.464928 kubelet[2199]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 12 17:49:40.464928 kubelet[2199]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Nov 12 17:49:40.464928 kubelet[2199]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
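[Editor's note] The "Referenced but unset environment variable" notices and the deprecated-flag warnings above point at the same mechanism: the kubelet unit expands flags from environment variables, with kubeadm writing its runtime flags to KUBELET_KUBEADM_ARGS (via /var/lib/kubelet/kubeadm-flags.env) and leaving KUBELET_EXTRA_ARGS to the administrator. Flatcar's actual kubelet unit is not shown in this log and may differ, but a kubeadm-style drop-in typically has this shape:

    # Illustrative drop-in (e.g. /etc/systemd/system/kubelet.service.d/10-kubeadm.conf); not this host's unit.
    [Service]
    Environment="KUBELET_CONFIG_ARGS=--config=/var/lib/kubelet/config.yaml"
    EnvironmentFile=-/var/lib/kubelet/kubeadm-flags.env
    EnvironmentFile=-/etc/default/kubelet
    ExecStart=
    ExecStart=/usr/bin/kubelet $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS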
Nov 12 17:49:40.465284 kubelet[2199]: I1112 17:49:40.464971 2199 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 12 17:49:40.801702 kubelet[2199]: I1112 17:49:40.801658 2199 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Nov 12 17:49:40.801702 kubelet[2199]: I1112 17:49:40.801688 2199 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 12 17:49:40.801984 kubelet[2199]: I1112 17:49:40.801950 2199 server.go:919] "Client rotation is on, will bootstrap in background" Nov 12 17:49:40.826977 kubelet[2199]: I1112 17:49:40.826668 2199 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 12 17:49:40.826977 kubelet[2199]: E1112 17:49:40.826915 2199 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.80:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.80:6443: connect: connection refused Nov 12 17:49:40.834577 kubelet[2199]: I1112 17:49:40.834549 2199 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Nov 12 17:49:40.835561 kubelet[2199]: I1112 17:49:40.835527 2199 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 12 17:49:40.835734 kubelet[2199]: I1112 17:49:40.835710 2199 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Nov 12 17:49:40.835734 kubelet[2199]: I1112 17:49:40.835734 2199 topology_manager.go:138] "Creating topology manager with none policy" Nov 12 17:49:40.835842 kubelet[2199]: I1112 17:49:40.835743 2199 container_manager_linux.go:301] "Creating device plugin manager" Nov 12 17:49:40.836869 kubelet[2199]: I1112 17:49:40.836845 2199 state_mem.go:36] "Initialized new in-memory state store" Nov 12 17:49:40.840862 kubelet[2199]: I1112 17:49:40.840844 2199 kubelet.go:396] "Attempting to sync node with API server" Nov 12 17:49:40.840907 kubelet[2199]: 
I1112 17:49:40.840868 2199 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 12 17:49:40.840907 kubelet[2199]: I1112 17:49:40.840889 2199 kubelet.go:312] "Adding apiserver pod source" Nov 12 17:49:40.840907 kubelet[2199]: I1112 17:49:40.840902 2199 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 12 17:49:40.841849 kubelet[2199]: W1112 17:49:40.841519 2199 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.0.0.80:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.80:6443: connect: connection refused Nov 12 17:49:40.841849 kubelet[2199]: E1112 17:49:40.841566 2199 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.80:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.80:6443: connect: connection refused Nov 12 17:49:40.841849 kubelet[2199]: W1112 17:49:40.841799 2199 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.0.0.80:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.80:6443: connect: connection refused Nov 12 17:49:40.841849 kubelet[2199]: E1112 17:49:40.841833 2199 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.80:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.80:6443: connect: connection refused Nov 12 17:49:40.842691 kubelet[2199]: I1112 17:49:40.842673 2199 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Nov 12 17:49:40.843131 kubelet[2199]: I1112 17:49:40.843118 2199 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Nov 12 17:49:40.843702 kubelet[2199]: W1112 17:49:40.843669 2199 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Nov 12 17:49:40.844580 kubelet[2199]: I1112 17:49:40.844562 2199 server.go:1256] "Started kubelet" Nov 12 17:49:40.845062 kubelet[2199]: I1112 17:49:40.844807 2199 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Nov 12 17:49:40.850109 kubelet[2199]: I1112 17:49:40.849292 2199 server.go:461] "Adding debug handlers to kubelet server" Nov 12 17:49:40.850848 kubelet[2199]: I1112 17:49:40.850546 2199 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 12 17:49:40.850848 kubelet[2199]: I1112 17:49:40.850806 2199 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 12 17:49:40.853078 kubelet[2199]: E1112 17:49:40.853030 2199 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.80:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.80:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.180749d9a359a375 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2024-11-12 17:49:40.844536693 +0000 UTC m=+0.416587977,LastTimestamp:2024-11-12 17:49:40.844536693 +0000 UTC m=+0.416587977,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Nov 12 17:49:40.853279 kubelet[2199]: I1112 17:49:40.853248 2199 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 12 17:49:40.854256 kubelet[2199]: I1112 17:49:40.854234 2199 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Nov 12 17:49:40.855987 kubelet[2199]: I1112 17:49:40.854110 2199 volume_manager.go:291] "Starting Kubelet Volume Manager" Nov 12 17:49:40.856448 kubelet[2199]: I1112 17:49:40.856303 2199 reconciler_new.go:29] "Reconciler: start to sync state" Nov 12 17:49:40.856518 kubelet[2199]: W1112 17:49:40.856470 2199 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.0.0.80:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.80:6443: connect: connection refused Nov 12 17:49:40.856518 kubelet[2199]: E1112 17:49:40.856518 2199 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.80:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.80:6443: connect: connection refused Nov 12 17:49:40.856706 kubelet[2199]: E1112 17:49:40.856684 2199 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.80:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.80:6443: connect: connection refused" interval="200ms" Nov 12 17:49:40.857507 kubelet[2199]: I1112 17:49:40.857468 2199 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 12 17:49:40.858349 kubelet[2199]: E1112 17:49:40.858320 2199 kubelet.go:1462] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 12 17:49:40.858457 kubelet[2199]: I1112 17:49:40.858415 2199 factory.go:221] Registration of the containerd container factory successfully Nov 12 17:49:40.858457 kubelet[2199]: I1112 17:49:40.858433 2199 factory.go:221] Registration of the systemd container factory successfully Nov 12 17:49:40.870224 kubelet[2199]: I1112 17:49:40.870133 2199 cpu_manager.go:214] "Starting CPU manager" policy="none" Nov 12 17:49:40.870224 kubelet[2199]: I1112 17:49:40.870191 2199 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Nov 12 17:49:40.870224 kubelet[2199]: I1112 17:49:40.870209 2199 state_mem.go:36] "Initialized new in-memory state store" Nov 12 17:49:40.873734 kubelet[2199]: I1112 17:49:40.873501 2199 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Nov 12 17:49:40.874857 kubelet[2199]: I1112 17:49:40.874541 2199 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Nov 12 17:49:40.874857 kubelet[2199]: I1112 17:49:40.874563 2199 status_manager.go:217] "Starting to sync pod status with apiserver" Nov 12 17:49:40.874857 kubelet[2199]: I1112 17:49:40.874578 2199 kubelet.go:2329] "Starting kubelet main sync loop" Nov 12 17:49:40.874857 kubelet[2199]: E1112 17:49:40.874634 2199 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 12 17:49:40.875042 kubelet[2199]: W1112 17:49:40.875002 2199 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.0.0.80:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.80:6443: connect: connection refused Nov 12 17:49:40.875075 kubelet[2199]: E1112 17:49:40.875047 2199 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.80:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.80:6443: connect: connection refused Nov 12 17:49:40.957852 kubelet[2199]: I1112 17:49:40.957209 2199 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Nov 12 17:49:40.957952 kubelet[2199]: E1112 17:49:40.957941 2199 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.80:6443/api/v1/nodes\": dial tcp 10.0.0.80:6443: connect: connection refused" node="localhost" Nov 12 17:49:40.975152 kubelet[2199]: E1112 17:49:40.975106 2199 kubelet.go:2353] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Nov 12 17:49:41.057663 kubelet[2199]: E1112 17:49:41.057532 2199 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.80:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.80:6443: connect: connection refused" interval="400ms" Nov 12 17:49:41.062900 kubelet[2199]: I1112 17:49:41.062458 2199 policy_none.go:49] "None policy: Start" Nov 12 17:49:41.063335 kubelet[2199]: I1112 17:49:41.063307 2199 memory_manager.go:170] "Starting memorymanager" policy="None" Nov 12 17:49:41.063366 kubelet[2199]: I1112 17:49:41.063353 2199 state_mem.go:35] "Initializing new in-memory state store" Nov 12 17:49:41.070148 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. 
Nov 12 17:49:41.084541 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Nov 12 17:49:41.088699 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Nov 12 17:49:41.100296 kubelet[2199]: I1112 17:49:41.100268 2199 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Nov 12 17:49:41.100561 kubelet[2199]: I1112 17:49:41.100545 2199 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 12 17:49:41.102133 kubelet[2199]: E1112 17:49:41.102101 2199 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Nov 12 17:49:41.160305 kubelet[2199]: I1112 17:49:41.160261 2199 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Nov 12 17:49:41.161285 kubelet[2199]: E1112 17:49:41.160582 2199 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.80:6443/api/v1/nodes\": dial tcp 10.0.0.80:6443: connect: connection refused" node="localhost" Nov 12 17:49:41.175844 kubelet[2199]: I1112 17:49:41.175738 2199 topology_manager.go:215] "Topology Admit Handler" podUID="7a5668c7b03afc9cb8f0d6aa489dc571" podNamespace="kube-system" podName="kube-apiserver-localhost" Nov 12 17:49:41.178120 kubelet[2199]: I1112 17:49:41.177304 2199 topology_manager.go:215] "Topology Admit Handler" podUID="33932df710fd78419c0859d7fa44b8e7" podNamespace="kube-system" podName="kube-controller-manager-localhost" Nov 12 17:49:41.178470 kubelet[2199]: I1112 17:49:41.178444 2199 topology_manager.go:215] "Topology Admit Handler" podUID="c7145bec6839b5d7dcb0c5beff5515b4" podNamespace="kube-system" podName="kube-scheduler-localhost" Nov 12 17:49:41.185146 systemd[1]: Created slice kubepods-burstable-pod7a5668c7b03afc9cb8f0d6aa489dc571.slice - libcontainer container kubepods-burstable-pod7a5668c7b03afc9cb8f0d6aa489dc571.slice. Nov 12 17:49:41.206668 systemd[1]: Created slice kubepods-burstable-pod33932df710fd78419c0859d7fa44b8e7.slice - libcontainer container kubepods-burstable-pod33932df710fd78419c0859d7fa44b8e7.slice. Nov 12 17:49:41.215899 systemd[1]: Created slice kubepods-burstable-podc7145bec6839b5d7dcb0c5beff5515b4.slice - libcontainer container kubepods-burstable-podc7145bec6839b5d7dcb0c5beff5515b4.slice. 
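[Editor's note] The three "Topology Admit Handler" entries and the matching kubepods-burstable-pod*.slice units correspond to the control-plane static pods (kube-apiserver-localhost, kube-controller-manager-localhost, kube-scheduler-localhost) that the kubelet picks up from the static pod path added earlier (/etc/kubernetes/manifests); the continuing "connection refused" errors against https://10.0.0.80:6443 are expected until the kube-apiserver static pod is actually running. The real manifests are generated by kubeadm and are not reproduced in this log; a heavily trimmed sketch of the shape of such a manifest:

    # Illustrative static pod manifest shape only; the kubeadm-generated
    # /etc/kubernetes/manifests/kube-apiserver.yaml carries many more flags, mounts and probes.
    apiVersion: v1
    kind: Pod
    metadata:
      name: kube-apiserver
      namespace: kube-system
    spec:
      hostNetwork: true
      priorityClassName: system-node-critical
      containers:
      - name: kube-apiserver
        image: registry.k8s.io/kube-apiserver:v1.29.10
        command:
        - kube-apiserver
        - --advertise-address=10.0.0.80
        - --secure-port=6443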
Nov 12 17:49:41.259112 kubelet[2199]: I1112 17:49:41.258616 2199 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c7145bec6839b5d7dcb0c5beff5515b4-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"c7145bec6839b5d7dcb0c5beff5515b4\") " pod="kube-system/kube-scheduler-localhost" Nov 12 17:49:41.259112 kubelet[2199]: I1112 17:49:41.258660 2199 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7a5668c7b03afc9cb8f0d6aa489dc571-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"7a5668c7b03afc9cb8f0d6aa489dc571\") " pod="kube-system/kube-apiserver-localhost" Nov 12 17:49:41.259112 kubelet[2199]: I1112 17:49:41.258682 2199 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/33932df710fd78419c0859d7fa44b8e7-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"33932df710fd78419c0859d7fa44b8e7\") " pod="kube-system/kube-controller-manager-localhost" Nov 12 17:49:41.259112 kubelet[2199]: I1112 17:49:41.258701 2199 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/33932df710fd78419c0859d7fa44b8e7-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"33932df710fd78419c0859d7fa44b8e7\") " pod="kube-system/kube-controller-manager-localhost" Nov 12 17:49:41.259112 kubelet[2199]: I1112 17:49:41.258720 2199 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/33932df710fd78419c0859d7fa44b8e7-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"33932df710fd78419c0859d7fa44b8e7\") " pod="kube-system/kube-controller-manager-localhost" Nov 12 17:49:41.259320 kubelet[2199]: I1112 17:49:41.258739 2199 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/33932df710fd78419c0859d7fa44b8e7-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"33932df710fd78419c0859d7fa44b8e7\") " pod="kube-system/kube-controller-manager-localhost" Nov 12 17:49:41.259320 kubelet[2199]: I1112 17:49:41.258756 2199 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7a5668c7b03afc9cb8f0d6aa489dc571-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"7a5668c7b03afc9cb8f0d6aa489dc571\") " pod="kube-system/kube-apiserver-localhost" Nov 12 17:49:41.259320 kubelet[2199]: I1112 17:49:41.258964 2199 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7a5668c7b03afc9cb8f0d6aa489dc571-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"7a5668c7b03afc9cb8f0d6aa489dc571\") " pod="kube-system/kube-apiserver-localhost" Nov 12 17:49:41.259320 kubelet[2199]: I1112 17:49:41.258997 2199 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/33932df710fd78419c0859d7fa44b8e7-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"33932df710fd78419c0859d7fa44b8e7\") " 
pod="kube-system/kube-controller-manager-localhost" Nov 12 17:49:41.458132 kubelet[2199]: E1112 17:49:41.458019 2199 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.80:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.80:6443: connect: connection refused" interval="800ms" Nov 12 17:49:41.502502 kubelet[2199]: E1112 17:49:41.502467 2199 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 17:49:41.503171 containerd[1440]: time="2024-11-12T17:49:41.503137292Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:7a5668c7b03afc9cb8f0d6aa489dc571,Namespace:kube-system,Attempt:0,}" Nov 12 17:49:41.515534 kubelet[2199]: E1112 17:49:41.515346 2199 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 17:49:41.515917 containerd[1440]: time="2024-11-12T17:49:41.515840319Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:33932df710fd78419c0859d7fa44b8e7,Namespace:kube-system,Attempt:0,}" Nov 12 17:49:41.519981 kubelet[2199]: E1112 17:49:41.519883 2199 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 17:49:41.520460 containerd[1440]: time="2024-11-12T17:49:41.520215466Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:c7145bec6839b5d7dcb0c5beff5515b4,Namespace:kube-system,Attempt:0,}" Nov 12 17:49:41.562267 kubelet[2199]: I1112 17:49:41.562242 2199 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Nov 12 17:49:41.562749 kubelet[2199]: E1112 17:49:41.562640 2199 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.80:6443/api/v1/nodes\": dial tcp 10.0.0.80:6443: connect: connection refused" node="localhost" Nov 12 17:49:41.651458 kubelet[2199]: W1112 17:49:41.651324 2199 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.0.0.80:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.80:6443: connect: connection refused Nov 12 17:49:41.651458 kubelet[2199]: E1112 17:49:41.651422 2199 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.80:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.80:6443: connect: connection refused Nov 12 17:49:41.940365 kubelet[2199]: W1112 17:49:41.940297 2199 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.0.0.80:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.80:6443: connect: connection refused Nov 12 17:49:41.940365 kubelet[2199]: E1112 17:49:41.940340 2199 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.80:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.80:6443: connect: connection refused Nov 12 17:49:41.946808 kubelet[2199]: W1112 17:49:41.946700 2199 reflector.go:539] 
vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.0.0.80:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.80:6443: connect: connection refused Nov 12 17:49:41.946808 kubelet[2199]: E1112 17:49:41.946753 2199 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.80:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.80:6443: connect: connection refused Nov 12 17:49:42.033699 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount919455941.mount: Deactivated successfully. Nov 12 17:49:42.039361 containerd[1440]: time="2024-11-12T17:49:42.038554008Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 12 17:49:42.040242 containerd[1440]: time="2024-11-12T17:49:42.040206829Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175" Nov 12 17:49:42.047822 containerd[1440]: time="2024-11-12T17:49:42.046319882Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 12 17:49:42.047822 containerd[1440]: time="2024-11-12T17:49:42.047286466Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Nov 12 17:49:42.048090 containerd[1440]: time="2024-11-12T17:49:42.048006740Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 12 17:49:42.049102 containerd[1440]: time="2024-11-12T17:49:42.049064585Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Nov 12 17:49:42.049642 containerd[1440]: time="2024-11-12T17:49:42.049427585Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 12 17:49:42.052528 containerd[1440]: time="2024-11-12T17:49:42.052471938Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 12 17:49:42.054123 containerd[1440]: time="2024-11-12T17:49:42.054088959Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 550.871287ms" Nov 12 17:49:42.055667 containerd[1440]: time="2024-11-12T17:49:42.055515810Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 535.23242ms" Nov 12 17:49:42.056446 containerd[1440]: 
time="2024-11-12T17:49:42.056374556Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 540.454979ms" Nov 12 17:49:42.185113 kubelet[2199]: W1112 17:49:42.185023 2199 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.0.0.80:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.80:6443: connect: connection refused Nov 12 17:49:42.185113 kubelet[2199]: E1112 17:49:42.185070 2199 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.80:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.80:6443: connect: connection refused Nov 12 17:49:42.216767 containerd[1440]: time="2024-11-12T17:49:42.216013465Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 17:49:42.216767 containerd[1440]: time="2024-11-12T17:49:42.216057954Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 17:49:42.216767 containerd[1440]: time="2024-11-12T17:49:42.216073371Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 17:49:42.216767 containerd[1440]: time="2024-11-12T17:49:42.216162710Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 17:49:42.216767 containerd[1440]: time="2024-11-12T17:49:42.215899740Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 17:49:42.216767 containerd[1440]: time="2024-11-12T17:49:42.215961408Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 17:49:42.216767 containerd[1440]: time="2024-11-12T17:49:42.215976585Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 17:49:42.216767 containerd[1440]: time="2024-11-12T17:49:42.216055191Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 17:49:42.217916 containerd[1440]: time="2024-11-12T17:49:42.217701044Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 17:49:42.217916 containerd[1440]: time="2024-11-12T17:49:42.217755984Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 17:49:42.217916 containerd[1440]: time="2024-11-12T17:49:42.217790703Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 17:49:42.217916 containerd[1440]: time="2024-11-12T17:49:42.217873674Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 17:49:42.236961 systemd[1]: Started cri-containerd-b55ee55a9cb57a77f1d6b6a2acf32b628a739fb7f948996d3377faf7718145c4.scope - libcontainer container b55ee55a9cb57a77f1d6b6a2acf32b628a739fb7f948996d3377faf7718145c4. Nov 12 17:49:42.241909 systemd[1]: Started cri-containerd-28682ee4a96f5062ebd951cfb60fdfc7dff0a1a326e51092e2ef01affb4f4664.scope - libcontainer container 28682ee4a96f5062ebd951cfb60fdfc7dff0a1a326e51092e2ef01affb4f4664. Nov 12 17:49:42.243727 systemd[1]: Started cri-containerd-3f1d6d8c320620dc779c897e2a4124720af906b6f9c1b8d9b8b5485afdca2284.scope - libcontainer container 3f1d6d8c320620dc779c897e2a4124720af906b6f9c1b8d9b8b5485afdca2284. Nov 12 17:49:42.259892 kubelet[2199]: E1112 17:49:42.259838 2199 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.80:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.80:6443: connect: connection refused" interval="1.6s" Nov 12 17:49:42.280834 containerd[1440]: time="2024-11-12T17:49:42.280747524Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:c7145bec6839b5d7dcb0c5beff5515b4,Namespace:kube-system,Attempt:0,} returns sandbox id \"b55ee55a9cb57a77f1d6b6a2acf32b628a739fb7f948996d3377faf7718145c4\"" Nov 12 17:49:42.281966 kubelet[2199]: E1112 17:49:42.281878 2199 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 17:49:42.285047 containerd[1440]: time="2024-11-12T17:49:42.285010980Z" level=info msg="CreateContainer within sandbox \"b55ee55a9cb57a77f1d6b6a2acf32b628a739fb7f948996d3377faf7718145c4\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Nov 12 17:49:42.287131 containerd[1440]: time="2024-11-12T17:49:42.287101883Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:33932df710fd78419c0859d7fa44b8e7,Namespace:kube-system,Attempt:0,} returns sandbox id \"28682ee4a96f5062ebd951cfb60fdfc7dff0a1a326e51092e2ef01affb4f4664\"" Nov 12 17:49:42.288807 kubelet[2199]: E1112 17:49:42.288710 2199 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 17:49:42.291257 containerd[1440]: time="2024-11-12T17:49:42.290516324Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:7a5668c7b03afc9cb8f0d6aa489dc571,Namespace:kube-system,Attempt:0,} returns sandbox id \"3f1d6d8c320620dc779c897e2a4124720af906b6f9c1b8d9b8b5485afdca2284\"" Nov 12 17:49:42.292150 kubelet[2199]: E1112 17:49:42.292129 2199 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 17:49:42.292775 containerd[1440]: time="2024-11-12T17:49:42.292732405Z" level=info msg="CreateContainer within sandbox \"28682ee4a96f5062ebd951cfb60fdfc7dff0a1a326e51092e2ef01affb4f4664\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Nov 12 17:49:42.295652 containerd[1440]: time="2024-11-12T17:49:42.295618303Z" level=info msg="CreateContainer within sandbox \"3f1d6d8c320620dc779c897e2a4124720af906b6f9c1b8d9b8b5485afdca2284\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Nov 12 17:49:42.299414 
containerd[1440]: time="2024-11-12T17:49:42.299364710Z" level=info msg="CreateContainer within sandbox \"b55ee55a9cb57a77f1d6b6a2acf32b628a739fb7f948996d3377faf7718145c4\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"357a23db1ce5bc0e859728d7de562ca775bf8fa099f0341488dff0b1b0a5ce4e\"" Nov 12 17:49:42.300025 containerd[1440]: time="2024-11-12T17:49:42.299993442Z" level=info msg="StartContainer for \"357a23db1ce5bc0e859728d7de562ca775bf8fa099f0341488dff0b1b0a5ce4e\"" Nov 12 17:49:42.309661 containerd[1440]: time="2024-11-12T17:49:42.309620245Z" level=info msg="CreateContainer within sandbox \"28682ee4a96f5062ebd951cfb60fdfc7dff0a1a326e51092e2ef01affb4f4664\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"553699dd340da11d29d9205fd1f936ab90152cce15b91cd97dc877733d2da312\"" Nov 12 17:49:42.310391 containerd[1440]: time="2024-11-12T17:49:42.310360380Z" level=info msg="StartContainer for \"553699dd340da11d29d9205fd1f936ab90152cce15b91cd97dc877733d2da312\"" Nov 12 17:49:42.324259 containerd[1440]: time="2024-11-12T17:49:42.322960258Z" level=info msg="CreateContainer within sandbox \"3f1d6d8c320620dc779c897e2a4124720af906b6f9c1b8d9b8b5485afdca2284\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"049ffbf27722a8c7f9509acdeed4160d67889c8d6003df2d24fe3420eef2e759\"" Nov 12 17:49:42.324259 containerd[1440]: time="2024-11-12T17:49:42.323451039Z" level=info msg="StartContainer for \"049ffbf27722a8c7f9509acdeed4160d67889c8d6003df2d24fe3420eef2e759\"" Nov 12 17:49:42.325975 systemd[1]: Started cri-containerd-357a23db1ce5bc0e859728d7de562ca775bf8fa099f0341488dff0b1b0a5ce4e.scope - libcontainer container 357a23db1ce5bc0e859728d7de562ca775bf8fa099f0341488dff0b1b0a5ce4e. Nov 12 17:49:42.332191 systemd[1]: Started cri-containerd-553699dd340da11d29d9205fd1f936ab90152cce15b91cd97dc877733d2da312.scope - libcontainer container 553699dd340da11d29d9205fd1f936ab90152cce15b91cd97dc877733d2da312. Nov 12 17:49:42.356037 systemd[1]: Started cri-containerd-049ffbf27722a8c7f9509acdeed4160d67889c8d6003df2d24fe3420eef2e759.scope - libcontainer container 049ffbf27722a8c7f9509acdeed4160d67889c8d6003df2d24fe3420eef2e759. 
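Editor's note (not part of the log): the repeated "connection refused" errors against https://10.0.0.80:6443 in the entries above are expected at this point, because the kube-apiserver static pod's container has only just been created and nothing is listening on that port yet. A minimal probe such as the following sketch can be used to watch for the moment the endpoint starts accepting TCP connections; the address is taken from the log, everything else is illustrative.

    // apiprobe.go - illustrative sketch: poll a host:port until it accepts TCP
    // connections, as a way to observe when the kube-apiserver begins listening.
    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        const endpoint = "10.0.0.80:6443" // address seen in the kubelet errors above
        for {
            conn, err := net.DialTimeout("tcp", endpoint, 2*time.Second)
            if err != nil {
                // Typically "connection refused" until the apiserver container is up.
                fmt.Printf("%s not ready yet: %v\n", endpoint, err)
                time.Sleep(time.Second)
                continue
            }
            conn.Close()
            fmt.Printf("%s is accepting connections\n", endpoint)
            return
        }
    }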
Nov 12 17:49:42.365457 kubelet[2199]: I1112 17:49:42.364745 2199 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Nov 12 17:49:42.366239 kubelet[2199]: E1112 17:49:42.366215 2199 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.80:6443/api/v1/nodes\": dial tcp 10.0.0.80:6443: connect: connection refused" node="localhost" Nov 12 17:49:42.375585 containerd[1440]: time="2024-11-12T17:49:42.375546057Z" level=info msg="StartContainer for \"553699dd340da11d29d9205fd1f936ab90152cce15b91cd97dc877733d2da312\" returns successfully" Nov 12 17:49:42.376003 containerd[1440]: time="2024-11-12T17:49:42.375666990Z" level=info msg="StartContainer for \"357a23db1ce5bc0e859728d7de562ca775bf8fa099f0341488dff0b1b0a5ce4e\" returns successfully" Nov 12 17:49:42.428419 containerd[1440]: time="2024-11-12T17:49:42.428310292Z" level=info msg="StartContainer for \"049ffbf27722a8c7f9509acdeed4160d67889c8d6003df2d24fe3420eef2e759\" returns successfully" Nov 12 17:49:42.886317 kubelet[2199]: E1112 17:49:42.886237 2199 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 17:49:42.886317 kubelet[2199]: E1112 17:49:42.886317 2199 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 17:49:42.891060 kubelet[2199]: E1112 17:49:42.890982 2199 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 17:49:43.894830 kubelet[2199]: E1112 17:49:43.893068 2199 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 17:49:43.894830 kubelet[2199]: E1112 17:49:43.893594 2199 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 17:49:43.968447 kubelet[2199]: I1112 17:49:43.967911 2199 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Nov 12 17:49:44.104045 kubelet[2199]: E1112 17:49:44.103995 2199 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Nov 12 17:49:44.179397 kubelet[2199]: I1112 17:49:44.179258 2199 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Nov 12 17:49:44.843600 kubelet[2199]: I1112 17:49:44.843562 2199 apiserver.go:52] "Watching apiserver" Nov 12 17:49:44.855637 kubelet[2199]: I1112 17:49:44.855603 2199 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Nov 12 17:49:46.556907 systemd[1]: Reloading requested from client PID 2480 ('systemctl') (unit session-7.scope)... Nov 12 17:49:46.556921 systemd[1]: Reloading... Nov 12 17:49:46.621870 zram_generator::config[2522]: No configuration found. Nov 12 17:49:46.698046 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 12 17:49:46.760368 systemd[1]: Reloading finished in 203 ms. 
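Editor's note (not part of the log): the recurring dns.go:153 "Nameserver limits exceeded" warnings mean the node's resolv.conf lists more nameservers than the kubelet will propagate into pod DNS configuration (the limit is three, matching the glibc resolver), so only 1.1.1.1, 1.0.0.1 and 8.8.8.8 are applied. The node's actual resolv.conf is not shown in the log; the sketch below only illustrates how such a check could look.

    // resolvcheck.go - illustrative sketch: count nameserver entries in a
    // resolv.conf-style file and flag when more than three are configured,
    // which is what triggers the kubelet's "Nameserver limits exceeded" warning.
    package main

    import (
        "bufio"
        "fmt"
        "os"
        "strings"
    )

    func main() {
        f, err := os.Open("/etc/resolv.conf")
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        defer f.Close()

        var servers []string
        sc := bufio.NewScanner(f)
        for sc.Scan() {
            fields := strings.Fields(sc.Text())
            if len(fields) >= 2 && fields[0] == "nameserver" {
                servers = append(servers, fields[1])
            }
        }
        const limit = 3 // kubelet (and the glibc resolver) use at most three
        if len(servers) > limit {
            fmt.Printf("%d nameservers configured, only the first %d are applied: %v\n",
                len(servers), limit, servers[:limit])
        } else {
            fmt.Printf("nameservers: %v\n", servers)
        }
    }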
Nov 12 17:49:46.793070 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Nov 12 17:49:46.810591 systemd[1]: kubelet.service: Deactivated successfully. Nov 12 17:49:46.810834 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 12 17:49:46.818983 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 12 17:49:46.909072 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 12 17:49:46.913057 (kubelet)[2561]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 12 17:49:46.961217 kubelet[2561]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 12 17:49:46.961217 kubelet[2561]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Nov 12 17:49:46.961217 kubelet[2561]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 12 17:49:46.961542 kubelet[2561]: I1112 17:49:46.961266 2561 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 12 17:49:46.965394 kubelet[2561]: I1112 17:49:46.965069 2561 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Nov 12 17:49:46.965394 kubelet[2561]: I1112 17:49:46.965095 2561 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 12 17:49:46.965886 kubelet[2561]: I1112 17:49:46.965863 2561 server.go:919] "Client rotation is on, will bootstrap in background" Nov 12 17:49:46.967819 kubelet[2561]: I1112 17:49:46.967799 2561 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Nov 12 17:49:46.969843 kubelet[2561]: I1112 17:49:46.969784 2561 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 12 17:49:46.978035 kubelet[2561]: I1112 17:49:46.978016 2561 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Nov 12 17:49:46.978502 kubelet[2561]: I1112 17:49:46.978193 2561 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 12 17:49:46.978502 kubelet[2561]: I1112 17:49:46.978346 2561 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Nov 12 17:49:46.978502 kubelet[2561]: I1112 17:49:46.978364 2561 topology_manager.go:138] "Creating topology manager with none policy" Nov 12 17:49:46.978502 kubelet[2561]: I1112 17:49:46.978372 2561 container_manager_linux.go:301] "Creating device plugin manager" Nov 12 17:49:46.978502 kubelet[2561]: I1112 17:49:46.978402 2561 state_mem.go:36] "Initialized new in-memory state store" Nov 12 17:49:46.978502 kubelet[2561]: I1112 17:49:46.978497 2561 kubelet.go:396] "Attempting to sync node with API server" Nov 12 17:49:46.978707 kubelet[2561]: I1112 17:49:46.978510 2561 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 12 17:49:46.978707 kubelet[2561]: I1112 17:49:46.978530 2561 kubelet.go:312] "Adding apiserver pod source" Nov 12 17:49:46.978707 kubelet[2561]: I1112 17:49:46.978543 2561 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 12 17:49:46.980962 kubelet[2561]: I1112 17:49:46.979484 2561 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Nov 12 17:49:46.980962 kubelet[2561]: I1112 17:49:46.979650 2561 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Nov 12 17:49:46.980962 kubelet[2561]: I1112 17:49:46.979971 2561 server.go:1256] "Started kubelet" Nov 12 17:49:46.981365 kubelet[2561]: I1112 17:49:46.981337 2561 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 12 17:49:46.982732 kubelet[2561]: I1112 17:49:46.982691 2561 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Nov 12 17:49:46.987787 kubelet[2561]: E1112 17:49:46.985406 2561 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 12 17:49:46.987787 kubelet[2561]: I1112 17:49:46.985453 2561 volume_manager.go:291] 
"Starting Kubelet Volume Manager" Nov 12 17:49:46.987787 kubelet[2561]: I1112 17:49:46.985550 2561 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Nov 12 17:49:46.987787 kubelet[2561]: I1112 17:49:46.985677 2561 reconciler_new.go:29] "Reconciler: start to sync state" Nov 12 17:49:46.987787 kubelet[2561]: I1112 17:49:46.986569 2561 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 12 17:49:46.987787 kubelet[2561]: I1112 17:49:46.986575 2561 server.go:461] "Adding debug handlers to kubelet server" Nov 12 17:49:46.987787 kubelet[2561]: I1112 17:49:46.986774 2561 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 12 17:49:46.987835 sudo[2577]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Nov 12 17:49:46.988096 sudo[2577]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Nov 12 17:49:46.990516 kubelet[2561]: I1112 17:49:46.990486 2561 factory.go:221] Registration of the systemd container factory successfully Nov 12 17:49:46.990598 kubelet[2561]: I1112 17:49:46.990575 2561 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 12 17:49:46.991693 kubelet[2561]: I1112 17:49:46.991670 2561 factory.go:221] Registration of the containerd container factory successfully Nov 12 17:49:47.014556 kubelet[2561]: I1112 17:49:47.014358 2561 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Nov 12 17:49:47.015865 kubelet[2561]: I1112 17:49:47.015845 2561 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Nov 12 17:49:47.015970 kubelet[2561]: I1112 17:49:47.015957 2561 status_manager.go:217] "Starting to sync pod status with apiserver" Nov 12 17:49:47.015999 kubelet[2561]: I1112 17:49:47.015978 2561 kubelet.go:2329] "Starting kubelet main sync loop" Nov 12 17:49:47.016034 kubelet[2561]: E1112 17:49:47.016024 2561 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 12 17:49:47.035566 kubelet[2561]: I1112 17:49:47.035542 2561 cpu_manager.go:214] "Starting CPU manager" policy="none" Nov 12 17:49:47.035566 kubelet[2561]: I1112 17:49:47.035563 2561 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Nov 12 17:49:47.035660 kubelet[2561]: I1112 17:49:47.035579 2561 state_mem.go:36] "Initialized new in-memory state store" Nov 12 17:49:47.035727 kubelet[2561]: I1112 17:49:47.035709 2561 state_mem.go:88] "Updated default CPUSet" cpuSet="" Nov 12 17:49:47.035756 kubelet[2561]: I1112 17:49:47.035732 2561 state_mem.go:96] "Updated CPUSet assignments" assignments={} Nov 12 17:49:47.035756 kubelet[2561]: I1112 17:49:47.035739 2561 policy_none.go:49] "None policy: Start" Nov 12 17:49:47.036426 kubelet[2561]: I1112 17:49:47.036370 2561 memory_manager.go:170] "Starting memorymanager" policy="None" Nov 12 17:49:47.036426 kubelet[2561]: I1112 17:49:47.036401 2561 state_mem.go:35] "Initializing new in-memory state store" Nov 12 17:49:47.036562 kubelet[2561]: I1112 17:49:47.036544 2561 state_mem.go:75] "Updated machine memory state" Nov 12 17:49:47.040175 kubelet[2561]: I1112 17:49:47.040156 2561 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Nov 12 17:49:47.040395 kubelet[2561]: I1112 17:49:47.040372 2561 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 12 17:49:47.089699 kubelet[2561]: I1112 17:49:47.089617 2561 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Nov 12 17:49:47.096465 kubelet[2561]: I1112 17:49:47.096431 2561 kubelet_node_status.go:112] "Node was previously registered" node="localhost" Nov 12 17:49:47.096533 kubelet[2561]: I1112 17:49:47.096508 2561 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Nov 12 17:49:47.117018 kubelet[2561]: I1112 17:49:47.116849 2561 topology_manager.go:215] "Topology Admit Handler" podUID="7a5668c7b03afc9cb8f0d6aa489dc571" podNamespace="kube-system" podName="kube-apiserver-localhost" Nov 12 17:49:47.117018 kubelet[2561]: I1112 17:49:47.116937 2561 topology_manager.go:215] "Topology Admit Handler" podUID="33932df710fd78419c0859d7fa44b8e7" podNamespace="kube-system" podName="kube-controller-manager-localhost" Nov 12 17:49:47.117018 kubelet[2561]: I1112 17:49:47.116987 2561 topology_manager.go:215] "Topology Admit Handler" podUID="c7145bec6839b5d7dcb0c5beff5515b4" podNamespace="kube-system" podName="kube-scheduler-localhost" Nov 12 17:49:47.187233 kubelet[2561]: I1112 17:49:47.187197 2561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/33932df710fd78419c0859d7fa44b8e7-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"33932df710fd78419c0859d7fa44b8e7\") " pod="kube-system/kube-controller-manager-localhost" Nov 12 17:49:47.187339 kubelet[2561]: I1112 17:49:47.187245 2561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" 
(UniqueName: \"kubernetes.io/host-path/33932df710fd78419c0859d7fa44b8e7-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"33932df710fd78419c0859d7fa44b8e7\") " pod="kube-system/kube-controller-manager-localhost" Nov 12 17:49:47.187339 kubelet[2561]: I1112 17:49:47.187267 2561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/33932df710fd78419c0859d7fa44b8e7-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"33932df710fd78419c0859d7fa44b8e7\") " pod="kube-system/kube-controller-manager-localhost" Nov 12 17:49:47.187339 kubelet[2561]: I1112 17:49:47.187286 2561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c7145bec6839b5d7dcb0c5beff5515b4-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"c7145bec6839b5d7dcb0c5beff5515b4\") " pod="kube-system/kube-scheduler-localhost" Nov 12 17:49:47.187339 kubelet[2561]: I1112 17:49:47.187307 2561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/33932df710fd78419c0859d7fa44b8e7-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"33932df710fd78419c0859d7fa44b8e7\") " pod="kube-system/kube-controller-manager-localhost" Nov 12 17:49:47.187339 kubelet[2561]: I1112 17:49:47.187325 2561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7a5668c7b03afc9cb8f0d6aa489dc571-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"7a5668c7b03afc9cb8f0d6aa489dc571\") " pod="kube-system/kube-apiserver-localhost" Nov 12 17:49:47.187488 kubelet[2561]: I1112 17:49:47.187341 2561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7a5668c7b03afc9cb8f0d6aa489dc571-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"7a5668c7b03afc9cb8f0d6aa489dc571\") " pod="kube-system/kube-apiserver-localhost" Nov 12 17:49:47.187488 kubelet[2561]: I1112 17:49:47.187368 2561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7a5668c7b03afc9cb8f0d6aa489dc571-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"7a5668c7b03afc9cb8f0d6aa489dc571\") " pod="kube-system/kube-apiserver-localhost" Nov 12 17:49:47.187488 kubelet[2561]: I1112 17:49:47.187389 2561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/33932df710fd78419c0859d7fa44b8e7-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"33932df710fd78419c0859d7fa44b8e7\") " pod="kube-system/kube-controller-manager-localhost" Nov 12 17:49:47.420272 sudo[2577]: pam_unix(sudo:session): session closed for user root Nov 12 17:49:47.426532 kubelet[2561]: E1112 17:49:47.426488 2561 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 17:49:47.427163 kubelet[2561]: E1112 17:49:47.427118 2561 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 
1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 17:49:47.427318 kubelet[2561]: E1112 17:49:47.427204 2561 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 17:49:47.979447 kubelet[2561]: I1112 17:49:47.979201 2561 apiserver.go:52] "Watching apiserver" Nov 12 17:49:47.986662 kubelet[2561]: I1112 17:49:47.986619 2561 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Nov 12 17:49:48.029566 kubelet[2561]: E1112 17:49:48.027694 2561 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 17:49:48.029566 kubelet[2561]: E1112 17:49:48.028040 2561 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 17:49:48.035092 kubelet[2561]: E1112 17:49:48.035068 2561 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Nov 12 17:49:48.035613 kubelet[2561]: E1112 17:49:48.035595 2561 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 17:49:48.054891 kubelet[2561]: I1112 17:49:48.054829 2561 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.054791855 podStartE2EDuration="1.054791855s" podCreationTimestamp="2024-11-12 17:49:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-11-12 17:49:48.047505349 +0000 UTC m=+1.131131545" watchObservedRunningTime="2024-11-12 17:49:48.054791855 +0000 UTC m=+1.138418051" Nov 12 17:49:48.064512 kubelet[2561]: I1112 17:49:48.063853 2561 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.061967274 podStartE2EDuration="1.061967274s" podCreationTimestamp="2024-11-12 17:49:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-11-12 17:49:48.061855908 +0000 UTC m=+1.145482104" watchObservedRunningTime="2024-11-12 17:49:48.061967274 +0000 UTC m=+1.145593470" Nov 12 17:49:48.064512 kubelet[2561]: I1112 17:49:48.063966 2561 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.063941169 podStartE2EDuration="1.063941169s" podCreationTimestamp="2024-11-12 17:49:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-11-12 17:49:48.055239803 +0000 UTC m=+1.138865999" watchObservedRunningTime="2024-11-12 17:49:48.063941169 +0000 UTC m=+1.147567325" Nov 12 17:49:49.031319 kubelet[2561]: E1112 17:49:49.031232 2561 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 17:49:49.031319 kubelet[2561]: E1112 17:49:49.031250 2561 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, 
the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 17:49:49.033330 kubelet[2561]: E1112 17:49:49.033299 2561 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 17:49:49.318339 sudo[1623]: pam_unix(sudo:session): session closed for user root Nov 12 17:49:49.321905 sshd[1620]: pam_unix(sshd:session): session closed for user core Nov 12 17:49:49.325363 systemd[1]: sshd@6-10.0.0.80:22-10.0.0.1:40620.service: Deactivated successfully. Nov 12 17:49:49.328288 systemd[1]: session-7.scope: Deactivated successfully. Nov 12 17:49:49.328553 systemd[1]: session-7.scope: Consumed 7.616s CPU time, 188.4M memory peak, 0B memory swap peak. Nov 12 17:49:49.329132 systemd-logind[1426]: Session 7 logged out. Waiting for processes to exit. Nov 12 17:49:49.329958 systemd-logind[1426]: Removed session 7. Nov 12 17:49:50.157198 kubelet[2561]: E1112 17:49:50.157167 2561 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 17:49:58.125967 kubelet[2561]: E1112 17:49:58.125913 2561 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 17:49:58.921990 kubelet[2561]: E1112 17:49:58.921646 2561 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 17:50:00.119825 update_engine[1430]: I20241112 17:50:00.119729 1430 update_attempter.cc:509] Updating boot flags... Nov 12 17:50:00.152827 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (2646) Nov 12 17:50:00.167233 kubelet[2561]: E1112 17:50:00.167204 2561 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 17:50:00.186819 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (2647) Nov 12 17:50:00.908710 kubelet[2561]: I1112 17:50:00.908673 2561 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Nov 12 17:50:00.915996 containerd[1440]: time="2024-11-12T17:50:00.915953027Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Nov 12 17:50:00.916285 kubelet[2561]: I1112 17:50:00.916131 2561 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Nov 12 17:50:01.825714 kubelet[2561]: I1112 17:50:01.825659 2561 topology_manager.go:215] "Topology Admit Handler" podUID="e0af9728-f2ea-4111-9c1d-0f94bccf5051" podNamespace="kube-system" podName="kube-proxy-wg9p2" Nov 12 17:50:01.831737 kubelet[2561]: I1112 17:50:01.830897 2561 topology_manager.go:215] "Topology Admit Handler" podUID="0e29fe01-8217-43d4-851e-89ed47be55b4" podNamespace="kube-system" podName="cilium-v4czp" Nov 12 17:50:01.838104 systemd[1]: Created slice kubepods-besteffort-pode0af9728_f2ea_4111_9c1d_0f94bccf5051.slice - libcontainer container kubepods-besteffort-pode0af9728_f2ea_4111_9c1d_0f94bccf5051.slice. 
Nov 12 17:50:01.849537 systemd[1]: Created slice kubepods-burstable-pod0e29fe01_8217_43d4_851e_89ed47be55b4.slice - libcontainer container kubepods-burstable-pod0e29fe01_8217_43d4_851e_89ed47be55b4.slice. Nov 12 17:50:01.885413 kubelet[2561]: I1112 17:50:01.885345 2561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/0e29fe01-8217-43d4-851e-89ed47be55b4-cni-path\") pod \"cilium-v4czp\" (UID: \"0e29fe01-8217-43d4-851e-89ed47be55b4\") " pod="kube-system/cilium-v4czp" Nov 12 17:50:01.886218 kubelet[2561]: I1112 17:50:01.886160 2561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0e29fe01-8217-43d4-851e-89ed47be55b4-lib-modules\") pod \"cilium-v4czp\" (UID: \"0e29fe01-8217-43d4-851e-89ed47be55b4\") " pod="kube-system/cilium-v4czp" Nov 12 17:50:01.886287 kubelet[2561]: I1112 17:50:01.886258 2561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/0e29fe01-8217-43d4-851e-89ed47be55b4-clustermesh-secrets\") pod \"cilium-v4czp\" (UID: \"0e29fe01-8217-43d4-851e-89ed47be55b4\") " pod="kube-system/cilium-v4czp" Nov 12 17:50:01.886326 kubelet[2561]: I1112 17:50:01.886307 2561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/0e29fe01-8217-43d4-851e-89ed47be55b4-host-proc-sys-net\") pod \"cilium-v4czp\" (UID: \"0e29fe01-8217-43d4-851e-89ed47be55b4\") " pod="kube-system/cilium-v4czp" Nov 12 17:50:01.886357 kubelet[2561]: I1112 17:50:01.886340 2561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/e0af9728-f2ea-4111-9c1d-0f94bccf5051-kube-proxy\") pod \"kube-proxy-wg9p2\" (UID: \"e0af9728-f2ea-4111-9c1d-0f94bccf5051\") " pod="kube-system/kube-proxy-wg9p2" Nov 12 17:50:01.886382 kubelet[2561]: I1112 17:50:01.886373 2561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0e29fe01-8217-43d4-851e-89ed47be55b4-xtables-lock\") pod \"cilium-v4czp\" (UID: \"0e29fe01-8217-43d4-851e-89ed47be55b4\") " pod="kube-system/cilium-v4czp" Nov 12 17:50:01.886402 kubelet[2561]: I1112 17:50:01.886396 2561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/0e29fe01-8217-43d4-851e-89ed47be55b4-cilium-cgroup\") pod \"cilium-v4czp\" (UID: \"0e29fe01-8217-43d4-851e-89ed47be55b4\") " pod="kube-system/cilium-v4czp" Nov 12 17:50:01.886424 kubelet[2561]: I1112 17:50:01.886415 2561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0e29fe01-8217-43d4-851e-89ed47be55b4-etc-cni-netd\") pod \"cilium-v4czp\" (UID: \"0e29fe01-8217-43d4-851e-89ed47be55b4\") " pod="kube-system/cilium-v4czp" Nov 12 17:50:01.886445 kubelet[2561]: I1112 17:50:01.886439 2561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0e29fe01-8217-43d4-851e-89ed47be55b4-cilium-config-path\") pod \"cilium-v4czp\" (UID: \"0e29fe01-8217-43d4-851e-89ed47be55b4\") " 
pod="kube-system/cilium-v4czp" Nov 12 17:50:01.886535 kubelet[2561]: I1112 17:50:01.886467 2561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gxjlk\" (UniqueName: \"kubernetes.io/projected/0e29fe01-8217-43d4-851e-89ed47be55b4-kube-api-access-gxjlk\") pod \"cilium-v4czp\" (UID: \"0e29fe01-8217-43d4-851e-89ed47be55b4\") " pod="kube-system/cilium-v4czp" Nov 12 17:50:01.886535 kubelet[2561]: I1112 17:50:01.886521 2561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/0e29fe01-8217-43d4-851e-89ed47be55b4-host-proc-sys-kernel\") pod \"cilium-v4czp\" (UID: \"0e29fe01-8217-43d4-851e-89ed47be55b4\") " pod="kube-system/cilium-v4czp" Nov 12 17:50:01.886613 kubelet[2561]: I1112 17:50:01.886546 2561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e0af9728-f2ea-4111-9c1d-0f94bccf5051-xtables-lock\") pod \"kube-proxy-wg9p2\" (UID: \"e0af9728-f2ea-4111-9c1d-0f94bccf5051\") " pod="kube-system/kube-proxy-wg9p2" Nov 12 17:50:01.886613 kubelet[2561]: I1112 17:50:01.886602 2561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e0af9728-f2ea-4111-9c1d-0f94bccf5051-lib-modules\") pod \"kube-proxy-wg9p2\" (UID: \"e0af9728-f2ea-4111-9c1d-0f94bccf5051\") " pod="kube-system/kube-proxy-wg9p2" Nov 12 17:50:01.886710 kubelet[2561]: I1112 17:50:01.886622 2561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/0e29fe01-8217-43d4-851e-89ed47be55b4-cilium-run\") pod \"cilium-v4czp\" (UID: \"0e29fe01-8217-43d4-851e-89ed47be55b4\") " pod="kube-system/cilium-v4czp" Nov 12 17:50:01.886710 kubelet[2561]: I1112 17:50:01.886643 2561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/0e29fe01-8217-43d4-851e-89ed47be55b4-bpf-maps\") pod \"cilium-v4czp\" (UID: \"0e29fe01-8217-43d4-851e-89ed47be55b4\") " pod="kube-system/cilium-v4czp" Nov 12 17:50:01.886710 kubelet[2561]: I1112 17:50:01.886691 2561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/0e29fe01-8217-43d4-851e-89ed47be55b4-hostproc\") pod \"cilium-v4czp\" (UID: \"0e29fe01-8217-43d4-851e-89ed47be55b4\") " pod="kube-system/cilium-v4czp" Nov 12 17:50:01.886815 kubelet[2561]: I1112 17:50:01.886718 2561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6g7kb\" (UniqueName: \"kubernetes.io/projected/e0af9728-f2ea-4111-9c1d-0f94bccf5051-kube-api-access-6g7kb\") pod \"kube-proxy-wg9p2\" (UID: \"e0af9728-f2ea-4111-9c1d-0f94bccf5051\") " pod="kube-system/kube-proxy-wg9p2" Nov 12 17:50:01.886815 kubelet[2561]: I1112 17:50:01.886804 2561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/0e29fe01-8217-43d4-851e-89ed47be55b4-hubble-tls\") pod \"cilium-v4czp\" (UID: \"0e29fe01-8217-43d4-851e-89ed47be55b4\") " pod="kube-system/cilium-v4czp" Nov 12 17:50:01.932155 kubelet[2561]: I1112 17:50:01.932123 2561 topology_manager.go:215] "Topology Admit Handler" 
podUID="fba3be93-ad3e-488a-aa06-a948a70906e4" podNamespace="kube-system" podName="cilium-operator-5cc964979-4zm26" Nov 12 17:50:01.941572 systemd[1]: Created slice kubepods-besteffort-podfba3be93_ad3e_488a_aa06_a948a70906e4.slice - libcontainer container kubepods-besteffort-podfba3be93_ad3e_488a_aa06_a948a70906e4.slice. Nov 12 17:50:01.987659 kubelet[2561]: I1112 17:50:01.987611 2561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/fba3be93-ad3e-488a-aa06-a948a70906e4-cilium-config-path\") pod \"cilium-operator-5cc964979-4zm26\" (UID: \"fba3be93-ad3e-488a-aa06-a948a70906e4\") " pod="kube-system/cilium-operator-5cc964979-4zm26" Nov 12 17:50:01.987860 kubelet[2561]: I1112 17:50:01.987833 2561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-74wcx\" (UniqueName: \"kubernetes.io/projected/fba3be93-ad3e-488a-aa06-a948a70906e4-kube-api-access-74wcx\") pod \"cilium-operator-5cc964979-4zm26\" (UID: \"fba3be93-ad3e-488a-aa06-a948a70906e4\") " pod="kube-system/cilium-operator-5cc964979-4zm26" Nov 12 17:50:02.148940 kubelet[2561]: E1112 17:50:02.148847 2561 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 17:50:02.152357 kubelet[2561]: E1112 17:50:02.152333 2561 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 17:50:02.153251 containerd[1440]: time="2024-11-12T17:50:02.152824718Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-v4czp,Uid:0e29fe01-8217-43d4-851e-89ed47be55b4,Namespace:kube-system,Attempt:0,}" Nov 12 17:50:02.154303 containerd[1440]: time="2024-11-12T17:50:02.154231153Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-wg9p2,Uid:e0af9728-f2ea-4111-9c1d-0f94bccf5051,Namespace:kube-system,Attempt:0,}" Nov 12 17:50:02.176566 containerd[1440]: time="2024-11-12T17:50:02.176481349Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 17:50:02.176566 containerd[1440]: time="2024-11-12T17:50:02.176522004Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 17:50:02.176566 containerd[1440]: time="2024-11-12T17:50:02.176533408Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 17:50:02.176566 containerd[1440]: time="2024-11-12T17:50:02.176465063Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 17:50:02.176832 containerd[1440]: time="2024-11-12T17:50:02.176613398Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 17:50:02.176832 containerd[1440]: time="2024-11-12T17:50:02.176550375Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 17:50:02.176832 containerd[1440]: time="2024-11-12T17:50:02.176608076Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 17:50:02.177442 containerd[1440]: time="2024-11-12T17:50:02.177368114Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 17:50:02.194936 systemd[1]: Started cri-containerd-3294a9d2e19b60e711cb29dadb74dd7bf69015b264ebe7975901f16739e611b9.scope - libcontainer container 3294a9d2e19b60e711cb29dadb74dd7bf69015b264ebe7975901f16739e611b9. Nov 12 17:50:02.197307 systemd[1]: Started cri-containerd-db94068a75ba14bc4ebd42b7518a5750715ce56eb28318afaaed773a71e34de6.scope - libcontainer container db94068a75ba14bc4ebd42b7518a5750715ce56eb28318afaaed773a71e34de6. Nov 12 17:50:02.214168 containerd[1440]: time="2024-11-12T17:50:02.214129790Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-v4czp,Uid:0e29fe01-8217-43d4-851e-89ed47be55b4,Namespace:kube-system,Attempt:0,} returns sandbox id \"3294a9d2e19b60e711cb29dadb74dd7bf69015b264ebe7975901f16739e611b9\"" Nov 12 17:50:02.215346 kubelet[2561]: E1112 17:50:02.215097 2561 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 17:50:02.220641 containerd[1440]: time="2024-11-12T17:50:02.220608164Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-wg9p2,Uid:e0af9728-f2ea-4111-9c1d-0f94bccf5051,Namespace:kube-system,Attempt:0,} returns sandbox id \"db94068a75ba14bc4ebd42b7518a5750715ce56eb28318afaaed773a71e34de6\"" Nov 12 17:50:02.222194 kubelet[2561]: E1112 17:50:02.221995 2561 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 17:50:02.223610 containerd[1440]: time="2024-11-12T17:50:02.223361814Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Nov 12 17:50:02.228041 containerd[1440]: time="2024-11-12T17:50:02.227986189Z" level=info msg="CreateContainer within sandbox \"db94068a75ba14bc4ebd42b7518a5750715ce56eb28318afaaed773a71e34de6\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Nov 12 17:50:02.242020 containerd[1440]: time="2024-11-12T17:50:02.241981679Z" level=info msg="CreateContainer within sandbox \"db94068a75ba14bc4ebd42b7518a5750715ce56eb28318afaaed773a71e34de6\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"01c77f4b84d3a5f8917e649c232481e31ea85250c8900228ac28aee40488c57e\"" Nov 12 17:50:02.242974 containerd[1440]: time="2024-11-12T17:50:02.242442408Z" level=info msg="StartContainer for \"01c77f4b84d3a5f8917e649c232481e31ea85250c8900228ac28aee40488c57e\"" Nov 12 17:50:02.245785 kubelet[2561]: E1112 17:50:02.245365 2561 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 17:50:02.245874 containerd[1440]: time="2024-11-12T17:50:02.245706764Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-4zm26,Uid:fba3be93-ad3e-488a-aa06-a948a70906e4,Namespace:kube-system,Attempt:0,}" Nov 12 17:50:02.264892 containerd[1440]: time="2024-11-12T17:50:02.264817009Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 17:50:02.264892 containerd[1440]: time="2024-11-12T17:50:02.264863787Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 17:50:02.264892 containerd[1440]: time="2024-11-12T17:50:02.264874791Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 17:50:02.265077 containerd[1440]: time="2024-11-12T17:50:02.264952859Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 17:50:02.270104 systemd[1]: Started cri-containerd-01c77f4b84d3a5f8917e649c232481e31ea85250c8900228ac28aee40488c57e.scope - libcontainer container 01c77f4b84d3a5f8917e649c232481e31ea85250c8900228ac28aee40488c57e. Nov 12 17:50:02.287999 systemd[1]: Started cri-containerd-be3c0324aef3c91db4e2f4bc2fe94487aa1eeea9eb13f8bc6eaf5070a2d43286.scope - libcontainer container be3c0324aef3c91db4e2f4bc2fe94487aa1eeea9eb13f8bc6eaf5070a2d43286. Nov 12 17:50:02.307279 containerd[1440]: time="2024-11-12T17:50:02.307242761Z" level=info msg="StartContainer for \"01c77f4b84d3a5f8917e649c232481e31ea85250c8900228ac28aee40488c57e\" returns successfully" Nov 12 17:50:02.323293 containerd[1440]: time="2024-11-12T17:50:02.323237584Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-4zm26,Uid:fba3be93-ad3e-488a-aa06-a948a70906e4,Namespace:kube-system,Attempt:0,} returns sandbox id \"be3c0324aef3c91db4e2f4bc2fe94487aa1eeea9eb13f8bc6eaf5070a2d43286\"" Nov 12 17:50:02.325827 kubelet[2561]: E1112 17:50:02.323878 2561 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 17:50:03.061856 kubelet[2561]: E1112 17:50:03.061265 2561 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 17:50:03.070023 kubelet[2561]: I1112 17:50:03.069983 2561 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-wg9p2" podStartSLOduration=2.069949766 podStartE2EDuration="2.069949766s" podCreationTimestamp="2024-11-12 17:50:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-11-12 17:50:03.069597963 +0000 UTC m=+16.153224159" watchObservedRunningTime="2024-11-12 17:50:03.069949766 +0000 UTC m=+16.153575962" Nov 12 17:50:11.920348 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount887395982.mount: Deactivated successfully. 
Nov 12 17:50:13.857711 containerd[1440]: time="2024-11-12T17:50:13.857445684Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 17:50:13.858199 containerd[1440]: time="2024-11-12T17:50:13.857898026Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157651522" Nov 12 17:50:13.859202 containerd[1440]: time="2024-11-12T17:50:13.859133423Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 17:50:13.861228 containerd[1440]: time="2024-11-12T17:50:13.860969154Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 11.637565927s" Nov 12 17:50:13.861228 containerd[1440]: time="2024-11-12T17:50:13.861016245Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Nov 12 17:50:13.863871 containerd[1440]: time="2024-11-12T17:50:13.863839718Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Nov 12 17:50:13.866046 containerd[1440]: time="2024-11-12T17:50:13.865996841Z" level=info msg="CreateContainer within sandbox \"3294a9d2e19b60e711cb29dadb74dd7bf69015b264ebe7975901f16739e611b9\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Nov 12 17:50:13.895169 containerd[1440]: time="2024-11-12T17:50:13.895114049Z" level=info msg="CreateContainer within sandbox \"3294a9d2e19b60e711cb29dadb74dd7bf69015b264ebe7975901f16739e611b9\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"1f96b18d0116120348cf4b0f088cce8186c67d306b5ade5f88df57f3c11a8335\"" Nov 12 17:50:13.895918 containerd[1440]: time="2024-11-12T17:50:13.895812806Z" level=info msg="StartContainer for \"1f96b18d0116120348cf4b0f088cce8186c67d306b5ade5f88df57f3c11a8335\"" Nov 12 17:50:13.922174 systemd[1]: Started cri-containerd-1f96b18d0116120348cf4b0f088cce8186c67d306b5ade5f88df57f3c11a8335.scope - libcontainer container 1f96b18d0116120348cf4b0f088cce8186c67d306b5ade5f88df57f3c11a8335. Nov 12 17:50:13.944083 containerd[1440]: time="2024-11-12T17:50:13.943883143Z" level=info msg="StartContainer for \"1f96b18d0116120348cf4b0f088cce8186c67d306b5ade5f88df57f3c11a8335\" returns successfully" Nov 12 17:50:13.985426 systemd[1]: cri-containerd-1f96b18d0116120348cf4b0f088cce8186c67d306b5ade5f88df57f3c11a8335.scope: Deactivated successfully. Nov 12 17:50:14.015654 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1f96b18d0116120348cf4b0f088cce8186c67d306b5ade5f88df57f3c11a8335-rootfs.mount: Deactivated successfully. 
Nov 12 17:50:14.076502 containerd[1440]: time="2024-11-12T17:50:14.071135827Z" level=info msg="shim disconnected" id=1f96b18d0116120348cf4b0f088cce8186c67d306b5ade5f88df57f3c11a8335 namespace=k8s.io Nov 12 17:50:14.076502 containerd[1440]: time="2024-11-12T17:50:14.076287258Z" level=warning msg="cleaning up after shim disconnected" id=1f96b18d0116120348cf4b0f088cce8186c67d306b5ade5f88df57f3c11a8335 namespace=k8s.io Nov 12 17:50:14.076502 containerd[1440]: time="2024-11-12T17:50:14.076299740Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 12 17:50:14.094739 kubelet[2561]: E1112 17:50:14.094583 2561 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 17:50:14.097514 containerd[1440]: time="2024-11-12T17:50:14.097235894Z" level=info msg="CreateContainer within sandbox \"3294a9d2e19b60e711cb29dadb74dd7bf69015b264ebe7975901f16739e611b9\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Nov 12 17:50:14.107119 containerd[1440]: time="2024-11-12T17:50:14.107002920Z" level=info msg="CreateContainer within sandbox \"3294a9d2e19b60e711cb29dadb74dd7bf69015b264ebe7975901f16739e611b9\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"cdfdcf97fee8542f143b6c4066930770e0c13027e43de6548affb5977879a13a\"" Nov 12 17:50:14.108273 containerd[1440]: time="2024-11-12T17:50:14.107420010Z" level=info msg="StartContainer for \"cdfdcf97fee8542f143b6c4066930770e0c13027e43de6548affb5977879a13a\"" Nov 12 17:50:14.137988 systemd[1]: Started cri-containerd-cdfdcf97fee8542f143b6c4066930770e0c13027e43de6548affb5977879a13a.scope - libcontainer container cdfdcf97fee8542f143b6c4066930770e0c13027e43de6548affb5977879a13a. Nov 12 17:50:14.158671 containerd[1440]: time="2024-11-12T17:50:14.158629170Z" level=info msg="StartContainer for \"cdfdcf97fee8542f143b6c4066930770e0c13027e43de6548affb5977879a13a\" returns successfully" Nov 12 17:50:14.180422 systemd[1]: systemd-sysctl.service: Deactivated successfully. Nov 12 17:50:14.180797 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Nov 12 17:50:14.180875 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Nov 12 17:50:14.189848 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Nov 12 17:50:14.190057 systemd[1]: cri-containerd-cdfdcf97fee8542f143b6c4066930770e0c13027e43de6548affb5977879a13a.scope: Deactivated successfully. Nov 12 17:50:14.213050 containerd[1440]: time="2024-11-12T17:50:14.212946560Z" level=info msg="shim disconnected" id=cdfdcf97fee8542f143b6c4066930770e0c13027e43de6548affb5977879a13a namespace=k8s.io Nov 12 17:50:14.213050 containerd[1440]: time="2024-11-12T17:50:14.213045061Z" level=warning msg="cleaning up after shim disconnected" id=cdfdcf97fee8542f143b6c4066930770e0c13027e43de6548affb5977879a13a namespace=k8s.io Nov 12 17:50:14.213050 containerd[1440]: time="2024-11-12T17:50:14.213055463Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 12 17:50:14.218745 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Nov 12 17:50:15.099639 kubelet[2561]: E1112 17:50:15.099034 2561 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 17:50:15.111543 containerd[1440]: time="2024-11-12T17:50:15.108158215Z" level=info msg="CreateContainer within sandbox \"3294a9d2e19b60e711cb29dadb74dd7bf69015b264ebe7975901f16739e611b9\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Nov 12 17:50:15.129420 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1267391045.mount: Deactivated successfully. Nov 12 17:50:15.135822 containerd[1440]: time="2024-11-12T17:50:15.135749901Z" level=info msg="CreateContainer within sandbox \"3294a9d2e19b60e711cb29dadb74dd7bf69015b264ebe7975901f16739e611b9\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"7b2f6500355c8175b113560fdb94b55fda75e7356ad116308cd97c07cc4b6797\"" Nov 12 17:50:15.139342 containerd[1440]: time="2024-11-12T17:50:15.136605118Z" level=info msg="StartContainer for \"7b2f6500355c8175b113560fdb94b55fda75e7356ad116308cd97c07cc4b6797\"" Nov 12 17:50:15.171977 systemd[1]: Started cri-containerd-7b2f6500355c8175b113560fdb94b55fda75e7356ad116308cd97c07cc4b6797.scope - libcontainer container 7b2f6500355c8175b113560fdb94b55fda75e7356ad116308cd97c07cc4b6797. Nov 12 17:50:15.199612 containerd[1440]: time="2024-11-12T17:50:15.199573105Z" level=info msg="StartContainer for \"7b2f6500355c8175b113560fdb94b55fda75e7356ad116308cd97c07cc4b6797\" returns successfully" Nov 12 17:50:15.219319 systemd[1]: cri-containerd-7b2f6500355c8175b113560fdb94b55fda75e7356ad116308cd97c07cc4b6797.scope: Deactivated successfully. Nov 12 17:50:15.250264 containerd[1440]: time="2024-11-12T17:50:15.250209133Z" level=info msg="shim disconnected" id=7b2f6500355c8175b113560fdb94b55fda75e7356ad116308cd97c07cc4b6797 namespace=k8s.io Nov 12 17:50:15.250264 containerd[1440]: time="2024-11-12T17:50:15.250258623Z" level=warning msg="cleaning up after shim disconnected" id=7b2f6500355c8175b113560fdb94b55fda75e7356ad116308cd97c07cc4b6797 namespace=k8s.io Nov 12 17:50:15.250264 containerd[1440]: time="2024-11-12T17:50:15.250267945Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 12 17:50:15.893703 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7b2f6500355c8175b113560fdb94b55fda75e7356ad116308cd97c07cc4b6797-rootfs.mount: Deactivated successfully. 
Nov 12 17:50:16.103810 kubelet[2561]: E1112 17:50:16.102980 2561 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 17:50:16.106822 containerd[1440]: time="2024-11-12T17:50:16.106144398Z" level=info msg="CreateContainer within sandbox \"3294a9d2e19b60e711cb29dadb74dd7bf69015b264ebe7975901f16739e611b9\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Nov 12 17:50:16.121107 containerd[1440]: time="2024-11-12T17:50:16.120992487Z" level=info msg="CreateContainer within sandbox \"3294a9d2e19b60e711cb29dadb74dd7bf69015b264ebe7975901f16739e611b9\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"0468fc920da5fa6bd3908a6a9c4284bae9ff0d20d747b09fbb340c8ed48e9fd3\"" Nov 12 17:50:16.122406 containerd[1440]: time="2024-11-12T17:50:16.121580805Z" level=info msg="StartContainer for \"0468fc920da5fa6bd3908a6a9c4284bae9ff0d20d747b09fbb340c8ed48e9fd3\"" Nov 12 17:50:16.133094 systemd[1]: Started sshd@7-10.0.0.80:22-10.0.0.1:41112.service - OpenSSH per-connection server daemon (10.0.0.1:41112). Nov 12 17:50:16.157968 systemd[1]: Started cri-containerd-0468fc920da5fa6bd3908a6a9c4284bae9ff0d20d747b09fbb340c8ed48e9fd3.scope - libcontainer container 0468fc920da5fa6bd3908a6a9c4284bae9ff0d20d747b09fbb340c8ed48e9fd3. Nov 12 17:50:16.170275 sshd[3163]: Accepted publickey for core from 10.0.0.1 port 41112 ssh2: RSA SHA256:0/Njp3Vk1MHv0WcCO9/UA+beq4MlL3BRl9mBP4xwGAg Nov 12 17:50:16.172764 sshd[3163]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 17:50:16.179093 systemd-logind[1426]: New session 8 of user core. Nov 12 17:50:16.179403 systemd[1]: cri-containerd-0468fc920da5fa6bd3908a6a9c4284bae9ff0d20d747b09fbb340c8ed48e9fd3.scope: Deactivated successfully. Nov 12 17:50:16.187036 containerd[1440]: time="2024-11-12T17:50:16.183138553Z" level=info msg="StartContainer for \"0468fc920da5fa6bd3908a6a9c4284bae9ff0d20d747b09fbb340c8ed48e9fd3\" returns successfully" Nov 12 17:50:16.185616 systemd[1]: Started session-8.scope - Session 8 of User core. Nov 12 17:50:16.266404 containerd[1440]: time="2024-11-12T17:50:16.266345070Z" level=info msg="shim disconnected" id=0468fc920da5fa6bd3908a6a9c4284bae9ff0d20d747b09fbb340c8ed48e9fd3 namespace=k8s.io Nov 12 17:50:16.266404 containerd[1440]: time="2024-11-12T17:50:16.266395921Z" level=warning msg="cleaning up after shim disconnected" id=0468fc920da5fa6bd3908a6a9c4284bae9ff0d20d747b09fbb340c8ed48e9fd3 namespace=k8s.io Nov 12 17:50:16.266404 containerd[1440]: time="2024-11-12T17:50:16.266404162Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 12 17:50:16.322389 sshd[3163]: pam_unix(sshd:session): session closed for user core Nov 12 17:50:16.325853 systemd[1]: sshd@7-10.0.0.80:22-10.0.0.1:41112.service: Deactivated successfully. Nov 12 17:50:16.327561 systemd[1]: session-8.scope: Deactivated successfully. Nov 12 17:50:16.328191 systemd-logind[1426]: Session 8 logged out. Waiting for processes to exit. Nov 12 17:50:16.329063 systemd-logind[1426]: Removed session 8. 
Nov 12 17:50:16.435225 containerd[1440]: time="2024-11-12T17:50:16.435089451Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 17:50:16.436814 containerd[1440]: time="2024-11-12T17:50:16.436751704Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17138358" Nov 12 17:50:16.437469 containerd[1440]: time="2024-11-12T17:50:16.437433800Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 17:50:16.439458 containerd[1440]: time="2024-11-12T17:50:16.439413916Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 2.575295977s" Nov 12 17:50:16.439507 containerd[1440]: time="2024-11-12T17:50:16.439455444Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Nov 12 17:50:16.441398 containerd[1440]: time="2024-11-12T17:50:16.441367147Z" level=info msg="CreateContainer within sandbox \"be3c0324aef3c91db4e2f4bc2fe94487aa1eeea9eb13f8bc6eaf5070a2d43286\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Nov 12 17:50:16.452496 containerd[1440]: time="2024-11-12T17:50:16.452444201Z" level=info msg="CreateContainer within sandbox \"be3c0324aef3c91db4e2f4bc2fe94487aa1eeea9eb13f8bc6eaf5070a2d43286\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"39ee545b632baf21322975139796766a792b8c0e83641ec7d06cb9876285b44b\"" Nov 12 17:50:16.454057 containerd[1440]: time="2024-11-12T17:50:16.453087330Z" level=info msg="StartContainer for \"39ee545b632baf21322975139796766a792b8c0e83641ec7d06cb9876285b44b\"" Nov 12 17:50:16.478939 systemd[1]: Started cri-containerd-39ee545b632baf21322975139796766a792b8c0e83641ec7d06cb9876285b44b.scope - libcontainer container 39ee545b632baf21322975139796766a792b8c0e83641ec7d06cb9876285b44b. Nov 12 17:50:16.498824 containerd[1440]: time="2024-11-12T17:50:16.498763503Z" level=info msg="StartContainer for \"39ee545b632baf21322975139796766a792b8c0e83641ec7d06cb9876285b44b\" returns successfully" Nov 12 17:50:16.894651 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0468fc920da5fa6bd3908a6a9c4284bae9ff0d20d747b09fbb340c8ed48e9fd3-rootfs.mount: Deactivated successfully. 
Nov 12 17:50:17.108455 kubelet[2561]: E1112 17:50:17.108412 2561 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 17:50:17.113051 kubelet[2561]: E1112 17:50:17.112448 2561 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 17:50:17.114810 containerd[1440]: time="2024-11-12T17:50:17.114288374Z" level=info msg="CreateContainer within sandbox \"3294a9d2e19b60e711cb29dadb74dd7bf69015b264ebe7975901f16739e611b9\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Nov 12 17:50:17.142794 containerd[1440]: time="2024-11-12T17:50:17.142719617Z" level=info msg="CreateContainer within sandbox \"3294a9d2e19b60e711cb29dadb74dd7bf69015b264ebe7975901f16739e611b9\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"df70229d0899bc709ddce92f3f569fe3a26ad94e328de4e3257d1987821d6489\"" Nov 12 17:50:17.144044 containerd[1440]: time="2024-11-12T17:50:17.144008786Z" level=info msg="StartContainer for \"df70229d0899bc709ddce92f3f569fe3a26ad94e328de4e3257d1987821d6489\"" Nov 12 17:50:17.146039 kubelet[2561]: I1112 17:50:17.145928 2561 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-5cc964979-4zm26" podStartSLOduration=2.031386234 podStartE2EDuration="16.145887668s" podCreationTimestamp="2024-11-12 17:50:01 +0000 UTC" firstStartedPulling="2024-11-12 17:50:02.325177655 +0000 UTC m=+15.408803851" lastFinishedPulling="2024-11-12 17:50:16.439679129 +0000 UTC m=+29.523305285" observedRunningTime="2024-11-12 17:50:17.145673747 +0000 UTC m=+30.229299943" watchObservedRunningTime="2024-11-12 17:50:17.145887668 +0000 UTC m=+30.229513864" Nov 12 17:50:17.180968 systemd[1]: Started cri-containerd-df70229d0899bc709ddce92f3f569fe3a26ad94e328de4e3257d1987821d6489.scope - libcontainer container df70229d0899bc709ddce92f3f569fe3a26ad94e328de4e3257d1987821d6489. Nov 12 17:50:17.212426 containerd[1440]: time="2024-11-12T17:50:17.212382812Z" level=info msg="StartContainer for \"df70229d0899bc709ddce92f3f569fe3a26ad94e328de4e3257d1987821d6489\" returns successfully" Nov 12 17:50:17.344873 kubelet[2561]: I1112 17:50:17.343347 2561 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Nov 12 17:50:17.375845 kubelet[2561]: I1112 17:50:17.375448 2561 topology_manager.go:215] "Topology Admit Handler" podUID="00b213e1-c07e-497d-9a4a-c4bf9c9cbd6a" podNamespace="kube-system" podName="coredns-76f75df574-cdj9w" Nov 12 17:50:17.379667 kubelet[2561]: I1112 17:50:17.377989 2561 topology_manager.go:215] "Topology Admit Handler" podUID="efc82ff2-3fd1-4338-b500-c27912917aa6" podNamespace="kube-system" podName="coredns-76f75df574-vwr4h" Nov 12 17:50:17.389057 kubelet[2561]: I1112 17:50:17.388975 2561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/00b213e1-c07e-497d-9a4a-c4bf9c9cbd6a-config-volume\") pod \"coredns-76f75df574-cdj9w\" (UID: \"00b213e1-c07e-497d-9a4a-c4bf9c9cbd6a\") " pod="kube-system/coredns-76f75df574-cdj9w" Nov 12 17:50:17.389044 systemd[1]: Created slice kubepods-burstable-pod00b213e1_c07e_497d_9a4a_c4bf9c9cbd6a.slice - libcontainer container kubepods-burstable-pod00b213e1_c07e_497d_9a4a_c4bf9c9cbd6a.slice. 
Nov 12 17:50:17.389370 kubelet[2561]: I1112 17:50:17.389339 2561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mfnf4\" (UniqueName: \"kubernetes.io/projected/00b213e1-c07e-497d-9a4a-c4bf9c9cbd6a-kube-api-access-mfnf4\") pod \"coredns-76f75df574-cdj9w\" (UID: \"00b213e1-c07e-497d-9a4a-c4bf9c9cbd6a\") " pod="kube-system/coredns-76f75df574-cdj9w" Nov 12 17:50:17.389413 kubelet[2561]: I1112 17:50:17.389387 2561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/efc82ff2-3fd1-4338-b500-c27912917aa6-config-volume\") pod \"coredns-76f75df574-vwr4h\" (UID: \"efc82ff2-3fd1-4338-b500-c27912917aa6\") " pod="kube-system/coredns-76f75df574-vwr4h" Nov 12 17:50:17.389438 kubelet[2561]: I1112 17:50:17.389414 2561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-stg8l\" (UniqueName: \"kubernetes.io/projected/efc82ff2-3fd1-4338-b500-c27912917aa6-kube-api-access-stg8l\") pod \"coredns-76f75df574-vwr4h\" (UID: \"efc82ff2-3fd1-4338-b500-c27912917aa6\") " pod="kube-system/coredns-76f75df574-vwr4h" Nov 12 17:50:17.400145 systemd[1]: Created slice kubepods-burstable-podefc82ff2_3fd1_4338_b500_c27912917aa6.slice - libcontainer container kubepods-burstable-podefc82ff2_3fd1_4338_b500_c27912917aa6.slice. Nov 12 17:50:17.696420 kubelet[2561]: E1112 17:50:17.696110 2561 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 17:50:17.697063 containerd[1440]: time="2024-11-12T17:50:17.697023680Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-cdj9w,Uid:00b213e1-c07e-497d-9a4a-c4bf9c9cbd6a,Namespace:kube-system,Attempt:0,}" Nov 12 17:50:17.706647 kubelet[2561]: E1112 17:50:17.706580 2561 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 17:50:17.707333 containerd[1440]: time="2024-11-12T17:50:17.707246531Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-vwr4h,Uid:efc82ff2-3fd1-4338-b500-c27912917aa6,Namespace:kube-system,Attempt:0,}" Nov 12 17:50:18.117368 kubelet[2561]: E1112 17:50:18.117135 2561 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 17:50:18.117368 kubelet[2561]: E1112 17:50:18.117204 2561 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 17:50:18.132832 kubelet[2561]: I1112 17:50:18.130726 2561 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-v4czp" podStartSLOduration=5.490708267 podStartE2EDuration="17.130686732s" podCreationTimestamp="2024-11-12 17:50:01 +0000 UTC" firstStartedPulling="2024-11-12 17:50:02.222570844 +0000 UTC m=+15.306197040" lastFinishedPulling="2024-11-12 17:50:13.862549268 +0000 UTC m=+26.946175505" observedRunningTime="2024-11-12 17:50:18.129737195 +0000 UTC m=+31.213363471" watchObservedRunningTime="2024-11-12 17:50:18.130686732 +0000 UTC m=+31.214312928" Nov 12 17:50:19.118995 kubelet[2561]: E1112 17:50:19.118963 2561 dns.go:153] "Nameserver limits 
exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 17:50:20.120431 kubelet[2561]: E1112 17:50:20.120394 2561 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 17:50:20.237205 systemd-networkd[1384]: cilium_host: Link UP Nov 12 17:50:20.237346 systemd-networkd[1384]: cilium_net: Link UP Nov 12 17:50:20.237483 systemd-networkd[1384]: cilium_net: Gained carrier Nov 12 17:50:20.237595 systemd-networkd[1384]: cilium_host: Gained carrier Nov 12 17:50:20.237694 systemd-networkd[1384]: cilium_net: Gained IPv6LL Nov 12 17:50:20.237843 systemd-networkd[1384]: cilium_host: Gained IPv6LL Nov 12 17:50:20.326100 systemd-networkd[1384]: cilium_vxlan: Link UP Nov 12 17:50:20.326106 systemd-networkd[1384]: cilium_vxlan: Gained carrier Nov 12 17:50:20.623821 kernel: NET: Registered PF_ALG protocol family Nov 12 17:50:21.237254 systemd-networkd[1384]: lxc_health: Link UP Nov 12 17:50:21.249870 systemd-networkd[1384]: lxc_health: Gained carrier Nov 12 17:50:21.343057 systemd[1]: Started sshd@8-10.0.0.80:22-10.0.0.1:41128.service - OpenSSH per-connection server daemon (10.0.0.1:41128). Nov 12 17:50:21.374410 sshd[3770]: Accepted publickey for core from 10.0.0.1 port 41128 ssh2: RSA SHA256:0/Njp3Vk1MHv0WcCO9/UA+beq4MlL3BRl9mBP4xwGAg Nov 12 17:50:21.377659 sshd[3770]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 17:50:21.389665 systemd-logind[1426]: New session 9 of user core. Nov 12 17:50:21.395978 systemd[1]: Started session-9.scope - Session 9 of User core. Nov 12 17:50:21.483909 systemd-networkd[1384]: cilium_vxlan: Gained IPv6LL Nov 12 17:50:21.523316 sshd[3770]: pam_unix(sshd:session): session closed for user core Nov 12 17:50:21.526125 systemd[1]: sshd@8-10.0.0.80:22-10.0.0.1:41128.service: Deactivated successfully. Nov 12 17:50:21.528032 systemd[1]: session-9.scope: Deactivated successfully. Nov 12 17:50:21.529591 systemd-logind[1426]: Session 9 logged out. Waiting for processes to exit. Nov 12 17:50:21.532120 systemd-logind[1426]: Removed session 9. Nov 12 17:50:21.827898 systemd-networkd[1384]: lxc11d94495fba0: Link UP Nov 12 17:50:21.841958 systemd-networkd[1384]: lxcc97a28991523: Link UP Nov 12 17:50:21.852813 kernel: eth0: renamed from tmp9c14c Nov 12 17:50:21.863864 kernel: eth0: renamed from tmp1b6a8 Nov 12 17:50:21.868061 systemd-networkd[1384]: lxcc97a28991523: Gained carrier Nov 12 17:50:21.868249 systemd-networkd[1384]: lxc11d94495fba0: Gained carrier Nov 12 17:50:22.163576 kubelet[2561]: E1112 17:50:22.163290 2561 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 17:50:22.380003 systemd-networkd[1384]: lxc_health: Gained IPv6LL Nov 12 17:50:23.125367 kubelet[2561]: E1112 17:50:23.125337 2561 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 17:50:23.404901 systemd-networkd[1384]: lxcc97a28991523: Gained IPv6LL Nov 12 17:50:23.595991 systemd-networkd[1384]: lxc11d94495fba0: Gained IPv6LL Nov 12 17:50:25.381580 containerd[1440]: time="2024-11-12T17:50:25.381466682Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 17:50:25.381580 containerd[1440]: time="2024-11-12T17:50:25.381536133Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 17:50:25.381580 containerd[1440]: time="2024-11-12T17:50:25.381547614Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 17:50:25.382161 containerd[1440]: time="2024-11-12T17:50:25.381631067Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 17:50:25.390875 containerd[1440]: time="2024-11-12T17:50:25.390747474Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 17:50:25.391417 containerd[1440]: time="2024-11-12T17:50:25.391177899Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 17:50:25.391417 containerd[1440]: time="2024-11-12T17:50:25.391195621Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 17:50:25.391417 containerd[1440]: time="2024-11-12T17:50:25.391289155Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 17:50:25.417062 systemd[1]: Started cri-containerd-1b6a85eb54f2ff4a67bdef581f3833ce99baff9791c6c4fc52a7c03bb9ddfd39.scope - libcontainer container 1b6a85eb54f2ff4a67bdef581f3833ce99baff9791c6c4fc52a7c03bb9ddfd39. Nov 12 17:50:25.419154 systemd[1]: Started cri-containerd-9c14cf1e84659366379b1c09f714da7fb5e8649b97eba6ab54887177e322a275.scope - libcontainer container 9c14cf1e84659366379b1c09f714da7fb5e8649b97eba6ab54887177e322a275. 
Nov 12 17:50:25.432815 systemd-resolved[1316]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 12 17:50:25.436947 systemd-resolved[1316]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 12 17:50:25.455860 containerd[1440]: time="2024-11-12T17:50:25.455820352Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-vwr4h,Uid:efc82ff2-3fd1-4338-b500-c27912917aa6,Namespace:kube-system,Attempt:0,} returns sandbox id \"1b6a85eb54f2ff4a67bdef581f3833ce99baff9791c6c4fc52a7c03bb9ddfd39\"" Nov 12 17:50:25.457082 containerd[1440]: time="2024-11-12T17:50:25.457039535Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-cdj9w,Uid:00b213e1-c07e-497d-9a4a-c4bf9c9cbd6a,Namespace:kube-system,Attempt:0,} returns sandbox id \"9c14cf1e84659366379b1c09f714da7fb5e8649b97eba6ab54887177e322a275\"" Nov 12 17:50:25.457381 kubelet[2561]: E1112 17:50:25.457186 2561 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 17:50:25.458990 kubelet[2561]: E1112 17:50:25.458876 2561 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 17:50:25.462303 containerd[1440]: time="2024-11-12T17:50:25.462242275Z" level=info msg="CreateContainer within sandbox \"1b6a85eb54f2ff4a67bdef581f3833ce99baff9791c6c4fc52a7c03bb9ddfd39\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 12 17:50:25.462798 containerd[1440]: time="2024-11-12T17:50:25.462473829Z" level=info msg="CreateContainer within sandbox \"9c14cf1e84659366379b1c09f714da7fb5e8649b97eba6ab54887177e322a275\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 12 17:50:25.478283 containerd[1440]: time="2024-11-12T17:50:25.478237873Z" level=info msg="CreateContainer within sandbox \"1b6a85eb54f2ff4a67bdef581f3833ce99baff9791c6c4fc52a7c03bb9ddfd39\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"5d523d9c760a9bc547e16ae0851c45a1465ecc1757c7dc66995de091e5e12461\"" Nov 12 17:50:25.478689 containerd[1440]: time="2024-11-12T17:50:25.478666298Z" level=info msg="StartContainer for \"5d523d9c760a9bc547e16ae0851c45a1465ecc1757c7dc66995de091e5e12461\"" Nov 12 17:50:25.485651 containerd[1440]: time="2024-11-12T17:50:25.485579214Z" level=info msg="CreateContainer within sandbox \"9c14cf1e84659366379b1c09f714da7fb5e8649b97eba6ab54887177e322a275\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"def64a77db985064d62864dd0937a93508a7a8dc5b8828eb2a0cf6a7c1a4355d\"" Nov 12 17:50:25.486685 containerd[1440]: time="2024-11-12T17:50:25.486322366Z" level=info msg="StartContainer for \"def64a77db985064d62864dd0937a93508a7a8dc5b8828eb2a0cf6a7c1a4355d\"" Nov 12 17:50:25.507964 systemd[1]: Started cri-containerd-5d523d9c760a9bc547e16ae0851c45a1465ecc1757c7dc66995de091e5e12461.scope - libcontainer container 5d523d9c760a9bc547e16ae0851c45a1465ecc1757c7dc66995de091e5e12461. Nov 12 17:50:25.511449 systemd[1]: Started cri-containerd-def64a77db985064d62864dd0937a93508a7a8dc5b8828eb2a0cf6a7c1a4355d.scope - libcontainer container def64a77db985064d62864dd0937a93508a7a8dc5b8828eb2a0cf6a7c1a4355d. 
Nov 12 17:50:25.539717 containerd[1440]: time="2024-11-12T17:50:25.539597514Z" level=info msg="StartContainer for \"5d523d9c760a9bc547e16ae0851c45a1465ecc1757c7dc66995de091e5e12461\" returns successfully" Nov 12 17:50:25.548464 containerd[1440]: time="2024-11-12T17:50:25.548200884Z" level=info msg="StartContainer for \"def64a77db985064d62864dd0937a93508a7a8dc5b8828eb2a0cf6a7c1a4355d\" returns successfully" Nov 12 17:50:26.133386 kubelet[2561]: E1112 17:50:26.132970 2561 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 17:50:26.135942 kubelet[2561]: E1112 17:50:26.135669 2561 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 17:50:26.154870 kubelet[2561]: I1112 17:50:26.153975 2561 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-cdj9w" podStartSLOduration=25.153933111 podStartE2EDuration="25.153933111s" podCreationTimestamp="2024-11-12 17:50:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-11-12 17:50:26.143382931 +0000 UTC m=+39.227009127" watchObservedRunningTime="2024-11-12 17:50:26.153933111 +0000 UTC m=+39.237559267" Nov 12 17:50:26.154870 kubelet[2561]: I1112 17:50:26.154060 2561 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-vwr4h" podStartSLOduration=25.154045327 podStartE2EDuration="25.154045327s" podCreationTimestamp="2024-11-12 17:50:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-11-12 17:50:26.153001255 +0000 UTC m=+39.236627451" watchObservedRunningTime="2024-11-12 17:50:26.154045327 +0000 UTC m=+39.237671523" Nov 12 17:50:26.537630 systemd[1]: Started sshd@9-10.0.0.80:22-10.0.0.1:37450.service - OpenSSH per-connection server daemon (10.0.0.1:37450). Nov 12 17:50:26.573418 sshd[3987]: Accepted publickey for core from 10.0.0.1 port 37450 ssh2: RSA SHA256:0/Njp3Vk1MHv0WcCO9/UA+beq4MlL3BRl9mBP4xwGAg Nov 12 17:50:26.575086 sshd[3987]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 17:50:26.579110 systemd-logind[1426]: New session 10 of user core. Nov 12 17:50:26.589967 systemd[1]: Started session-10.scope - Session 10 of User core. Nov 12 17:50:26.753953 sshd[3987]: pam_unix(sshd:session): session closed for user core Nov 12 17:50:26.756450 systemd[1]: session-10.scope: Deactivated successfully. Nov 12 17:50:26.757754 systemd-logind[1426]: Session 10 logged out. Waiting for processes to exit. Nov 12 17:50:26.757925 systemd[1]: sshd@9-10.0.0.80:22-10.0.0.1:37450.service: Deactivated successfully. Nov 12 17:50:26.761280 systemd-logind[1426]: Removed session 10. 
Nov 12 17:50:27.137149 kubelet[2561]: E1112 17:50:27.137111 2561 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 17:50:27.137946 kubelet[2561]: E1112 17:50:27.137922 2561 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 17:50:28.140331 kubelet[2561]: E1112 17:50:28.140259 2561 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 17:50:28.141744 kubelet[2561]: E1112 17:50:28.141662 2561 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 17:50:31.771508 systemd[1]: Started sshd@10-10.0.0.80:22-10.0.0.1:37466.service - OpenSSH per-connection server daemon (10.0.0.1:37466). Nov 12 17:50:31.808674 sshd[4004]: Accepted publickey for core from 10.0.0.1 port 37466 ssh2: RSA SHA256:0/Njp3Vk1MHv0WcCO9/UA+beq4MlL3BRl9mBP4xwGAg Nov 12 17:50:31.810048 sshd[4004]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 17:50:31.813821 systemd-logind[1426]: New session 11 of user core. Nov 12 17:50:31.825972 systemd[1]: Started session-11.scope - Session 11 of User core. Nov 12 17:50:31.942198 sshd[4004]: pam_unix(sshd:session): session closed for user core Nov 12 17:50:31.952407 systemd[1]: sshd@10-10.0.0.80:22-10.0.0.1:37466.service: Deactivated successfully. Nov 12 17:50:31.956171 systemd[1]: session-11.scope: Deactivated successfully. Nov 12 17:50:31.956830 systemd-logind[1426]: Session 11 logged out. Waiting for processes to exit. Nov 12 17:50:31.963106 systemd[1]: Started sshd@11-10.0.0.80:22-10.0.0.1:37482.service - OpenSSH per-connection server daemon (10.0.0.1:37482). Nov 12 17:50:31.965925 systemd-logind[1426]: Removed session 11. Nov 12 17:50:31.995047 sshd[4019]: Accepted publickey for core from 10.0.0.1 port 37482 ssh2: RSA SHA256:0/Njp3Vk1MHv0WcCO9/UA+beq4MlL3BRl9mBP4xwGAg Nov 12 17:50:31.996332 sshd[4019]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 17:50:32.000040 systemd-logind[1426]: New session 12 of user core. Nov 12 17:50:32.006015 systemd[1]: Started session-12.scope - Session 12 of User core. Nov 12 17:50:32.158748 sshd[4019]: pam_unix(sshd:session): session closed for user core Nov 12 17:50:32.170659 systemd[1]: sshd@11-10.0.0.80:22-10.0.0.1:37482.service: Deactivated successfully. Nov 12 17:50:32.177072 systemd[1]: session-12.scope: Deactivated successfully. Nov 12 17:50:32.179446 systemd-logind[1426]: Session 12 logged out. Waiting for processes to exit. Nov 12 17:50:32.189504 systemd[1]: Started sshd@12-10.0.0.80:22-10.0.0.1:37496.service - OpenSSH per-connection server daemon (10.0.0.1:37496). Nov 12 17:50:32.192125 systemd-logind[1426]: Removed session 12. Nov 12 17:50:32.225566 sshd[4032]: Accepted publickey for core from 10.0.0.1 port 37496 ssh2: RSA SHA256:0/Njp3Vk1MHv0WcCO9/UA+beq4MlL3BRl9mBP4xwGAg Nov 12 17:50:32.226955 sshd[4032]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 17:50:32.231382 systemd-logind[1426]: New session 13 of user core. Nov 12 17:50:32.242946 systemd[1]: Started session-13.scope - Session 13 of User core. 
Nov 12 17:50:32.354879 sshd[4032]: pam_unix(sshd:session): session closed for user core Nov 12 17:50:32.357382 systemd[1]: sshd@12-10.0.0.80:22-10.0.0.1:37496.service: Deactivated successfully. Nov 12 17:50:32.359249 systemd[1]: session-13.scope: Deactivated successfully. Nov 12 17:50:32.360626 systemd-logind[1426]: Session 13 logged out. Waiting for processes to exit. Nov 12 17:50:32.361760 systemd-logind[1426]: Removed session 13. Nov 12 17:50:37.367444 systemd[1]: Started sshd@13-10.0.0.80:22-10.0.0.1:42846.service - OpenSSH per-connection server daemon (10.0.0.1:42846). Nov 12 17:50:37.407305 sshd[4048]: Accepted publickey for core from 10.0.0.1 port 42846 ssh2: RSA SHA256:0/Njp3Vk1MHv0WcCO9/UA+beq4MlL3BRl9mBP4xwGAg Nov 12 17:50:37.409003 sshd[4048]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 17:50:37.415741 systemd-logind[1426]: New session 14 of user core. Nov 12 17:50:37.427925 systemd[1]: Started session-14.scope - Session 14 of User core. Nov 12 17:50:37.542661 sshd[4048]: pam_unix(sshd:session): session closed for user core Nov 12 17:50:37.545775 systemd[1]: sshd@13-10.0.0.80:22-10.0.0.1:42846.service: Deactivated successfully. Nov 12 17:50:37.547737 systemd[1]: session-14.scope: Deactivated successfully. Nov 12 17:50:37.548487 systemd-logind[1426]: Session 14 logged out. Waiting for processes to exit. Nov 12 17:50:37.549428 systemd-logind[1426]: Removed session 14. Nov 12 17:50:42.553356 systemd[1]: Started sshd@14-10.0.0.80:22-10.0.0.1:36186.service - OpenSSH per-connection server daemon (10.0.0.1:36186). Nov 12 17:50:42.585417 sshd[4062]: Accepted publickey for core from 10.0.0.1 port 36186 ssh2: RSA SHA256:0/Njp3Vk1MHv0WcCO9/UA+beq4MlL3BRl9mBP4xwGAg Nov 12 17:50:42.586565 sshd[4062]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 17:50:42.590100 systemd-logind[1426]: New session 15 of user core. Nov 12 17:50:42.600948 systemd[1]: Started session-15.scope - Session 15 of User core. Nov 12 17:50:42.711066 sshd[4062]: pam_unix(sshd:session): session closed for user core Nov 12 17:50:42.721438 systemd[1]: sshd@14-10.0.0.80:22-10.0.0.1:36186.service: Deactivated successfully. Nov 12 17:50:42.725883 systemd[1]: session-15.scope: Deactivated successfully. Nov 12 17:50:42.727844 systemd-logind[1426]: Session 15 logged out. Waiting for processes to exit. Nov 12 17:50:42.738027 systemd[1]: Started sshd@15-10.0.0.80:22-10.0.0.1:36190.service - OpenSSH per-connection server daemon (10.0.0.1:36190). Nov 12 17:50:42.739858 systemd-logind[1426]: Removed session 15. Nov 12 17:50:42.767674 sshd[4077]: Accepted publickey for core from 10.0.0.1 port 36190 ssh2: RSA SHA256:0/Njp3Vk1MHv0WcCO9/UA+beq4MlL3BRl9mBP4xwGAg Nov 12 17:50:42.768332 sshd[4077]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 17:50:42.772434 systemd-logind[1426]: New session 16 of user core. Nov 12 17:50:42.778920 systemd[1]: Started session-16.scope - Session 16 of User core. Nov 12 17:50:43.040012 sshd[4077]: pam_unix(sshd:session): session closed for user core Nov 12 17:50:43.046622 systemd[1]: sshd@15-10.0.0.80:22-10.0.0.1:36190.service: Deactivated successfully. Nov 12 17:50:43.048324 systemd[1]: session-16.scope: Deactivated successfully. Nov 12 17:50:43.049715 systemd-logind[1426]: Session 16 logged out. Waiting for processes to exit. Nov 12 17:50:43.050981 systemd[1]: Started sshd@16-10.0.0.80:22-10.0.0.1:36194.service - OpenSSH per-connection server daemon (10.0.0.1:36194). 
Nov 12 17:50:43.052435 systemd-logind[1426]: Removed session 16. Nov 12 17:50:43.090386 sshd[4089]: Accepted publickey for core from 10.0.0.1 port 36194 ssh2: RSA SHA256:0/Njp3Vk1MHv0WcCO9/UA+beq4MlL3BRl9mBP4xwGAg Nov 12 17:50:43.091817 sshd[4089]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 17:50:43.096584 systemd-logind[1426]: New session 17 of user core. Nov 12 17:50:43.101941 systemd[1]: Started session-17.scope - Session 17 of User core. Nov 12 17:50:44.308575 sshd[4089]: pam_unix(sshd:session): session closed for user core Nov 12 17:50:44.321040 systemd[1]: sshd@16-10.0.0.80:22-10.0.0.1:36194.service: Deactivated successfully. Nov 12 17:50:44.323302 systemd[1]: session-17.scope: Deactivated successfully. Nov 12 17:50:44.325413 systemd-logind[1426]: Session 17 logged out. Waiting for processes to exit. Nov 12 17:50:44.333137 systemd[1]: Started sshd@17-10.0.0.80:22-10.0.0.1:36208.service - OpenSSH per-connection server daemon (10.0.0.1:36208). Nov 12 17:50:44.337235 systemd-logind[1426]: Removed session 17. Nov 12 17:50:44.363522 sshd[4110]: Accepted publickey for core from 10.0.0.1 port 36208 ssh2: RSA SHA256:0/Njp3Vk1MHv0WcCO9/UA+beq4MlL3BRl9mBP4xwGAg Nov 12 17:50:44.364670 sshd[4110]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 17:50:44.368944 systemd-logind[1426]: New session 18 of user core. Nov 12 17:50:44.375916 systemd[1]: Started session-18.scope - Session 18 of User core. Nov 12 17:50:44.590469 sshd[4110]: pam_unix(sshd:session): session closed for user core Nov 12 17:50:44.602699 systemd[1]: sshd@17-10.0.0.80:22-10.0.0.1:36208.service: Deactivated successfully. Nov 12 17:50:44.605215 systemd[1]: session-18.scope: Deactivated successfully. Nov 12 17:50:44.607006 systemd-logind[1426]: Session 18 logged out. Waiting for processes to exit. Nov 12 17:50:44.617248 systemd[1]: Started sshd@18-10.0.0.80:22-10.0.0.1:36216.service - OpenSSH per-connection server daemon (10.0.0.1:36216). Nov 12 17:50:44.618242 systemd-logind[1426]: Removed session 18. Nov 12 17:50:44.648896 sshd[4123]: Accepted publickey for core from 10.0.0.1 port 36216 ssh2: RSA SHA256:0/Njp3Vk1MHv0WcCO9/UA+beq4MlL3BRl9mBP4xwGAg Nov 12 17:50:44.650356 sshd[4123]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 17:50:44.654309 systemd-logind[1426]: New session 19 of user core. Nov 12 17:50:44.665952 systemd[1]: Started session-19.scope - Session 19 of User core. Nov 12 17:50:44.778636 sshd[4123]: pam_unix(sshd:session): session closed for user core Nov 12 17:50:44.782070 systemd[1]: sshd@18-10.0.0.80:22-10.0.0.1:36216.service: Deactivated successfully. Nov 12 17:50:44.785547 systemd[1]: session-19.scope: Deactivated successfully. Nov 12 17:50:44.786654 systemd-logind[1426]: Session 19 logged out. Waiting for processes to exit. Nov 12 17:50:44.787489 systemd-logind[1426]: Removed session 19. Nov 12 17:50:49.790598 systemd[1]: Started sshd@19-10.0.0.80:22-10.0.0.1:36230.service - OpenSSH per-connection server daemon (10.0.0.1:36230). Nov 12 17:50:49.822268 sshd[4142]: Accepted publickey for core from 10.0.0.1 port 36230 ssh2: RSA SHA256:0/Njp3Vk1MHv0WcCO9/UA+beq4MlL3BRl9mBP4xwGAg Nov 12 17:50:49.823454 sshd[4142]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 17:50:49.827821 systemd-logind[1426]: New session 20 of user core. Nov 12 17:50:49.831949 systemd[1]: Started session-20.scope - Session 20 of User core. 
Nov 12 17:50:49.936623 sshd[4142]: pam_unix(sshd:session): session closed for user core Nov 12 17:50:49.939440 systemd-logind[1426]: Session 20 logged out. Waiting for processes to exit. Nov 12 17:50:49.939615 systemd[1]: sshd@19-10.0.0.80:22-10.0.0.1:36230.service: Deactivated successfully. Nov 12 17:50:49.941222 systemd[1]: session-20.scope: Deactivated successfully. Nov 12 17:50:49.942726 systemd-logind[1426]: Removed session 20. Nov 12 17:50:54.947739 systemd[1]: Started sshd@20-10.0.0.80:22-10.0.0.1:53088.service - OpenSSH per-connection server daemon (10.0.0.1:53088). Nov 12 17:50:54.980087 sshd[4156]: Accepted publickey for core from 10.0.0.1 port 53088 ssh2: RSA SHA256:0/Njp3Vk1MHv0WcCO9/UA+beq4MlL3BRl9mBP4xwGAg Nov 12 17:50:54.981396 sshd[4156]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 17:50:54.985563 systemd-logind[1426]: New session 21 of user core. Nov 12 17:50:54.992945 systemd[1]: Started session-21.scope - Session 21 of User core. Nov 12 17:50:55.096968 sshd[4156]: pam_unix(sshd:session): session closed for user core Nov 12 17:50:55.100849 systemd[1]: sshd@20-10.0.0.80:22-10.0.0.1:53088.service: Deactivated successfully. Nov 12 17:50:55.102503 systemd[1]: session-21.scope: Deactivated successfully. Nov 12 17:50:55.103865 systemd-logind[1426]: Session 21 logged out. Waiting for processes to exit. Nov 12 17:50:55.104684 systemd-logind[1426]: Removed session 21. Nov 12 17:51:00.110658 systemd[1]: Started sshd@21-10.0.0.80:22-10.0.0.1:53104.service - OpenSSH per-connection server daemon (10.0.0.1:53104). Nov 12 17:51:00.143247 sshd[4170]: Accepted publickey for core from 10.0.0.1 port 53104 ssh2: RSA SHA256:0/Njp3Vk1MHv0WcCO9/UA+beq4MlL3BRl9mBP4xwGAg Nov 12 17:51:00.144438 sshd[4170]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 17:51:00.148343 systemd-logind[1426]: New session 22 of user core. Nov 12 17:51:00.157942 systemd[1]: Started session-22.scope - Session 22 of User core. Nov 12 17:51:00.264125 sshd[4170]: pam_unix(sshd:session): session closed for user core Nov 12 17:51:00.281381 systemd[1]: sshd@21-10.0.0.80:22-10.0.0.1:53104.service: Deactivated successfully. Nov 12 17:51:00.282875 systemd[1]: session-22.scope: Deactivated successfully. Nov 12 17:51:00.284159 systemd-logind[1426]: Session 22 logged out. Waiting for processes to exit. Nov 12 17:51:00.293039 systemd[1]: Started sshd@22-10.0.0.80:22-10.0.0.1:53120.service - OpenSSH per-connection server daemon (10.0.0.1:53120). Nov 12 17:51:00.294001 systemd-logind[1426]: Removed session 22. Nov 12 17:51:00.322807 sshd[4184]: Accepted publickey for core from 10.0.0.1 port 53120 ssh2: RSA SHA256:0/Njp3Vk1MHv0WcCO9/UA+beq4MlL3BRl9mBP4xwGAg Nov 12 17:51:00.323608 sshd[4184]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 17:51:00.327058 systemd-logind[1426]: New session 23 of user core. Nov 12 17:51:00.336996 systemd[1]: Started session-23.scope - Session 23 of User core. 
Nov 12 17:51:02.905563 containerd[1440]: time="2024-11-12T17:51:02.905507723Z" level=info msg="StopContainer for \"39ee545b632baf21322975139796766a792b8c0e83641ec7d06cb9876285b44b\" with timeout 30 (s)" Nov 12 17:51:02.908219 containerd[1440]: time="2024-11-12T17:51:02.905937216Z" level=info msg="Stop container \"39ee545b632baf21322975139796766a792b8c0e83641ec7d06cb9876285b44b\" with signal terminated" Nov 12 17:51:02.918977 systemd[1]: cri-containerd-39ee545b632baf21322975139796766a792b8c0e83641ec7d06cb9876285b44b.scope: Deactivated successfully. Nov 12 17:51:02.921643 containerd[1440]: time="2024-11-12T17:51:02.921393331Z" level=info msg="StopContainer for \"df70229d0899bc709ddce92f3f569fe3a26ad94e328de4e3257d1987821d6489\" with timeout 2 (s)" Nov 12 17:51:02.921897 containerd[1440]: time="2024-11-12T17:51:02.921801025Z" level=info msg="Stop container \"df70229d0899bc709ddce92f3f569fe3a26ad94e328de4e3257d1987821d6489\" with signal terminated" Nov 12 17:51:02.923207 containerd[1440]: time="2024-11-12T17:51:02.923150741Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Nov 12 17:51:02.927872 systemd-networkd[1384]: lxc_health: Link DOWN Nov 12 17:51:02.927879 systemd-networkd[1384]: lxc_health: Lost carrier Nov 12 17:51:02.939557 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-39ee545b632baf21322975139796766a792b8c0e83641ec7d06cb9876285b44b-rootfs.mount: Deactivated successfully. Nov 12 17:51:02.943506 systemd[1]: cri-containerd-df70229d0899bc709ddce92f3f569fe3a26ad94e328de4e3257d1987821d6489.scope: Deactivated successfully. Nov 12 17:51:02.944433 systemd[1]: cri-containerd-df70229d0899bc709ddce92f3f569fe3a26ad94e328de4e3257d1987821d6489.scope: Consumed 6.451s CPU time. Nov 12 17:51:02.959317 containerd[1440]: time="2024-11-12T17:51:02.959249405Z" level=info msg="shim disconnected" id=39ee545b632baf21322975139796766a792b8c0e83641ec7d06cb9876285b44b namespace=k8s.io Nov 12 17:51:02.959550 containerd[1440]: time="2024-11-12T17:51:02.959482991Z" level=warning msg="cleaning up after shim disconnected" id=39ee545b632baf21322975139796766a792b8c0e83641ec7d06cb9876285b44b namespace=k8s.io Nov 12 17:51:02.959550 containerd[1440]: time="2024-11-12T17:51:02.959499190Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 12 17:51:02.970930 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-df70229d0899bc709ddce92f3f569fe3a26ad94e328de4e3257d1987821d6489-rootfs.mount: Deactivated successfully. 
Nov 12 17:51:02.974513 containerd[1440]: time="2024-11-12T17:51:02.974379020Z" level=info msg="shim disconnected" id=df70229d0899bc709ddce92f3f569fe3a26ad94e328de4e3257d1987821d6489 namespace=k8s.io Nov 12 17:51:02.974513 containerd[1440]: time="2024-11-12T17:51:02.974516291Z" level=warning msg="cleaning up after shim disconnected" id=df70229d0899bc709ddce92f3f569fe3a26ad94e328de4e3257d1987821d6489 namespace=k8s.io Nov 12 17:51:02.974641 containerd[1440]: time="2024-11-12T17:51:02.974528531Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 12 17:51:03.001859 containerd[1440]: time="2024-11-12T17:51:03.001682994Z" level=info msg="StopContainer for \"df70229d0899bc709ddce92f3f569fe3a26ad94e328de4e3257d1987821d6489\" returns successfully" Nov 12 17:51:03.002451 containerd[1440]: time="2024-11-12T17:51:03.002356234Z" level=info msg="StopPodSandbox for \"3294a9d2e19b60e711cb29dadb74dd7bf69015b264ebe7975901f16739e611b9\"" Nov 12 17:51:03.002451 containerd[1440]: time="2024-11-12T17:51:03.002399952Z" level=info msg="Container to stop \"1f96b18d0116120348cf4b0f088cce8186c67d306b5ade5f88df57f3c11a8335\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Nov 12 17:51:03.002451 containerd[1440]: time="2024-11-12T17:51:03.002422230Z" level=info msg="Container to stop \"cdfdcf97fee8542f143b6c4066930770e0c13027e43de6548affb5977879a13a\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Nov 12 17:51:03.002451 containerd[1440]: time="2024-11-12T17:51:03.002431990Z" level=info msg="Container to stop \"0468fc920da5fa6bd3908a6a9c4284bae9ff0d20d747b09fbb340c8ed48e9fd3\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Nov 12 17:51:03.002451 containerd[1440]: time="2024-11-12T17:51:03.002441789Z" level=info msg="Container to stop \"7b2f6500355c8175b113560fdb94b55fda75e7356ad116308cd97c07cc4b6797\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Nov 12 17:51:03.002451 containerd[1440]: time="2024-11-12T17:51:03.002450429Z" level=info msg="Container to stop \"df70229d0899bc709ddce92f3f569fe3a26ad94e328de4e3257d1987821d6489\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Nov 12 17:51:03.004739 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-3294a9d2e19b60e711cb29dadb74dd7bf69015b264ebe7975901f16739e611b9-shm.mount: Deactivated successfully. Nov 12 17:51:03.008378 systemd[1]: cri-containerd-3294a9d2e19b60e711cb29dadb74dd7bf69015b264ebe7975901f16739e611b9.scope: Deactivated successfully. Nov 12 17:51:03.009142 containerd[1440]: time="2024-11-12T17:51:03.009028364Z" level=info msg="StopContainer for \"39ee545b632baf21322975139796766a792b8c0e83641ec7d06cb9876285b44b\" returns successfully" Nov 12 17:51:03.009635 containerd[1440]: time="2024-11-12T17:51:03.009556173Z" level=info msg="StopPodSandbox for \"be3c0324aef3c91db4e2f4bc2fe94487aa1eeea9eb13f8bc6eaf5070a2d43286\"" Nov 12 17:51:03.009635 containerd[1440]: time="2024-11-12T17:51:03.009597211Z" level=info msg="Container to stop \"39ee545b632baf21322975139796766a792b8c0e83641ec7d06cb9876285b44b\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Nov 12 17:51:03.011672 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-be3c0324aef3c91db4e2f4bc2fe94487aa1eeea9eb13f8bc6eaf5070a2d43286-shm.mount: Deactivated successfully. Nov 12 17:51:03.020094 systemd[1]: cri-containerd-be3c0324aef3c91db4e2f4bc2fe94487aa1eeea9eb13f8bc6eaf5070a2d43286.scope: Deactivated successfully. 
Nov 12 17:51:03.035453 containerd[1440]: time="2024-11-12T17:51:03.035395581Z" level=info msg="shim disconnected" id=3294a9d2e19b60e711cb29dadb74dd7bf69015b264ebe7975901f16739e611b9 namespace=k8s.io Nov 12 17:51:03.035453 containerd[1440]: time="2024-11-12T17:51:03.035447978Z" level=warning msg="cleaning up after shim disconnected" id=3294a9d2e19b60e711cb29dadb74dd7bf69015b264ebe7975901f16739e611b9 namespace=k8s.io Nov 12 17:51:03.035453 containerd[1440]: time="2024-11-12T17:51:03.035456978Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 12 17:51:03.045681 containerd[1440]: time="2024-11-12T17:51:03.045492350Z" level=info msg="shim disconnected" id=be3c0324aef3c91db4e2f4bc2fe94487aa1eeea9eb13f8bc6eaf5070a2d43286 namespace=k8s.io Nov 12 17:51:03.045681 containerd[1440]: time="2024-11-12T17:51:03.045555507Z" level=warning msg="cleaning up after shim disconnected" id=be3c0324aef3c91db4e2f4bc2fe94487aa1eeea9eb13f8bc6eaf5070a2d43286 namespace=k8s.io Nov 12 17:51:03.045681 containerd[1440]: time="2024-11-12T17:51:03.045563506Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 12 17:51:03.057613 containerd[1440]: time="2024-11-12T17:51:03.057566044Z" level=info msg="TearDown network for sandbox \"3294a9d2e19b60e711cb29dadb74dd7bf69015b264ebe7975901f16739e611b9\" successfully" Nov 12 17:51:03.057613 containerd[1440]: time="2024-11-12T17:51:03.057602122Z" level=info msg="StopPodSandbox for \"3294a9d2e19b60e711cb29dadb74dd7bf69015b264ebe7975901f16739e611b9\" returns successfully" Nov 12 17:51:03.063330 containerd[1440]: time="2024-11-12T17:51:03.063298029Z" level=info msg="TearDown network for sandbox \"be3c0324aef3c91db4e2f4bc2fe94487aa1eeea9eb13f8bc6eaf5070a2d43286\" successfully" Nov 12 17:51:03.063817 containerd[1440]: time="2024-11-12T17:51:03.063469339Z" level=info msg="StopPodSandbox for \"be3c0324aef3c91db4e2f4bc2fe94487aa1eeea9eb13f8bc6eaf5070a2d43286\" returns successfully" Nov 12 17:51:03.222696 kubelet[2561]: I1112 17:51:03.222586 2561 scope.go:117] "RemoveContainer" containerID="df70229d0899bc709ddce92f3f569fe3a26ad94e328de4e3257d1987821d6489" Nov 12 17:51:03.224001 containerd[1440]: time="2024-11-12T17:51:03.223969828Z" level=info msg="RemoveContainer for \"df70229d0899bc709ddce92f3f569fe3a26ad94e328de4e3257d1987821d6489\"" Nov 12 17:51:03.232377 kubelet[2561]: I1112 17:51:03.232345 2561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/0e29fe01-8217-43d4-851e-89ed47be55b4-hubble-tls\") pod \"0e29fe01-8217-43d4-851e-89ed47be55b4\" (UID: \"0e29fe01-8217-43d4-851e-89ed47be55b4\") " Nov 12 17:51:03.232443 kubelet[2561]: I1112 17:51:03.232389 2561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/0e29fe01-8217-43d4-851e-89ed47be55b4-clustermesh-secrets\") pod \"0e29fe01-8217-43d4-851e-89ed47be55b4\" (UID: \"0e29fe01-8217-43d4-851e-89ed47be55b4\") " Nov 12 17:51:03.232443 kubelet[2561]: I1112 17:51:03.232414 2561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/0e29fe01-8217-43d4-851e-89ed47be55b4-host-proc-sys-kernel\") pod \"0e29fe01-8217-43d4-851e-89ed47be55b4\" (UID: \"0e29fe01-8217-43d4-851e-89ed47be55b4\") " Nov 12 17:51:03.237087 kubelet[2561]: I1112 17:51:03.237061 2561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: 
\"kubernetes.io/host-path/0e29fe01-8217-43d4-851e-89ed47be55b4-etc-cni-netd\") pod \"0e29fe01-8217-43d4-851e-89ed47be55b4\" (UID: \"0e29fe01-8217-43d4-851e-89ed47be55b4\") " Nov 12 17:51:03.237165 kubelet[2561]: I1112 17:51:03.237102 2561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/0e29fe01-8217-43d4-851e-89ed47be55b4-cni-path\") pod \"0e29fe01-8217-43d4-851e-89ed47be55b4\" (UID: \"0e29fe01-8217-43d4-851e-89ed47be55b4\") " Nov 12 17:51:03.237165 kubelet[2561]: I1112 17:51:03.237124 2561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0e29fe01-8217-43d4-851e-89ed47be55b4-lib-modules\") pod \"0e29fe01-8217-43d4-851e-89ed47be55b4\" (UID: \"0e29fe01-8217-43d4-851e-89ed47be55b4\") " Nov 12 17:51:03.237165 kubelet[2561]: I1112 17:51:03.237141 2561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/0e29fe01-8217-43d4-851e-89ed47be55b4-host-proc-sys-net\") pod \"0e29fe01-8217-43d4-851e-89ed47be55b4\" (UID: \"0e29fe01-8217-43d4-851e-89ed47be55b4\") " Nov 12 17:51:03.237165 kubelet[2561]: I1112 17:51:03.237158 2561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/0e29fe01-8217-43d4-851e-89ed47be55b4-cilium-run\") pod \"0e29fe01-8217-43d4-851e-89ed47be55b4\" (UID: \"0e29fe01-8217-43d4-851e-89ed47be55b4\") " Nov 12 17:51:03.237257 kubelet[2561]: I1112 17:51:03.237179 2561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/fba3be93-ad3e-488a-aa06-a948a70906e4-cilium-config-path\") pod \"fba3be93-ad3e-488a-aa06-a948a70906e4\" (UID: \"fba3be93-ad3e-488a-aa06-a948a70906e4\") " Nov 12 17:51:03.237257 kubelet[2561]: I1112 17:51:03.237200 2561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0e29fe01-8217-43d4-851e-89ed47be55b4-cilium-config-path\") pod \"0e29fe01-8217-43d4-851e-89ed47be55b4\" (UID: \"0e29fe01-8217-43d4-851e-89ed47be55b4\") " Nov 12 17:51:03.237257 kubelet[2561]: I1112 17:51:03.237223 2561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gxjlk\" (UniqueName: \"kubernetes.io/projected/0e29fe01-8217-43d4-851e-89ed47be55b4-kube-api-access-gxjlk\") pod \"0e29fe01-8217-43d4-851e-89ed47be55b4\" (UID: \"0e29fe01-8217-43d4-851e-89ed47be55b4\") " Nov 12 17:51:03.237257 kubelet[2561]: I1112 17:51:03.237242 2561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/0e29fe01-8217-43d4-851e-89ed47be55b4-cilium-cgroup\") pod \"0e29fe01-8217-43d4-851e-89ed47be55b4\" (UID: \"0e29fe01-8217-43d4-851e-89ed47be55b4\") " Nov 12 17:51:03.237342 kubelet[2561]: I1112 17:51:03.237260 2561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0e29fe01-8217-43d4-851e-89ed47be55b4-xtables-lock\") pod \"0e29fe01-8217-43d4-851e-89ed47be55b4\" (UID: \"0e29fe01-8217-43d4-851e-89ed47be55b4\") " Nov 12 17:51:03.237342 kubelet[2561]: I1112 17:51:03.237277 2561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: 
\"kubernetes.io/host-path/0e29fe01-8217-43d4-851e-89ed47be55b4-bpf-maps\") pod \"0e29fe01-8217-43d4-851e-89ed47be55b4\" (UID: \"0e29fe01-8217-43d4-851e-89ed47be55b4\") " Nov 12 17:51:03.237342 kubelet[2561]: I1112 17:51:03.237294 2561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/0e29fe01-8217-43d4-851e-89ed47be55b4-hostproc\") pod \"0e29fe01-8217-43d4-851e-89ed47be55b4\" (UID: \"0e29fe01-8217-43d4-851e-89ed47be55b4\") " Nov 12 17:51:03.237342 kubelet[2561]: I1112 17:51:03.237318 2561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-74wcx\" (UniqueName: \"kubernetes.io/projected/fba3be93-ad3e-488a-aa06-a948a70906e4-kube-api-access-74wcx\") pod \"fba3be93-ad3e-488a-aa06-a948a70906e4\" (UID: \"fba3be93-ad3e-488a-aa06-a948a70906e4\") " Nov 12 17:51:03.241343 containerd[1440]: time="2024-11-12T17:51:03.241172781Z" level=info msg="RemoveContainer for \"df70229d0899bc709ddce92f3f569fe3a26ad94e328de4e3257d1987821d6489\" returns successfully" Nov 12 17:51:03.241537 kubelet[2561]: I1112 17:51:03.241502 2561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0e29fe01-8217-43d4-851e-89ed47be55b4-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "0e29fe01-8217-43d4-851e-89ed47be55b4" (UID: "0e29fe01-8217-43d4-851e-89ed47be55b4"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 12 17:51:03.241685 kubelet[2561]: I1112 17:51:03.241671 2561 scope.go:117] "RemoveContainer" containerID="0468fc920da5fa6bd3908a6a9c4284bae9ff0d20d747b09fbb340c8ed48e9fd3" Nov 12 17:51:03.249945 kubelet[2561]: I1112 17:51:03.243725 2561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fba3be93-ad3e-488a-aa06-a948a70906e4-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "fba3be93-ad3e-488a-aa06-a948a70906e4" (UID: "fba3be93-ad3e-488a-aa06-a948a70906e4"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 12 17:51:03.249945 kubelet[2561]: I1112 17:51:03.243832 2561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0e29fe01-8217-43d4-851e-89ed47be55b4-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "0e29fe01-8217-43d4-851e-89ed47be55b4" (UID: "0e29fe01-8217-43d4-851e-89ed47be55b4"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 12 17:51:03.249945 kubelet[2561]: I1112 17:51:03.243860 2561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0e29fe01-8217-43d4-851e-89ed47be55b4-cni-path" (OuterVolumeSpecName: "cni-path") pod "0e29fe01-8217-43d4-851e-89ed47be55b4" (UID: "0e29fe01-8217-43d4-851e-89ed47be55b4"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 12 17:51:03.249945 kubelet[2561]: I1112 17:51:03.243877 2561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0e29fe01-8217-43d4-851e-89ed47be55b4-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "0e29fe01-8217-43d4-851e-89ed47be55b4" (UID: "0e29fe01-8217-43d4-851e-89ed47be55b4"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 12 17:51:03.249945 kubelet[2561]: I1112 17:51:03.243892 2561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0e29fe01-8217-43d4-851e-89ed47be55b4-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "0e29fe01-8217-43d4-851e-89ed47be55b4" (UID: "0e29fe01-8217-43d4-851e-89ed47be55b4"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 12 17:51:03.251000 kubelet[2561]: I1112 17:51:03.243910 2561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0e29fe01-8217-43d4-851e-89ed47be55b4-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "0e29fe01-8217-43d4-851e-89ed47be55b4" (UID: "0e29fe01-8217-43d4-851e-89ed47be55b4"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 12 17:51:03.251000 kubelet[2561]: I1112 17:51:03.244694 2561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0e29fe01-8217-43d4-851e-89ed47be55b4-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "0e29fe01-8217-43d4-851e-89ed47be55b4" (UID: "0e29fe01-8217-43d4-851e-89ed47be55b4"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 12 17:51:03.251995 kubelet[2561]: I1112 17:51:03.251262 2561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0e29fe01-8217-43d4-851e-89ed47be55b4-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "0e29fe01-8217-43d4-851e-89ed47be55b4" (UID: "0e29fe01-8217-43d4-851e-89ed47be55b4"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 12 17:51:03.251995 kubelet[2561]: I1112 17:51:03.251297 2561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0e29fe01-8217-43d4-851e-89ed47be55b4-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "0e29fe01-8217-43d4-851e-89ed47be55b4" (UID: "0e29fe01-8217-43d4-851e-89ed47be55b4"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 12 17:51:03.251995 kubelet[2561]: I1112 17:51:03.251322 2561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0e29fe01-8217-43d4-851e-89ed47be55b4-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "0e29fe01-8217-43d4-851e-89ed47be55b4" (UID: "0e29fe01-8217-43d4-851e-89ed47be55b4"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 12 17:51:03.251995 kubelet[2561]: I1112 17:51:03.251830 2561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0e29fe01-8217-43d4-851e-89ed47be55b4-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "0e29fe01-8217-43d4-851e-89ed47be55b4" (UID: "0e29fe01-8217-43d4-851e-89ed47be55b4"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 12 17:51:03.252146 containerd[1440]: time="2024-11-12T17:51:03.252031426Z" level=info msg="RemoveContainer for \"0468fc920da5fa6bd3908a6a9c4284bae9ff0d20d747b09fbb340c8ed48e9fd3\"" Nov 12 17:51:03.252234 kubelet[2561]: I1112 17:51:03.252209 2561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0e29fe01-8217-43d4-851e-89ed47be55b4-hostproc" (OuterVolumeSpecName: "hostproc") pod "0e29fe01-8217-43d4-851e-89ed47be55b4" (UID: "0e29fe01-8217-43d4-851e-89ed47be55b4"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 12 17:51:03.252437 kubelet[2561]: I1112 17:51:03.252401 2561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0e29fe01-8217-43d4-851e-89ed47be55b4-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "0e29fe01-8217-43d4-851e-89ed47be55b4" (UID: "0e29fe01-8217-43d4-851e-89ed47be55b4"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 12 17:51:03.253148 kubelet[2561]: I1112 17:51:03.253094 2561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fba3be93-ad3e-488a-aa06-a948a70906e4-kube-api-access-74wcx" (OuterVolumeSpecName: "kube-api-access-74wcx") pod "fba3be93-ad3e-488a-aa06-a948a70906e4" (UID: "fba3be93-ad3e-488a-aa06-a948a70906e4"). InnerVolumeSpecName "kube-api-access-74wcx". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 12 17:51:03.253315 kubelet[2561]: I1112 17:51:03.253284 2561 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0e29fe01-8217-43d4-851e-89ed47be55b4-kube-api-access-gxjlk" (OuterVolumeSpecName: "kube-api-access-gxjlk") pod "0e29fe01-8217-43d4-851e-89ed47be55b4" (UID: "0e29fe01-8217-43d4-851e-89ed47be55b4"). InnerVolumeSpecName "kube-api-access-gxjlk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 12 17:51:03.254488 containerd[1440]: time="2024-11-12T17:51:03.254463884Z" level=info msg="RemoveContainer for \"0468fc920da5fa6bd3908a6a9c4284bae9ff0d20d747b09fbb340c8ed48e9fd3\" returns successfully" Nov 12 17:51:03.254655 kubelet[2561]: I1112 17:51:03.254631 2561 scope.go:117] "RemoveContainer" containerID="7b2f6500355c8175b113560fdb94b55fda75e7356ad116308cd97c07cc4b6797" Nov 12 17:51:03.255650 containerd[1440]: time="2024-11-12T17:51:03.255626496Z" level=info msg="RemoveContainer for \"7b2f6500355c8175b113560fdb94b55fda75e7356ad116308cd97c07cc4b6797\"" Nov 12 17:51:03.257895 containerd[1440]: time="2024-11-12T17:51:03.257871084Z" level=info msg="RemoveContainer for \"7b2f6500355c8175b113560fdb94b55fda75e7356ad116308cd97c07cc4b6797\" returns successfully" Nov 12 17:51:03.258141 kubelet[2561]: I1112 17:51:03.258120 2561 scope.go:117] "RemoveContainer" containerID="cdfdcf97fee8542f143b6c4066930770e0c13027e43de6548affb5977879a13a" Nov 12 17:51:03.259160 containerd[1440]: time="2024-11-12T17:51:03.259136210Z" level=info msg="RemoveContainer for \"cdfdcf97fee8542f143b6c4066930770e0c13027e43de6548affb5977879a13a\"" Nov 12 17:51:03.261230 containerd[1440]: time="2024-11-12T17:51:03.261203889Z" level=info msg="RemoveContainer for \"cdfdcf97fee8542f143b6c4066930770e0c13027e43de6548affb5977879a13a\" returns successfully" Nov 12 17:51:03.261471 kubelet[2561]: I1112 17:51:03.261450 2561 scope.go:117] "RemoveContainer" containerID="1f96b18d0116120348cf4b0f088cce8186c67d306b5ade5f88df57f3c11a8335" Nov 12 17:51:03.262439 containerd[1440]: time="2024-11-12T17:51:03.262414659Z" level=info msg="RemoveContainer for \"1f96b18d0116120348cf4b0f088cce8186c67d306b5ade5f88df57f3c11a8335\"" Nov 12 17:51:03.264369 containerd[1440]: time="2024-11-12T17:51:03.264341906Z" level=info msg="RemoveContainer for \"1f96b18d0116120348cf4b0f088cce8186c67d306b5ade5f88df57f3c11a8335\" returns successfully" Nov 12 17:51:03.264811 kubelet[2561]: I1112 17:51:03.264496 2561 scope.go:117] "RemoveContainer" containerID="df70229d0899bc709ddce92f3f569fe3a26ad94e328de4e3257d1987821d6489" Nov 12 17:51:03.264888 containerd[1440]: time="2024-11-12T17:51:03.264692085Z" level=error msg="ContainerStatus for \"df70229d0899bc709ddce92f3f569fe3a26ad94e328de4e3257d1987821d6489\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"df70229d0899bc709ddce92f3f569fe3a26ad94e328de4e3257d1987821d6489\": not found" Nov 12 17:51:03.274703 kubelet[2561]: E1112 17:51:03.274655 2561 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"df70229d0899bc709ddce92f3f569fe3a26ad94e328de4e3257d1987821d6489\": not found" containerID="df70229d0899bc709ddce92f3f569fe3a26ad94e328de4e3257d1987821d6489" Nov 12 17:51:03.277810 kubelet[2561]: I1112 17:51:03.277758 2561 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"df70229d0899bc709ddce92f3f569fe3a26ad94e328de4e3257d1987821d6489"} err="failed to get container status \"df70229d0899bc709ddce92f3f569fe3a26ad94e328de4e3257d1987821d6489\": rpc error: code = NotFound desc = an error occurred when try to find container \"df70229d0899bc709ddce92f3f569fe3a26ad94e328de4e3257d1987821d6489\": not found" Nov 12 17:51:03.277810 kubelet[2561]: I1112 17:51:03.277807 2561 scope.go:117] "RemoveContainer" containerID="0468fc920da5fa6bd3908a6a9c4284bae9ff0d20d747b09fbb340c8ed48e9fd3" 
Nov 12 17:51:03.278070 containerd[1440]: time="2024-11-12T17:51:03.278031145Z" level=error msg="ContainerStatus for \"0468fc920da5fa6bd3908a6a9c4284bae9ff0d20d747b09fbb340c8ed48e9fd3\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"0468fc920da5fa6bd3908a6a9c4284bae9ff0d20d747b09fbb340c8ed48e9fd3\": not found" Nov 12 17:51:03.278498 kubelet[2561]: E1112 17:51:03.278209 2561 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"0468fc920da5fa6bd3908a6a9c4284bae9ff0d20d747b09fbb340c8ed48e9fd3\": not found" containerID="0468fc920da5fa6bd3908a6a9c4284bae9ff0d20d747b09fbb340c8ed48e9fd3" Nov 12 17:51:03.278498 kubelet[2561]: I1112 17:51:03.278241 2561 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"0468fc920da5fa6bd3908a6a9c4284bae9ff0d20d747b09fbb340c8ed48e9fd3"} err="failed to get container status \"0468fc920da5fa6bd3908a6a9c4284bae9ff0d20d747b09fbb340c8ed48e9fd3\": rpc error: code = NotFound desc = an error occurred when try to find container \"0468fc920da5fa6bd3908a6a9c4284bae9ff0d20d747b09fbb340c8ed48e9fd3\": not found" Nov 12 17:51:03.278498 kubelet[2561]: I1112 17:51:03.278252 2561 scope.go:117] "RemoveContainer" containerID="7b2f6500355c8175b113560fdb94b55fda75e7356ad116308cd97c07cc4b6797" Nov 12 17:51:03.278623 containerd[1440]: time="2024-11-12T17:51:03.278430402Z" level=error msg="ContainerStatus for \"7b2f6500355c8175b113560fdb94b55fda75e7356ad116308cd97c07cc4b6797\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"7b2f6500355c8175b113560fdb94b55fda75e7356ad116308cd97c07cc4b6797\": not found" Nov 12 17:51:03.278651 kubelet[2561]: E1112 17:51:03.278563 2561 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"7b2f6500355c8175b113560fdb94b55fda75e7356ad116308cd97c07cc4b6797\": not found" containerID="7b2f6500355c8175b113560fdb94b55fda75e7356ad116308cd97c07cc4b6797" Nov 12 17:51:03.278651 kubelet[2561]: I1112 17:51:03.278588 2561 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"7b2f6500355c8175b113560fdb94b55fda75e7356ad116308cd97c07cc4b6797"} err="failed to get container status \"7b2f6500355c8175b113560fdb94b55fda75e7356ad116308cd97c07cc4b6797\": rpc error: code = NotFound desc = an error occurred when try to find container \"7b2f6500355c8175b113560fdb94b55fda75e7356ad116308cd97c07cc4b6797\": not found" Nov 12 17:51:03.278651 kubelet[2561]: I1112 17:51:03.278597 2561 scope.go:117] "RemoveContainer" containerID="cdfdcf97fee8542f143b6c4066930770e0c13027e43de6548affb5977879a13a" Nov 12 17:51:03.278856 containerd[1440]: time="2024-11-12T17:51:03.278828538Z" level=error msg="ContainerStatus for \"cdfdcf97fee8542f143b6c4066930770e0c13027e43de6548affb5977879a13a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"cdfdcf97fee8542f143b6c4066930770e0c13027e43de6548affb5977879a13a\": not found" Nov 12 17:51:03.279012 kubelet[2561]: E1112 17:51:03.278994 2561 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"cdfdcf97fee8542f143b6c4066930770e0c13027e43de6548affb5977879a13a\": not found" 
containerID="cdfdcf97fee8542f143b6c4066930770e0c13027e43de6548affb5977879a13a" Nov 12 17:51:03.279048 kubelet[2561]: I1112 17:51:03.279035 2561 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"cdfdcf97fee8542f143b6c4066930770e0c13027e43de6548affb5977879a13a"} err="failed to get container status \"cdfdcf97fee8542f143b6c4066930770e0c13027e43de6548affb5977879a13a\": rpc error: code = NotFound desc = an error occurred when try to find container \"cdfdcf97fee8542f143b6c4066930770e0c13027e43de6548affb5977879a13a\": not found" Nov 12 17:51:03.279048 kubelet[2561]: I1112 17:51:03.279047 2561 scope.go:117] "RemoveContainer" containerID="1f96b18d0116120348cf4b0f088cce8186c67d306b5ade5f88df57f3c11a8335" Nov 12 17:51:03.279244 containerd[1440]: time="2024-11-12T17:51:03.279215516Z" level=error msg="ContainerStatus for \"1f96b18d0116120348cf4b0f088cce8186c67d306b5ade5f88df57f3c11a8335\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"1f96b18d0116120348cf4b0f088cce8186c67d306b5ade5f88df57f3c11a8335\": not found" Nov 12 17:51:03.279469 kubelet[2561]: E1112 17:51:03.279450 2561 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"1f96b18d0116120348cf4b0f088cce8186c67d306b5ade5f88df57f3c11a8335\": not found" containerID="1f96b18d0116120348cf4b0f088cce8186c67d306b5ade5f88df57f3c11a8335" Nov 12 17:51:03.279519 kubelet[2561]: I1112 17:51:03.279480 2561 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"1f96b18d0116120348cf4b0f088cce8186c67d306b5ade5f88df57f3c11a8335"} err="failed to get container status \"1f96b18d0116120348cf4b0f088cce8186c67d306b5ade5f88df57f3c11a8335\": rpc error: code = NotFound desc = an error occurred when try to find container \"1f96b18d0116120348cf4b0f088cce8186c67d306b5ade5f88df57f3c11a8335\": not found" Nov 12 17:51:03.279519 kubelet[2561]: I1112 17:51:03.279491 2561 scope.go:117] "RemoveContainer" containerID="39ee545b632baf21322975139796766a792b8c0e83641ec7d06cb9876285b44b" Nov 12 17:51:03.280462 containerd[1440]: time="2024-11-12T17:51:03.280352049Z" level=info msg="RemoveContainer for \"39ee545b632baf21322975139796766a792b8c0e83641ec7d06cb9876285b44b\"" Nov 12 17:51:03.282511 containerd[1440]: time="2024-11-12T17:51:03.282482404Z" level=info msg="RemoveContainer for \"39ee545b632baf21322975139796766a792b8c0e83641ec7d06cb9876285b44b\" returns successfully" Nov 12 17:51:03.282711 kubelet[2561]: I1112 17:51:03.282690 2561 scope.go:117] "RemoveContainer" containerID="39ee545b632baf21322975139796766a792b8c0e83641ec7d06cb9876285b44b" Nov 12 17:51:03.282938 containerd[1440]: time="2024-11-12T17:51:03.282905820Z" level=error msg="ContainerStatus for \"39ee545b632baf21322975139796766a792b8c0e83641ec7d06cb9876285b44b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"39ee545b632baf21322975139796766a792b8c0e83641ec7d06cb9876285b44b\": not found" Nov 12 17:51:03.283118 kubelet[2561]: E1112 17:51:03.283086 2561 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"39ee545b632baf21322975139796766a792b8c0e83641ec7d06cb9876285b44b\": not found" containerID="39ee545b632baf21322975139796766a792b8c0e83641ec7d06cb9876285b44b" Nov 12 17:51:03.283164 kubelet[2561]: I1112 17:51:03.283132 2561 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"39ee545b632baf21322975139796766a792b8c0e83641ec7d06cb9876285b44b"} err="failed to get container status \"39ee545b632baf21322975139796766a792b8c0e83641ec7d06cb9876285b44b\": rpc error: code = NotFound desc = an error occurred when try to find container \"39ee545b632baf21322975139796766a792b8c0e83641ec7d06cb9876285b44b\": not found" Nov 12 17:51:03.338426 kubelet[2561]: I1112 17:51:03.338381 2561 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/0e29fe01-8217-43d4-851e-89ed47be55b4-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Nov 12 17:51:03.338426 kubelet[2561]: I1112 17:51:03.338417 2561 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/0e29fe01-8217-43d4-851e-89ed47be55b4-hubble-tls\") on node \"localhost\" DevicePath \"\"" Nov 12 17:51:03.338426 kubelet[2561]: I1112 17:51:03.338432 2561 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0e29fe01-8217-43d4-851e-89ed47be55b4-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Nov 12 17:51:03.338550 kubelet[2561]: I1112 17:51:03.338443 2561 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/0e29fe01-8217-43d4-851e-89ed47be55b4-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Nov 12 17:51:03.338550 kubelet[2561]: I1112 17:51:03.338453 2561 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/fba3be93-ad3e-488a-aa06-a948a70906e4-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Nov 12 17:51:03.338550 kubelet[2561]: I1112 17:51:03.338488 2561 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/0e29fe01-8217-43d4-851e-89ed47be55b4-cni-path\") on node \"localhost\" DevicePath \"\"" Nov 12 17:51:03.338550 kubelet[2561]: I1112 17:51:03.338500 2561 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0e29fe01-8217-43d4-851e-89ed47be55b4-lib-modules\") on node \"localhost\" DevicePath \"\"" Nov 12 17:51:03.338550 kubelet[2561]: I1112 17:51:03.338510 2561 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/0e29fe01-8217-43d4-851e-89ed47be55b4-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Nov 12 17:51:03.338550 kubelet[2561]: I1112 17:51:03.338519 2561 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/0e29fe01-8217-43d4-851e-89ed47be55b4-cilium-run\") on node \"localhost\" DevicePath \"\"" Nov 12 17:51:03.338550 kubelet[2561]: I1112 17:51:03.338528 2561 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0e29fe01-8217-43d4-851e-89ed47be55b4-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Nov 12 17:51:03.338550 kubelet[2561]: I1112 17:51:03.338538 2561 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/0e29fe01-8217-43d4-851e-89ed47be55b4-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Nov 12 17:51:03.338708 kubelet[2561]: I1112 17:51:03.338547 2561 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" 
(UniqueName: \"kubernetes.io/host-path/0e29fe01-8217-43d4-851e-89ed47be55b4-xtables-lock\") on node \"localhost\" DevicePath \"\"" Nov 12 17:51:03.338708 kubelet[2561]: I1112 17:51:03.338564 2561 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-gxjlk\" (UniqueName: \"kubernetes.io/projected/0e29fe01-8217-43d4-851e-89ed47be55b4-kube-api-access-gxjlk\") on node \"localhost\" DevicePath \"\"" Nov 12 17:51:03.338708 kubelet[2561]: I1112 17:51:03.338574 2561 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/0e29fe01-8217-43d4-851e-89ed47be55b4-bpf-maps\") on node \"localhost\" DevicePath \"\"" Nov 12 17:51:03.338708 kubelet[2561]: I1112 17:51:03.338583 2561 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/0e29fe01-8217-43d4-851e-89ed47be55b4-hostproc\") on node \"localhost\" DevicePath \"\"" Nov 12 17:51:03.338708 kubelet[2561]: I1112 17:51:03.338592 2561 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-74wcx\" (UniqueName: \"kubernetes.io/projected/fba3be93-ad3e-488a-aa06-a948a70906e4-kube-api-access-74wcx\") on node \"localhost\" DevicePath \"\"" Nov 12 17:51:03.523650 systemd[1]: Removed slice kubepods-burstable-pod0e29fe01_8217_43d4_851e_89ed47be55b4.slice - libcontainer container kubepods-burstable-pod0e29fe01_8217_43d4_851e_89ed47be55b4.slice. Nov 12 17:51:03.523747 systemd[1]: kubepods-burstable-pod0e29fe01_8217_43d4_851e_89ed47be55b4.slice: Consumed 6.582s CPU time. Nov 12 17:51:03.525606 systemd[1]: Removed slice kubepods-besteffort-podfba3be93_ad3e_488a_aa06_a948a70906e4.slice - libcontainer container kubepods-besteffort-podfba3be93_ad3e_488a_aa06_a948a70906e4.slice. Nov 12 17:51:03.900562 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-be3c0324aef3c91db4e2f4bc2fe94487aa1eeea9eb13f8bc6eaf5070a2d43286-rootfs.mount: Deactivated successfully. Nov 12 17:51:03.900662 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3294a9d2e19b60e711cb29dadb74dd7bf69015b264ebe7975901f16739e611b9-rootfs.mount: Deactivated successfully. Nov 12 17:51:03.900709 systemd[1]: var-lib-kubelet-pods-fba3be93\x2dad3e\x2d488a\x2daa06\x2da948a70906e4-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d74wcx.mount: Deactivated successfully. Nov 12 17:51:03.900764 systemd[1]: var-lib-kubelet-pods-0e29fe01\x2d8217\x2d43d4\x2d851e\x2d89ed47be55b4-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dgxjlk.mount: Deactivated successfully. Nov 12 17:51:03.900837 systemd[1]: var-lib-kubelet-pods-0e29fe01\x2d8217\x2d43d4\x2d851e\x2d89ed47be55b4-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Nov 12 17:51:03.900885 systemd[1]: var-lib-kubelet-pods-0e29fe01\x2d8217\x2d43d4\x2d851e\x2d89ed47be55b4-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Nov 12 17:51:04.854357 sshd[4184]: pam_unix(sshd:session): session closed for user core Nov 12 17:51:04.861342 systemd[1]: sshd@22-10.0.0.80:22-10.0.0.1:53120.service: Deactivated successfully. Nov 12 17:51:04.863127 systemd[1]: session-23.scope: Deactivated successfully. Nov 12 17:51:04.864931 systemd[1]: session-23.scope: Consumed 1.891s CPU time. Nov 12 17:51:04.866094 systemd-logind[1426]: Session 23 logged out. Waiting for processes to exit. Nov 12 17:51:04.872029 systemd[1]: Started sshd@23-10.0.0.80:22-10.0.0.1:45934.service - OpenSSH per-connection server daemon (10.0.0.1:45934). 
Nov 12 17:51:04.874736 systemd-logind[1426]: Removed session 23. Nov 12 17:51:04.900350 sshd[4347]: Accepted publickey for core from 10.0.0.1 port 45934 ssh2: RSA SHA256:0/Njp3Vk1MHv0WcCO9/UA+beq4MlL3BRl9mBP4xwGAg Nov 12 17:51:04.901533 sshd[4347]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 17:51:04.905159 systemd-logind[1426]: New session 24 of user core. Nov 12 17:51:04.917926 systemd[1]: Started session-24.scope - Session 24 of User core. Nov 12 17:51:05.020130 kubelet[2561]: I1112 17:51:05.019846 2561 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="0e29fe01-8217-43d4-851e-89ed47be55b4" path="/var/lib/kubelet/pods/0e29fe01-8217-43d4-851e-89ed47be55b4/volumes" Nov 12 17:51:05.022761 kubelet[2561]: I1112 17:51:05.022508 2561 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="fba3be93-ad3e-488a-aa06-a948a70906e4" path="/var/lib/kubelet/pods/fba3be93-ad3e-488a-aa06-a948a70906e4/volumes" Nov 12 17:51:06.023236 sshd[4347]: pam_unix(sshd:session): session closed for user core Nov 12 17:51:06.035488 systemd[1]: sshd@23-10.0.0.80:22-10.0.0.1:45934.service: Deactivated successfully. Nov 12 17:51:06.040518 systemd[1]: session-24.scope: Deactivated successfully. Nov 12 17:51:06.041003 systemd[1]: session-24.scope: Consumed 1.018s CPU time. Nov 12 17:51:06.044219 systemd-logind[1426]: Session 24 logged out. Waiting for processes to exit. Nov 12 17:51:06.049329 kubelet[2561]: I1112 17:51:06.049291 2561 topology_manager.go:215] "Topology Admit Handler" podUID="55e2e420-79c1-4807-be9c-bd29efd5fc4c" podNamespace="kube-system" podName="cilium-9njkw" Nov 12 17:51:06.052958 kubelet[2561]: E1112 17:51:06.049348 2561 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="0e29fe01-8217-43d4-851e-89ed47be55b4" containerName="mount-bpf-fs" Nov 12 17:51:06.052958 kubelet[2561]: E1112 17:51:06.049360 2561 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="0e29fe01-8217-43d4-851e-89ed47be55b4" containerName="clean-cilium-state" Nov 12 17:51:06.052958 kubelet[2561]: E1112 17:51:06.049373 2561 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="fba3be93-ad3e-488a-aa06-a948a70906e4" containerName="cilium-operator" Nov 12 17:51:06.052958 kubelet[2561]: E1112 17:51:06.049380 2561 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="0e29fe01-8217-43d4-851e-89ed47be55b4" containerName="mount-cgroup" Nov 12 17:51:06.052958 kubelet[2561]: E1112 17:51:06.049386 2561 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="0e29fe01-8217-43d4-851e-89ed47be55b4" containerName="apply-sysctl-overwrites" Nov 12 17:51:06.052958 kubelet[2561]: E1112 17:51:06.049393 2561 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="0e29fe01-8217-43d4-851e-89ed47be55b4" containerName="cilium-agent" Nov 12 17:51:06.052958 kubelet[2561]: I1112 17:51:06.049414 2561 memory_manager.go:354] "RemoveStaleState removing state" podUID="fba3be93-ad3e-488a-aa06-a948a70906e4" containerName="cilium-operator" Nov 12 17:51:06.052958 kubelet[2561]: I1112 17:51:06.049421 2561 memory_manager.go:354] "RemoveStaleState removing state" podUID="0e29fe01-8217-43d4-851e-89ed47be55b4" containerName="cilium-agent" Nov 12 17:51:06.051203 systemd[1]: Started sshd@24-10.0.0.80:22-10.0.0.1:45938.service - OpenSSH per-connection server daemon (10.0.0.1:45938). Nov 12 17:51:06.058033 systemd-logind[1426]: Removed session 24. 
Nov 12 17:51:06.065452 systemd[1]: Created slice kubepods-burstable-pod55e2e420_79c1_4807_be9c_bd29efd5fc4c.slice - libcontainer container kubepods-burstable-pod55e2e420_79c1_4807_be9c_bd29efd5fc4c.slice. Nov 12 17:51:06.103710 sshd[4360]: Accepted publickey for core from 10.0.0.1 port 45938 ssh2: RSA SHA256:0/Njp3Vk1MHv0WcCO9/UA+beq4MlL3BRl9mBP4xwGAg Nov 12 17:51:06.105387 sshd[4360]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 17:51:06.112682 systemd-logind[1426]: New session 25 of user core. Nov 12 17:51:06.116977 systemd[1]: Started session-25.scope - Session 25 of User core. Nov 12 17:51:06.153573 kubelet[2561]: I1112 17:51:06.153538 2561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/55e2e420-79c1-4807-be9c-bd29efd5fc4c-etc-cni-netd\") pod \"cilium-9njkw\" (UID: \"55e2e420-79c1-4807-be9c-bd29efd5fc4c\") " pod="kube-system/cilium-9njkw" Nov 12 17:51:06.153817 kubelet[2561]: I1112 17:51:06.153803 2561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/55e2e420-79c1-4807-be9c-bd29efd5fc4c-cni-path\") pod \"cilium-9njkw\" (UID: \"55e2e420-79c1-4807-be9c-bd29efd5fc4c\") " pod="kube-system/cilium-9njkw" Nov 12 17:51:06.153923 kubelet[2561]: I1112 17:51:06.153913 2561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/55e2e420-79c1-4807-be9c-bd29efd5fc4c-cilium-run\") pod \"cilium-9njkw\" (UID: \"55e2e420-79c1-4807-be9c-bd29efd5fc4c\") " pod="kube-system/cilium-9njkw" Nov 12 17:51:06.154395 kubelet[2561]: I1112 17:51:06.154036 2561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/55e2e420-79c1-4807-be9c-bd29efd5fc4c-bpf-maps\") pod \"cilium-9njkw\" (UID: \"55e2e420-79c1-4807-be9c-bd29efd5fc4c\") " pod="kube-system/cilium-9njkw" Nov 12 17:51:06.154395 kubelet[2561]: I1112 17:51:06.154062 2561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/55e2e420-79c1-4807-be9c-bd29efd5fc4c-cilium-cgroup\") pod \"cilium-9njkw\" (UID: \"55e2e420-79c1-4807-be9c-bd29efd5fc4c\") " pod="kube-system/cilium-9njkw" Nov 12 17:51:06.154395 kubelet[2561]: I1112 17:51:06.154083 2561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/55e2e420-79c1-4807-be9c-bd29efd5fc4c-clustermesh-secrets\") pod \"cilium-9njkw\" (UID: \"55e2e420-79c1-4807-be9c-bd29efd5fc4c\") " pod="kube-system/cilium-9njkw" Nov 12 17:51:06.154395 kubelet[2561]: I1112 17:51:06.154100 2561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/55e2e420-79c1-4807-be9c-bd29efd5fc4c-cilium-ipsec-secrets\") pod \"cilium-9njkw\" (UID: \"55e2e420-79c1-4807-be9c-bd29efd5fc4c\") " pod="kube-system/cilium-9njkw" Nov 12 17:51:06.154395 kubelet[2561]: I1112 17:51:06.154121 2561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/55e2e420-79c1-4807-be9c-bd29efd5fc4c-hostproc\") pod \"cilium-9njkw\" (UID: 
\"55e2e420-79c1-4807-be9c-bd29efd5fc4c\") " pod="kube-system/cilium-9njkw" Nov 12 17:51:06.154395 kubelet[2561]: I1112 17:51:06.154139 2561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/55e2e420-79c1-4807-be9c-bd29efd5fc4c-xtables-lock\") pod \"cilium-9njkw\" (UID: \"55e2e420-79c1-4807-be9c-bd29efd5fc4c\") " pod="kube-system/cilium-9njkw" Nov 12 17:51:06.154614 kubelet[2561]: I1112 17:51:06.154159 2561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/55e2e420-79c1-4807-be9c-bd29efd5fc4c-cilium-config-path\") pod \"cilium-9njkw\" (UID: \"55e2e420-79c1-4807-be9c-bd29efd5fc4c\") " pod="kube-system/cilium-9njkw" Nov 12 17:51:06.154614 kubelet[2561]: I1112 17:51:06.154180 2561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-52gs7\" (UniqueName: \"kubernetes.io/projected/55e2e420-79c1-4807-be9c-bd29efd5fc4c-kube-api-access-52gs7\") pod \"cilium-9njkw\" (UID: \"55e2e420-79c1-4807-be9c-bd29efd5fc4c\") " pod="kube-system/cilium-9njkw" Nov 12 17:51:06.154614 kubelet[2561]: I1112 17:51:06.154200 2561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/55e2e420-79c1-4807-be9c-bd29efd5fc4c-lib-modules\") pod \"cilium-9njkw\" (UID: \"55e2e420-79c1-4807-be9c-bd29efd5fc4c\") " pod="kube-system/cilium-9njkw" Nov 12 17:51:06.154614 kubelet[2561]: I1112 17:51:06.154224 2561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/55e2e420-79c1-4807-be9c-bd29efd5fc4c-host-proc-sys-kernel\") pod \"cilium-9njkw\" (UID: \"55e2e420-79c1-4807-be9c-bd29efd5fc4c\") " pod="kube-system/cilium-9njkw" Nov 12 17:51:06.154614 kubelet[2561]: I1112 17:51:06.154246 2561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/55e2e420-79c1-4807-be9c-bd29efd5fc4c-hubble-tls\") pod \"cilium-9njkw\" (UID: \"55e2e420-79c1-4807-be9c-bd29efd5fc4c\") " pod="kube-system/cilium-9njkw" Nov 12 17:51:06.154754 kubelet[2561]: I1112 17:51:06.154267 2561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/55e2e420-79c1-4807-be9c-bd29efd5fc4c-host-proc-sys-net\") pod \"cilium-9njkw\" (UID: \"55e2e420-79c1-4807-be9c-bd29efd5fc4c\") " pod="kube-system/cilium-9njkw" Nov 12 17:51:06.165409 sshd[4360]: pam_unix(sshd:session): session closed for user core Nov 12 17:51:06.180528 systemd[1]: sshd@24-10.0.0.80:22-10.0.0.1:45938.service: Deactivated successfully. Nov 12 17:51:06.182815 systemd[1]: session-25.scope: Deactivated successfully. Nov 12 17:51:06.184327 systemd-logind[1426]: Session 25 logged out. Waiting for processes to exit. Nov 12 17:51:06.185668 systemd[1]: Started sshd@25-10.0.0.80:22-10.0.0.1:45944.service - OpenSSH per-connection server daemon (10.0.0.1:45944). Nov 12 17:51:06.187544 systemd-logind[1426]: Removed session 25. 
Nov 12 17:51:06.218261 sshd[4369]: Accepted publickey for core from 10.0.0.1 port 45944 ssh2: RSA SHA256:0/Njp3Vk1MHv0WcCO9/UA+beq4MlL3BRl9mBP4xwGAg Nov 12 17:51:06.219616 sshd[4369]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 17:51:06.223199 systemd-logind[1426]: New session 26 of user core. Nov 12 17:51:06.233946 systemd[1]: Started session-26.scope - Session 26 of User core. Nov 12 17:51:06.371021 kubelet[2561]: E1112 17:51:06.370884 2561 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 17:51:06.371624 containerd[1440]: time="2024-11-12T17:51:06.371585993Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-9njkw,Uid:55e2e420-79c1-4807-be9c-bd29efd5fc4c,Namespace:kube-system,Attempt:0,}" Nov 12 17:51:06.388583 containerd[1440]: time="2024-11-12T17:51:06.388277044Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 17:51:06.388583 containerd[1440]: time="2024-11-12T17:51:06.388333841Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 17:51:06.388583 containerd[1440]: time="2024-11-12T17:51:06.388427396Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 17:51:06.389047 containerd[1440]: time="2024-11-12T17:51:06.388997609Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 17:51:06.415962 systemd[1]: Started cri-containerd-6dfc439889d408e6cf81b9355a7ac8c09653c7bddab4d882847b0b2e67d71337.scope - libcontainer container 6dfc439889d408e6cf81b9355a7ac8c09653c7bddab4d882847b0b2e67d71337. Nov 12 17:51:06.438287 containerd[1440]: time="2024-11-12T17:51:06.438159803Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-9njkw,Uid:55e2e420-79c1-4807-be9c-bd29efd5fc4c,Namespace:kube-system,Attempt:0,} returns sandbox id \"6dfc439889d408e6cf81b9355a7ac8c09653c7bddab4d882847b0b2e67d71337\"" Nov 12 17:51:06.438957 kubelet[2561]: E1112 17:51:06.438908 2561 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 17:51:06.441589 containerd[1440]: time="2024-11-12T17:51:06.441556762Z" level=info msg="CreateContainer within sandbox \"6dfc439889d408e6cf81b9355a7ac8c09653c7bddab4d882847b0b2e67d71337\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Nov 12 17:51:06.452250 containerd[1440]: time="2024-11-12T17:51:06.452086944Z" level=info msg="CreateContainer within sandbox \"6dfc439889d408e6cf81b9355a7ac8c09653c7bddab4d882847b0b2e67d71337\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"ac6afaf7cba10a8d9ba140690cd995e3bd21960b3bd38823210816b6d5fae563\"" Nov 12 17:51:06.453188 containerd[1440]: time="2024-11-12T17:51:06.453124575Z" level=info msg="StartContainer for \"ac6afaf7cba10a8d9ba140690cd995e3bd21960b3bd38823210816b6d5fae563\"" Nov 12 17:51:06.478005 systemd[1]: Started cri-containerd-ac6afaf7cba10a8d9ba140690cd995e3bd21960b3bd38823210816b6d5fae563.scope - libcontainer container ac6afaf7cba10a8d9ba140690cd995e3bd21960b3bd38823210816b6d5fae563. 
Nov 12 17:51:06.501080 containerd[1440]: time="2024-11-12T17:51:06.501020909Z" level=info msg="StartContainer for \"ac6afaf7cba10a8d9ba140690cd995e3bd21960b3bd38823210816b6d5fae563\" returns successfully" Nov 12 17:51:06.522716 systemd[1]: cri-containerd-ac6afaf7cba10a8d9ba140690cd995e3bd21960b3bd38823210816b6d5fae563.scope: Deactivated successfully. Nov 12 17:51:06.548253 containerd[1440]: time="2024-11-12T17:51:06.548198236Z" level=info msg="shim disconnected" id=ac6afaf7cba10a8d9ba140690cd995e3bd21960b3bd38823210816b6d5fae563 namespace=k8s.io Nov 12 17:51:06.548253 containerd[1440]: time="2024-11-12T17:51:06.548251074Z" level=warning msg="cleaning up after shim disconnected" id=ac6afaf7cba10a8d9ba140690cd995e3bd21960b3bd38823210816b6d5fae563 namespace=k8s.io Nov 12 17:51:06.548253 containerd[1440]: time="2024-11-12T17:51:06.548260473Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 12 17:51:07.060192 kubelet[2561]: E1112 17:51:07.060162 2561 kubelet.go:2892] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Nov 12 17:51:07.228076 kubelet[2561]: E1112 17:51:07.228019 2561 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 17:51:07.230800 containerd[1440]: time="2024-11-12T17:51:07.230688823Z" level=info msg="CreateContainer within sandbox \"6dfc439889d408e6cf81b9355a7ac8c09653c7bddab4d882847b0b2e67d71337\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Nov 12 17:51:07.251233 containerd[1440]: time="2024-11-12T17:51:07.251182525Z" level=info msg="CreateContainer within sandbox \"6dfc439889d408e6cf81b9355a7ac8c09653c7bddab4d882847b0b2e67d71337\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"106988c990daaef6ecb65f631377926e8f9ea898501e1fe13490991c30427722\"" Nov 12 17:51:07.252220 containerd[1440]: time="2024-11-12T17:51:07.251623265Z" level=info msg="StartContainer for \"106988c990daaef6ecb65f631377926e8f9ea898501e1fe13490991c30427722\"" Nov 12 17:51:07.278179 systemd[1]: run-containerd-runc-k8s.io-106988c990daaef6ecb65f631377926e8f9ea898501e1fe13490991c30427722-runc.kEv5jn.mount: Deactivated successfully. Nov 12 17:51:07.296957 systemd[1]: Started cri-containerd-106988c990daaef6ecb65f631377926e8f9ea898501e1fe13490991c30427722.scope - libcontainer container 106988c990daaef6ecb65f631377926e8f9ea898501e1fe13490991c30427722. Nov 12 17:51:07.320404 containerd[1440]: time="2024-11-12T17:51:07.320309855Z" level=info msg="StartContainer for \"106988c990daaef6ecb65f631377926e8f9ea898501e1fe13490991c30427722\" returns successfully" Nov 12 17:51:07.327191 systemd[1]: cri-containerd-106988c990daaef6ecb65f631377926e8f9ea898501e1fe13490991c30427722.scope: Deactivated successfully. 
Nov 12 17:51:07.347381 containerd[1440]: time="2024-11-12T17:51:07.347323112Z" level=info msg="shim disconnected" id=106988c990daaef6ecb65f631377926e8f9ea898501e1fe13490991c30427722 namespace=k8s.io Nov 12 17:51:07.347381 containerd[1440]: time="2024-11-12T17:51:07.347370469Z" level=warning msg="cleaning up after shim disconnected" id=106988c990daaef6ecb65f631377926e8f9ea898501e1fe13490991c30427722 namespace=k8s.io Nov 12 17:51:07.347381 containerd[1440]: time="2024-11-12T17:51:07.347379909Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 12 17:51:08.232073 kubelet[2561]: E1112 17:51:08.231490 2561 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 17:51:08.234644 containerd[1440]: time="2024-11-12T17:51:08.234609501Z" level=info msg="CreateContainer within sandbox \"6dfc439889d408e6cf81b9355a7ac8c09653c7bddab4d882847b0b2e67d71337\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Nov 12 17:51:08.244930 containerd[1440]: time="2024-11-12T17:51:08.244879286Z" level=info msg="CreateContainer within sandbox \"6dfc439889d408e6cf81b9355a7ac8c09653c7bddab4d882847b0b2e67d71337\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"8ca21c125dcd4ce29789728bdf1c89ad4a2ad9144a5da3060c63801d3ba0653a\"" Nov 12 17:51:08.245445 containerd[1440]: time="2024-11-12T17:51:08.245411944Z" level=info msg="StartContainer for \"8ca21c125dcd4ce29789728bdf1c89ad4a2ad9144a5da3060c63801d3ba0653a\"" Nov 12 17:51:08.260498 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-106988c990daaef6ecb65f631377926e8f9ea898501e1fe13490991c30427722-rootfs.mount: Deactivated successfully. Nov 12 17:51:08.292945 systemd[1]: Started cri-containerd-8ca21c125dcd4ce29789728bdf1c89ad4a2ad9144a5da3060c63801d3ba0653a.scope - libcontainer container 8ca21c125dcd4ce29789728bdf1c89ad4a2ad9144a5da3060c63801d3ba0653a. Nov 12 17:51:08.315594 systemd[1]: cri-containerd-8ca21c125dcd4ce29789728bdf1c89ad4a2ad9144a5da3060c63801d3ba0653a.scope: Deactivated successfully. Nov 12 17:51:08.315747 containerd[1440]: time="2024-11-12T17:51:08.315713622Z" level=info msg="StartContainer for \"8ca21c125dcd4ce29789728bdf1c89ad4a2ad9144a5da3060c63801d3ba0653a\" returns successfully" Nov 12 17:51:08.334171 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8ca21c125dcd4ce29789728bdf1c89ad4a2ad9144a5da3060c63801d3ba0653a-rootfs.mount: Deactivated successfully. 
Nov 12 17:51:08.338617 containerd[1440]: time="2024-11-12T17:51:08.338566898Z" level=info msg="shim disconnected" id=8ca21c125dcd4ce29789728bdf1c89ad4a2ad9144a5da3060c63801d3ba0653a namespace=k8s.io Nov 12 17:51:08.338714 containerd[1440]: time="2024-11-12T17:51:08.338620656Z" level=warning msg="cleaning up after shim disconnected" id=8ca21c125dcd4ce29789728bdf1c89ad4a2ad9144a5da3060c63801d3ba0653a namespace=k8s.io Nov 12 17:51:08.338714 containerd[1440]: time="2024-11-12T17:51:08.338630015Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 12 17:51:08.755182 kubelet[2561]: I1112 17:51:08.755149 2561 setters.go:568] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-11-12T17:51:08Z","lastTransitionTime":"2024-11-12T17:51:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Nov 12 17:51:09.235837 kubelet[2561]: E1112 17:51:09.235101 2561 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 17:51:09.238927 containerd[1440]: time="2024-11-12T17:51:09.238558850Z" level=info msg="CreateContainer within sandbox \"6dfc439889d408e6cf81b9355a7ac8c09653c7bddab4d882847b0b2e67d71337\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Nov 12 17:51:09.287498 containerd[1440]: time="2024-11-12T17:51:09.287375357Z" level=info msg="CreateContainer within sandbox \"6dfc439889d408e6cf81b9355a7ac8c09653c7bddab4d882847b0b2e67d71337\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"a997940e8ec2c2a0181eb422f80340ec86f6f04d0c7ab6ac44229a2a19b5ef95\"" Nov 12 17:51:09.288296 containerd[1440]: time="2024-11-12T17:51:09.288003814Z" level=info msg="StartContainer for \"a997940e8ec2c2a0181eb422f80340ec86f6f04d0c7ab6ac44229a2a19b5ef95\"" Nov 12 17:51:09.320970 systemd[1]: Started cri-containerd-a997940e8ec2c2a0181eb422f80340ec86f6f04d0c7ab6ac44229a2a19b5ef95.scope - libcontainer container a997940e8ec2c2a0181eb422f80340ec86f6f04d0c7ab6ac44229a2a19b5ef95. Nov 12 17:51:09.340403 systemd[1]: cri-containerd-a997940e8ec2c2a0181eb422f80340ec86f6f04d0c7ab6ac44229a2a19b5ef95.scope: Deactivated successfully. Nov 12 17:51:09.342070 containerd[1440]: time="2024-11-12T17:51:09.341759777Z" level=info msg="StartContainer for \"a997940e8ec2c2a0181eb422f80340ec86f6f04d0c7ab6ac44229a2a19b5ef95\" returns successfully" Nov 12 17:51:09.363139 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a997940e8ec2c2a0181eb422f80340ec86f6f04d0c7ab6ac44229a2a19b5ef95-rootfs.mount: Deactivated successfully. 
Nov 12 17:51:09.377316 containerd[1440]: time="2024-11-12T17:51:09.377247938Z" level=info msg="shim disconnected" id=a997940e8ec2c2a0181eb422f80340ec86f6f04d0c7ab6ac44229a2a19b5ef95 namespace=k8s.io Nov 12 17:51:09.377316 containerd[1440]: time="2024-11-12T17:51:09.377302936Z" level=warning msg="cleaning up after shim disconnected" id=a997940e8ec2c2a0181eb422f80340ec86f6f04d0c7ab6ac44229a2a19b5ef95 namespace=k8s.io Nov 12 17:51:09.377316 containerd[1440]: time="2024-11-12T17:51:09.377311456Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 12 17:51:10.017686 kubelet[2561]: E1112 17:51:10.017588 2561 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 17:51:10.238614 kubelet[2561]: E1112 17:51:10.238586 2561 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 17:51:10.241764 containerd[1440]: time="2024-11-12T17:51:10.241707029Z" level=info msg="CreateContainer within sandbox \"6dfc439889d408e6cf81b9355a7ac8c09653c7bddab4d882847b0b2e67d71337\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Nov 12 17:51:10.260806 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4217157855.mount: Deactivated successfully. Nov 12 17:51:10.261623 containerd[1440]: time="2024-11-12T17:51:10.261567875Z" level=info msg="CreateContainer within sandbox \"6dfc439889d408e6cf81b9355a7ac8c09653c7bddab4d882847b0b2e67d71337\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"6206626f086a1214f7af9fee50d506f33ad4edfa6ec2f5e73c81ef53501b178a\"" Nov 12 17:51:10.262114 containerd[1440]: time="2024-11-12T17:51:10.262074257Z" level=info msg="StartContainer for \"6206626f086a1214f7af9fee50d506f33ad4edfa6ec2f5e73c81ef53501b178a\"" Nov 12 17:51:10.290937 systemd[1]: Started cri-containerd-6206626f086a1214f7af9fee50d506f33ad4edfa6ec2f5e73c81ef53501b178a.scope - libcontainer container 6206626f086a1214f7af9fee50d506f33ad4edfa6ec2f5e73c81ef53501b178a. Nov 12 17:51:10.313147 containerd[1440]: time="2024-11-12T17:51:10.313049526Z" level=info msg="StartContainer for \"6206626f086a1214f7af9fee50d506f33ad4edfa6ec2f5e73c81ef53501b178a\" returns successfully" Nov 12 17:51:10.328383 systemd[1]: run-containerd-runc-k8s.io-6206626f086a1214f7af9fee50d506f33ad4edfa6ec2f5e73c81ef53501b178a-runc.R2TvFd.mount: Deactivated successfully. 
Nov 12 17:51:10.586837 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce)) Nov 12 17:51:11.245418 kubelet[2561]: E1112 17:51:11.245389 2561 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 17:51:12.372440 kubelet[2561]: E1112 17:51:12.372278 2561 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 17:51:13.365928 systemd-networkd[1384]: lxc_health: Link UP Nov 12 17:51:13.379638 systemd-networkd[1384]: lxc_health: Gained carrier Nov 12 17:51:14.374081 kubelet[2561]: E1112 17:51:14.374041 2561 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 17:51:14.393284 kubelet[2561]: I1112 17:51:14.393201 2561 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-9njkw" podStartSLOduration=8.393162815 podStartE2EDuration="8.393162815s" podCreationTimestamp="2024-11-12 17:51:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-11-12 17:51:11.25993316 +0000 UTC m=+84.343559356" watchObservedRunningTime="2024-11-12 17:51:14.393162815 +0000 UTC m=+87.476789011" Nov 12 17:51:15.116917 systemd-networkd[1384]: lxc_health: Gained IPv6LL Nov 12 17:51:15.252632 kubelet[2561]: E1112 17:51:15.252590 2561 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 17:51:16.254623 kubelet[2561]: E1112 17:51:16.254569 2561 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 17:51:19.012499 sshd[4369]: pam_unix(sshd:session): session closed for user core Nov 12 17:51:19.016009 systemd[1]: sshd@25-10.0.0.80:22-10.0.0.1:45944.service: Deactivated successfully. Nov 12 17:51:19.017718 systemd[1]: session-26.scope: Deactivated successfully. Nov 12 17:51:19.019049 systemd-logind[1426]: Session 26 logged out. Waiting for processes to exit. Nov 12 17:51:19.020170 systemd-logind[1426]: Removed session 26.