Nov 12 17:38:59.937605 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] Nov 12 17:38:59.937626 kernel: Linux version 6.6.60-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Tue Nov 12 16:24:35 -00 2024 Nov 12 17:38:59.937636 kernel: KASLR enabled Nov 12 17:38:59.937642 kernel: efi: EFI v2.7 by EDK II Nov 12 17:38:59.937648 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdba86018 ACPI 2.0=0xd9710018 RNG=0xd971e498 MEMRESERVE=0xd9b43d18 Nov 12 17:38:59.937654 kernel: random: crng init done Nov 12 17:38:59.937662 kernel: ACPI: Early table checksum verification disabled Nov 12 17:38:59.937668 kernel: ACPI: RSDP 0x00000000D9710018 000024 (v02 BOCHS ) Nov 12 17:38:59.937675 kernel: ACPI: XSDT 0x00000000D971FE98 000064 (v01 BOCHS BXPC 00000001 01000013) Nov 12 17:38:59.937683 kernel: ACPI: FACP 0x00000000D971FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001) Nov 12 17:38:59.937689 kernel: ACPI: DSDT 0x00000000D9717518 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001) Nov 12 17:38:59.937696 kernel: ACPI: APIC 0x00000000D971FC18 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001) Nov 12 17:38:59.937702 kernel: ACPI: PPTT 0x00000000D971D898 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001) Nov 12 17:38:59.937798 kernel: ACPI: GTDT 0x00000000D971E818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001) Nov 12 17:38:59.937807 kernel: ACPI: MCFG 0x00000000D971E918 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Nov 12 17:38:59.937816 kernel: ACPI: SPCR 0x00000000D971FF98 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001) Nov 12 17:38:59.937823 kernel: ACPI: DBG2 0x00000000D971E418 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001) Nov 12 17:38:59.937830 kernel: ACPI: IORT 0x00000000D971E718 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001) Nov 12 17:38:59.937837 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600 Nov 12 17:38:59.937843 kernel: NUMA: Failed to initialise from firmware Nov 12 17:38:59.937851 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff] Nov 12 17:38:59.937857 kernel: NUMA: NODE_DATA [mem 0xdc957800-0xdc95cfff] Nov 12 17:38:59.937865 kernel: Zone ranges: Nov 12 17:38:59.937871 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff] Nov 12 17:38:59.937878 kernel: DMA32 empty Nov 12 17:38:59.937887 kernel: Normal empty Nov 12 17:38:59.937893 kernel: Movable zone start for each node Nov 12 17:38:59.937900 kernel: Early memory node ranges Nov 12 17:38:59.937907 kernel: node 0: [mem 0x0000000040000000-0x00000000d976ffff] Nov 12 17:38:59.937914 kernel: node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff] Nov 12 17:38:59.937921 kernel: node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff] Nov 12 17:38:59.937927 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff] Nov 12 17:38:59.937934 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff] Nov 12 17:38:59.937941 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff] Nov 12 17:38:59.937947 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff] Nov 12 17:38:59.937954 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff] Nov 12 17:38:59.937961 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges Nov 12 17:38:59.937969 kernel: psci: probing for conduit method from ACPI. Nov 12 17:38:59.937976 kernel: psci: PSCIv1.1 detected in firmware. 
Nov 12 17:38:59.937983 kernel: psci: Using standard PSCI v0.2 function IDs Nov 12 17:38:59.937992 kernel: psci: Trusted OS migration not required Nov 12 17:38:59.937999 kernel: psci: SMC Calling Convention v1.1 Nov 12 17:38:59.938007 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003) Nov 12 17:38:59.938015 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976 Nov 12 17:38:59.938023 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096 Nov 12 17:38:59.938030 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3 Nov 12 17:38:59.938037 kernel: Detected PIPT I-cache on CPU0 Nov 12 17:38:59.938044 kernel: CPU features: detected: GIC system register CPU interface Nov 12 17:38:59.938051 kernel: CPU features: detected: Hardware dirty bit management Nov 12 17:38:59.938058 kernel: CPU features: detected: Spectre-v4 Nov 12 17:38:59.938066 kernel: CPU features: detected: Spectre-BHB Nov 12 17:38:59.938073 kernel: CPU features: kernel page table isolation forced ON by KASLR Nov 12 17:38:59.938080 kernel: CPU features: detected: Kernel page table isolation (KPTI) Nov 12 17:38:59.938088 kernel: CPU features: detected: ARM erratum 1418040 Nov 12 17:38:59.938096 kernel: CPU features: detected: SSBS not fully self-synchronizing Nov 12 17:38:59.938103 kernel: alternatives: applying boot alternatives Nov 12 17:38:59.938111 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=8c276c03cfeb31103ba0b5f1af613bdc698463ad3d29e6750e34154929bf187e Nov 12 17:38:59.938119 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Nov 12 17:38:59.938126 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Nov 12 17:38:59.938133 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Nov 12 17:38:59.938140 kernel: Fallback order for Node 0: 0 Nov 12 17:38:59.938148 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024 Nov 12 17:38:59.938155 kernel: Policy zone: DMA Nov 12 17:38:59.938162 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Nov 12 17:38:59.938170 kernel: software IO TLB: area num 4. Nov 12 17:38:59.938177 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB) Nov 12 17:38:59.938185 kernel: Memory: 2386528K/2572288K available (10240K kernel code, 2184K rwdata, 8096K rodata, 39360K init, 897K bss, 185760K reserved, 0K cma-reserved) Nov 12 17:38:59.938192 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Nov 12 17:38:59.938199 kernel: trace event string verifier disabled Nov 12 17:38:59.938207 kernel: rcu: Preemptible hierarchical RCU implementation. Nov 12 17:38:59.938214 kernel: rcu: RCU event tracing is enabled. Nov 12 17:38:59.938222 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Nov 12 17:38:59.938229 kernel: Trampoline variant of Tasks RCU enabled. Nov 12 17:38:59.938236 kernel: Tracing variant of Tasks RCU enabled. Nov 12 17:38:59.938244 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Nov 12 17:38:59.938251 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Nov 12 17:38:59.938260 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Nov 12 17:38:59.938278 kernel: GICv3: 256 SPIs implemented Nov 12 17:38:59.938285 kernel: GICv3: 0 Extended SPIs implemented Nov 12 17:38:59.938292 kernel: Root IRQ handler: gic_handle_irq Nov 12 17:38:59.938300 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI Nov 12 17:38:59.938307 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000 Nov 12 17:38:59.938314 kernel: ITS [mem 0x08080000-0x0809ffff] Nov 12 17:38:59.938321 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400c0000 (indirect, esz 8, psz 64K, shr 1) Nov 12 17:38:59.938329 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400d0000 (flat, esz 8, psz 64K, shr 1) Nov 12 17:38:59.938336 kernel: GICv3: using LPI property table @0x00000000400f0000 Nov 12 17:38:59.938343 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000 Nov 12 17:38:59.938352 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Nov 12 17:38:59.938359 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Nov 12 17:38:59.938366 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). Nov 12 17:38:59.938374 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns Nov 12 17:38:59.938381 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns Nov 12 17:38:59.938388 kernel: arm-pv: using stolen time PV Nov 12 17:38:59.938396 kernel: Console: colour dummy device 80x25 Nov 12 17:38:59.938403 kernel: ACPI: Core revision 20230628 Nov 12 17:38:59.938411 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) Nov 12 17:38:59.938418 kernel: pid_max: default: 32768 minimum: 301 Nov 12 17:38:59.938427 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Nov 12 17:38:59.938434 kernel: landlock: Up and running. Nov 12 17:38:59.938441 kernel: SELinux: Initializing. Nov 12 17:38:59.938449 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Nov 12 17:38:59.938456 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Nov 12 17:38:59.938464 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Nov 12 17:38:59.938472 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Nov 12 17:38:59.938479 kernel: rcu: Hierarchical SRCU implementation. Nov 12 17:38:59.938494 kernel: rcu: Max phase no-delay instances is 400. Nov 12 17:38:59.938503 kernel: Platform MSI: ITS@0x8080000 domain created Nov 12 17:38:59.938511 kernel: PCI/MSI: ITS@0x8080000 domain created Nov 12 17:38:59.938518 kernel: Remapping and enabling EFI services. Nov 12 17:38:59.938525 kernel: smp: Bringing up secondary CPUs ... 
Nov 12 17:38:59.938533 kernel: Detected PIPT I-cache on CPU1 Nov 12 17:38:59.938540 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000 Nov 12 17:38:59.938548 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000 Nov 12 17:38:59.938555 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Nov 12 17:38:59.938563 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] Nov 12 17:38:59.938570 kernel: Detected PIPT I-cache on CPU2 Nov 12 17:38:59.938579 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000 Nov 12 17:38:59.938587 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000 Nov 12 17:38:59.938599 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Nov 12 17:38:59.938608 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1] Nov 12 17:38:59.938616 kernel: Detected PIPT I-cache on CPU3 Nov 12 17:38:59.938626 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000 Nov 12 17:38:59.938634 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000 Nov 12 17:38:59.938642 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Nov 12 17:38:59.938649 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1] Nov 12 17:38:59.938659 kernel: smp: Brought up 1 node, 4 CPUs Nov 12 17:38:59.938667 kernel: SMP: Total of 4 processors activated. Nov 12 17:38:59.938675 kernel: CPU features: detected: 32-bit EL0 Support Nov 12 17:38:59.938683 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Nov 12 17:38:59.938691 kernel: CPU features: detected: Common not Private translations Nov 12 17:38:59.938699 kernel: CPU features: detected: CRC32 instructions Nov 12 17:38:59.938713 kernel: CPU features: detected: Enhanced Virtualization Traps Nov 12 17:38:59.938721 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Nov 12 17:38:59.938730 kernel: CPU features: detected: LSE atomic instructions Nov 12 17:38:59.938738 kernel: CPU features: detected: Privileged Access Never Nov 12 17:38:59.938746 kernel: CPU features: detected: RAS Extension Support Nov 12 17:38:59.938754 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS) Nov 12 17:38:59.938761 kernel: CPU: All CPU(s) started at EL1 Nov 12 17:38:59.938769 kernel: alternatives: applying system-wide alternatives Nov 12 17:38:59.938777 kernel: devtmpfs: initialized Nov 12 17:38:59.938785 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Nov 12 17:38:59.938793 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Nov 12 17:38:59.938802 kernel: pinctrl core: initialized pinctrl subsystem Nov 12 17:38:59.938810 kernel: SMBIOS 3.0.0 present. 
Nov 12 17:38:59.938818 kernel: DMI: QEMU KVM Virtual Machine, BIOS edk2-20230524-3.fc38 05/24/2023 Nov 12 17:38:59.938826 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Nov 12 17:38:59.938833 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Nov 12 17:38:59.938841 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Nov 12 17:38:59.938849 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Nov 12 17:38:59.938857 kernel: audit: initializing netlink subsys (disabled) Nov 12 17:38:59.938865 kernel: audit: type=2000 audit(0.023:1): state=initialized audit_enabled=0 res=1 Nov 12 17:38:59.938874 kernel: thermal_sys: Registered thermal governor 'step_wise' Nov 12 17:38:59.938883 kernel: cpuidle: using governor menu Nov 12 17:38:59.938890 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. Nov 12 17:38:59.938898 kernel: ASID allocator initialised with 32768 entries Nov 12 17:38:59.938906 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Nov 12 17:38:59.938914 kernel: Serial: AMBA PL011 UART driver Nov 12 17:38:59.938922 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL Nov 12 17:38:59.938929 kernel: Modules: 0 pages in range for non-PLT usage Nov 12 17:38:59.938937 kernel: Modules: 509040 pages in range for PLT usage Nov 12 17:38:59.938947 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Nov 12 17:38:59.938954 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Nov 12 17:38:59.938962 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Nov 12 17:38:59.938970 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Nov 12 17:38:59.938978 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Nov 12 17:38:59.938985 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Nov 12 17:38:59.938993 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Nov 12 17:38:59.939001 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Nov 12 17:38:59.939008 kernel: ACPI: Added _OSI(Module Device) Nov 12 17:38:59.939018 kernel: ACPI: Added _OSI(Processor Device) Nov 12 17:38:59.939025 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Nov 12 17:38:59.939033 kernel: ACPI: Added _OSI(Processor Aggregator Device) Nov 12 17:38:59.939041 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Nov 12 17:38:59.939049 kernel: ACPI: Interpreter enabled Nov 12 17:38:59.939057 kernel: ACPI: Using GIC for interrupt routing Nov 12 17:38:59.939065 kernel: ACPI: MCFG table detected, 1 entries Nov 12 17:38:59.939072 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA Nov 12 17:38:59.939080 kernel: printk: console [ttyAMA0] enabled Nov 12 17:38:59.939089 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Nov 12 17:38:59.939235 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Nov 12 17:38:59.939319 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] Nov 12 17:38:59.939393 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Nov 12 17:38:59.939463 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00 Nov 12 17:38:59.939541 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff] Nov 12 17:38:59.939553 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window] Nov 12 
17:38:59.939564 kernel: PCI host bridge to bus 0000:00 Nov 12 17:38:59.939640 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window] Nov 12 17:38:59.939714 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window] Nov 12 17:38:59.939799 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window] Nov 12 17:38:59.939865 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Nov 12 17:38:59.939954 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 Nov 12 17:38:59.940043 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 Nov 12 17:38:59.940122 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f] Nov 12 17:38:59.940196 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff] Nov 12 17:38:59.940268 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref] Nov 12 17:38:59.940340 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref] Nov 12 17:38:59.940413 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff] Nov 12 17:38:59.940492 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f] Nov 12 17:38:59.940560 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window] Nov 12 17:38:59.940627 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] Nov 12 17:38:59.940691 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window] Nov 12 17:38:59.940702 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 Nov 12 17:38:59.940733 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 Nov 12 17:38:59.940741 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 Nov 12 17:38:59.940749 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 Nov 12 17:38:59.940757 kernel: iommu: Default domain type: Translated Nov 12 17:38:59.940765 kernel: iommu: DMA domain TLB invalidation policy: strict mode Nov 12 17:38:59.940776 kernel: efivars: Registered efivars operations Nov 12 17:38:59.940784 kernel: vgaarb: loaded Nov 12 17:38:59.940792 kernel: clocksource: Switched to clocksource arch_sys_counter Nov 12 17:38:59.940800 kernel: VFS: Disk quotas dquot_6.6.0 Nov 12 17:38:59.940808 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Nov 12 17:38:59.940816 kernel: pnp: PnP ACPI init Nov 12 17:38:59.940895 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved Nov 12 17:38:59.940907 kernel: pnp: PnP ACPI: found 1 devices Nov 12 17:38:59.940917 kernel: NET: Registered PF_INET protocol family Nov 12 17:38:59.940937 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Nov 12 17:38:59.940946 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Nov 12 17:38:59.940955 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Nov 12 17:38:59.940963 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Nov 12 17:38:59.940971 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Nov 12 17:38:59.940979 kernel: TCP: Hash tables configured (established 32768 bind 32768) Nov 12 17:38:59.940987 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Nov 12 17:38:59.940995 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Nov 12 17:38:59.941004 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Nov 12 17:38:59.941012 kernel: PCI: CLS 0 bytes, default 64 Nov 12 17:38:59.941020 kernel: kvm [1]: HYP mode 
not available Nov 12 17:38:59.941028 kernel: Initialise system trusted keyrings Nov 12 17:38:59.941036 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Nov 12 17:38:59.941044 kernel: Key type asymmetric registered Nov 12 17:38:59.941052 kernel: Asymmetric key parser 'x509' registered Nov 12 17:38:59.941060 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Nov 12 17:38:59.941068 kernel: io scheduler mq-deadline registered Nov 12 17:38:59.941077 kernel: io scheduler kyber registered Nov 12 17:38:59.941085 kernel: io scheduler bfq registered Nov 12 17:38:59.941093 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Nov 12 17:38:59.941101 kernel: ACPI: button: Power Button [PWRB] Nov 12 17:38:59.941109 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Nov 12 17:38:59.941179 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007) Nov 12 17:38:59.941189 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Nov 12 17:38:59.941197 kernel: thunder_xcv, ver 1.0 Nov 12 17:38:59.941205 kernel: thunder_bgx, ver 1.0 Nov 12 17:38:59.941215 kernel: nicpf, ver 1.0 Nov 12 17:38:59.941223 kernel: nicvf, ver 1.0 Nov 12 17:38:59.941297 kernel: rtc-efi rtc-efi.0: registered as rtc0 Nov 12 17:38:59.941361 kernel: rtc-efi rtc-efi.0: setting system clock to 2024-11-12T17:38:59 UTC (1731433139) Nov 12 17:38:59.941372 kernel: hid: raw HID events driver (C) Jiri Kosina Nov 12 17:38:59.941380 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available Nov 12 17:38:59.941388 kernel: watchdog: Delayed init of the lockup detector failed: -19 Nov 12 17:38:59.941396 kernel: watchdog: Hard watchdog permanently disabled Nov 12 17:38:59.941405 kernel: NET: Registered PF_INET6 protocol family Nov 12 17:38:59.941413 kernel: Segment Routing with IPv6 Nov 12 17:38:59.941421 kernel: In-situ OAM (IOAM) with IPv6 Nov 12 17:38:59.941429 kernel: NET: Registered PF_PACKET protocol family Nov 12 17:38:59.941437 kernel: Key type dns_resolver registered Nov 12 17:38:59.941444 kernel: registered taskstats version 1 Nov 12 17:38:59.941452 kernel: Loading compiled-in X.509 certificates Nov 12 17:38:59.941460 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.60-flatcar: 277bea35d8d47c9841f307ab609d4271c3622dcb' Nov 12 17:38:59.941468 kernel: Key type .fscrypt registered Nov 12 17:38:59.941477 kernel: Key type fscrypt-provisioning registered Nov 12 17:38:59.941492 kernel: ima: No TPM chip found, activating TPM-bypass! 
Nov 12 17:38:59.941500 kernel: ima: Allocated hash algorithm: sha1 Nov 12 17:38:59.941508 kernel: ima: No architecture policies found Nov 12 17:38:59.941516 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Nov 12 17:38:59.941524 kernel: clk: Disabling unused clocks Nov 12 17:38:59.941531 kernel: Freeing unused kernel memory: 39360K Nov 12 17:38:59.941539 kernel: Run /init as init process Nov 12 17:38:59.941547 kernel: with arguments: Nov 12 17:38:59.941557 kernel: /init Nov 12 17:38:59.941565 kernel: with environment: Nov 12 17:38:59.941572 kernel: HOME=/ Nov 12 17:38:59.941580 kernel: TERM=linux Nov 12 17:38:59.941588 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Nov 12 17:38:59.941598 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Nov 12 17:38:59.941608 systemd[1]: Detected virtualization kvm. Nov 12 17:38:59.941616 systemd[1]: Detected architecture arm64. Nov 12 17:38:59.941626 systemd[1]: Running in initrd. Nov 12 17:38:59.941634 systemd[1]: No hostname configured, using default hostname. Nov 12 17:38:59.941642 systemd[1]: Hostname set to . Nov 12 17:38:59.941651 systemd[1]: Initializing machine ID from VM UUID. Nov 12 17:38:59.941663 systemd[1]: Queued start job for default target initrd.target. Nov 12 17:38:59.941671 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 12 17:38:59.941680 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 12 17:38:59.941689 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Nov 12 17:38:59.941699 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Nov 12 17:38:59.941716 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Nov 12 17:38:59.941725 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Nov 12 17:38:59.941735 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Nov 12 17:38:59.941744 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Nov 12 17:38:59.941753 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 12 17:38:59.941761 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Nov 12 17:38:59.941771 systemd[1]: Reached target paths.target - Path Units. Nov 12 17:38:59.941780 systemd[1]: Reached target slices.target - Slice Units. Nov 12 17:38:59.941788 systemd[1]: Reached target swap.target - Swaps. Nov 12 17:38:59.941797 systemd[1]: Reached target timers.target - Timer Units. Nov 12 17:38:59.941805 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Nov 12 17:38:59.941814 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Nov 12 17:38:59.941822 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Nov 12 17:38:59.941831 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Nov 12 17:38:59.941841 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. 
Nov 12 17:38:59.941849 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Nov 12 17:38:59.941858 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Nov 12 17:38:59.941866 systemd[1]: Reached target sockets.target - Socket Units. Nov 12 17:38:59.941875 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Nov 12 17:38:59.941883 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Nov 12 17:38:59.941892 systemd[1]: Finished network-cleanup.service - Network Cleanup. Nov 12 17:38:59.941900 systemd[1]: Starting systemd-fsck-usr.service... Nov 12 17:38:59.941909 systemd[1]: Starting systemd-journald.service - Journal Service... Nov 12 17:38:59.941919 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Nov 12 17:38:59.941927 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 12 17:38:59.941936 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Nov 12 17:38:59.941944 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Nov 12 17:38:59.941952 systemd[1]: Finished systemd-fsck-usr.service. Nov 12 17:38:59.941961 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Nov 12 17:38:59.941990 systemd-journald[238]: Collecting audit messages is disabled. Nov 12 17:38:59.942012 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 12 17:38:59.942022 systemd-journald[238]: Journal started Nov 12 17:38:59.942041 systemd-journald[238]: Runtime Journal (/run/log/journal/d542115b0ce1487590eb8ce435d1e94e) is 5.9M, max 47.3M, 41.4M free. Nov 12 17:38:59.932678 systemd-modules-load[239]: Inserted module 'overlay' Nov 12 17:38:59.945205 systemd[1]: Started systemd-journald.service - Journal Service. Nov 12 17:38:59.945594 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Nov 12 17:38:59.950730 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Nov 12 17:38:59.952673 systemd-modules-load[239]: Inserted module 'br_netfilter' Nov 12 17:38:59.953533 kernel: Bridge firewalling registered Nov 12 17:38:59.955842 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Nov 12 17:38:59.957554 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Nov 12 17:38:59.961637 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Nov 12 17:38:59.962954 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Nov 12 17:38:59.967466 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Nov 12 17:38:59.972525 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 12 17:38:59.974093 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 12 17:38:59.976427 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 12 17:38:59.979892 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Nov 12 17:38:59.981700 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Nov 12 17:38:59.984137 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... 
Nov 12 17:38:59.993632 dracut-cmdline[275]: dracut-dracut-053 Nov 12 17:38:59.996086 dracut-cmdline[275]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=8c276c03cfeb31103ba0b5f1af613bdc698463ad3d29e6750e34154929bf187e Nov 12 17:39:00.013265 systemd-resolved[277]: Positive Trust Anchors: Nov 12 17:39:00.013285 systemd-resolved[277]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Nov 12 17:39:00.013316 systemd-resolved[277]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Nov 12 17:39:00.018231 systemd-resolved[277]: Defaulting to hostname 'linux'. Nov 12 17:39:00.019414 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Nov 12 17:39:00.022968 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Nov 12 17:39:00.070769 kernel: SCSI subsystem initialized Nov 12 17:39:00.075726 kernel: Loading iSCSI transport class v2.0-870. Nov 12 17:39:00.083734 kernel: iscsi: registered transport (tcp) Nov 12 17:39:00.096755 kernel: iscsi: registered transport (qla4xxx) Nov 12 17:39:00.096780 kernel: QLogic iSCSI HBA Driver Nov 12 17:39:00.142515 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Nov 12 17:39:00.157851 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Nov 12 17:39:00.177447 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Nov 12 17:39:00.177496 kernel: device-mapper: uevent: version 1.0.3 Nov 12 17:39:00.177528 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Nov 12 17:39:00.224745 kernel: raid6: neonx8 gen() 15769 MB/s Nov 12 17:39:00.241726 kernel: raid6: neonx4 gen() 15653 MB/s Nov 12 17:39:00.258727 kernel: raid6: neonx2 gen() 13199 MB/s Nov 12 17:39:00.275737 kernel: raid6: neonx1 gen() 10461 MB/s Nov 12 17:39:00.292735 kernel: raid6: int64x8 gen() 6943 MB/s Nov 12 17:39:00.309731 kernel: raid6: int64x4 gen() 7308 MB/s Nov 12 17:39:00.326739 kernel: raid6: int64x2 gen() 6124 MB/s Nov 12 17:39:00.343885 kernel: raid6: int64x1 gen() 5049 MB/s Nov 12 17:39:00.343927 kernel: raid6: using algorithm neonx8 gen() 15769 MB/s Nov 12 17:39:00.361856 kernel: raid6: .... xor() 11910 MB/s, rmw enabled Nov 12 17:39:00.361901 kernel: raid6: using neon recovery algorithm Nov 12 17:39:00.366731 kernel: xor: measuring software checksum speed Nov 12 17:39:00.368005 kernel: 8regs : 17563 MB/sec Nov 12 17:39:00.368020 kernel: 32regs : 19646 MB/sec Nov 12 17:39:00.369352 kernel: arm64_neon : 26822 MB/sec Nov 12 17:39:00.369375 kernel: xor: using function: arm64_neon (26822 MB/sec) Nov 12 17:39:00.420745 kernel: Btrfs loaded, zoned=no, fsverity=no Nov 12 17:39:00.432419 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. 
Nov 12 17:39:00.441860 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 12 17:39:00.455394 systemd-udevd[460]: Using default interface naming scheme 'v255'. Nov 12 17:39:00.458551 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 12 17:39:00.462271 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Nov 12 17:39:00.476984 dracut-pre-trigger[468]: rd.md=0: removing MD RAID activation Nov 12 17:39:00.504005 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Nov 12 17:39:00.519912 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Nov 12 17:39:00.560005 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Nov 12 17:39:00.567894 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Nov 12 17:39:00.584741 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Nov 12 17:39:00.586298 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Nov 12 17:39:00.588180 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 12 17:39:00.590449 systemd[1]: Reached target remote-fs.target - Remote File Systems. Nov 12 17:39:00.601585 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Nov 12 17:39:00.606303 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues Nov 12 17:39:00.613816 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Nov 12 17:39:00.613923 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Nov 12 17:39:00.613935 kernel: GPT:9289727 != 19775487 Nov 12 17:39:00.613945 kernel: GPT:Alternate GPT header not at the end of the disk. Nov 12 17:39:00.613955 kernel: GPT:9289727 != 19775487 Nov 12 17:39:00.613964 kernel: GPT: Use GNU Parted to correct GPT errors. Nov 12 17:39:00.613981 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Nov 12 17:39:00.617272 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Nov 12 17:39:00.623445 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Nov 12 17:39:00.623577 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 12 17:39:00.629642 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Nov 12 17:39:00.633363 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (507) Nov 12 17:39:00.633387 kernel: BTRFS: device fsid 93a9d474-e751-47b7-a65f-e39ca9abd47a devid 1 transid 40 /dev/vda3 scanned by (udev-worker) (516) Nov 12 17:39:00.630782 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 12 17:39:00.630942 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 12 17:39:00.634470 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Nov 12 17:39:00.649975 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 12 17:39:00.657975 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Nov 12 17:39:00.662232 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 12 17:39:00.670743 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Nov 12 17:39:00.676807 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. 
Nov 12 17:39:00.680658 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Nov 12 17:39:00.681884 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Nov 12 17:39:00.690838 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Nov 12 17:39:00.692638 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Nov 12 17:39:00.698095 disk-uuid[551]: Primary Header is updated. Nov 12 17:39:00.698095 disk-uuid[551]: Secondary Entries is updated. Nov 12 17:39:00.698095 disk-uuid[551]: Secondary Header is updated. Nov 12 17:39:00.701728 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Nov 12 17:39:00.718281 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 12 17:39:01.714736 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Nov 12 17:39:01.715117 disk-uuid[552]: The operation has completed successfully. Nov 12 17:39:01.744822 systemd[1]: disk-uuid.service: Deactivated successfully. Nov 12 17:39:01.744925 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Nov 12 17:39:01.768881 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Nov 12 17:39:01.771666 sh[575]: Success Nov 12 17:39:01.783739 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Nov 12 17:39:01.823082 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Nov 12 17:39:01.833511 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Nov 12 17:39:01.837217 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Nov 12 17:39:01.849428 kernel: BTRFS info (device dm-0): first mount of filesystem 93a9d474-e751-47b7-a65f-e39ca9abd47a Nov 12 17:39:01.849483 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Nov 12 17:39:01.849498 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Nov 12 17:39:01.850621 kernel: BTRFS info (device dm-0): disabling log replay at mount time Nov 12 17:39:01.851902 kernel: BTRFS info (device dm-0): using free space tree Nov 12 17:39:01.856829 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Nov 12 17:39:01.857844 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Nov 12 17:39:01.866899 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Nov 12 17:39:01.869137 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Nov 12 17:39:01.878215 kernel: BTRFS info (device vda6): first mount of filesystem 936a2172-6c61-4af6-a047-e38e0a3ff18b Nov 12 17:39:01.878261 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Nov 12 17:39:01.878273 kernel: BTRFS info (device vda6): using free space tree Nov 12 17:39:01.881729 kernel: BTRFS info (device vda6): auto enabling async discard Nov 12 17:39:01.889212 systemd[1]: mnt-oem.mount: Deactivated successfully. Nov 12 17:39:01.891252 kernel: BTRFS info (device vda6): last unmount of filesystem 936a2172-6c61-4af6-a047-e38e0a3ff18b Nov 12 17:39:01.899032 systemd[1]: Finished ignition-setup.service - Ignition (setup). Nov 12 17:39:01.906900 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
Nov 12 17:39:01.982638 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Nov 12 17:39:01.991767 systemd[1]: Starting systemd-networkd.service - Network Configuration... Nov 12 17:39:02.022712 systemd-networkd[766]: lo: Link UP Nov 12 17:39:02.022723 systemd-networkd[766]: lo: Gained carrier Nov 12 17:39:02.023636 systemd-networkd[766]: Enumeration completed Nov 12 17:39:02.024345 ignition[666]: Ignition 2.19.0 Nov 12 17:39:02.024317 systemd-networkd[766]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 12 17:39:02.024351 ignition[666]: Stage: fetch-offline Nov 12 17:39:02.024320 systemd-networkd[766]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Nov 12 17:39:02.024387 ignition[666]: no configs at "/usr/lib/ignition/base.d" Nov 12 17:39:02.024472 systemd[1]: Started systemd-networkd.service - Network Configuration. Nov 12 17:39:02.024395 ignition[666]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Nov 12 17:39:02.025847 systemd-networkd[766]: eth0: Link UP Nov 12 17:39:02.024553 ignition[666]: parsed url from cmdline: "" Nov 12 17:39:02.025851 systemd-networkd[766]: eth0: Gained carrier Nov 12 17:39:02.024556 ignition[666]: no config URL provided Nov 12 17:39:02.025858 systemd-networkd[766]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 12 17:39:02.024560 ignition[666]: reading system config file "/usr/lib/ignition/user.ign" Nov 12 17:39:02.027832 systemd[1]: Reached target network.target - Network. Nov 12 17:39:02.024568 ignition[666]: no config at "/usr/lib/ignition/user.ign" Nov 12 17:39:02.024590 ignition[666]: op(1): [started] loading QEMU firmware config module Nov 12 17:39:02.024594 ignition[666]: op(1): executing: "modprobe" "qemu_fw_cfg" Nov 12 17:39:02.035462 ignition[666]: op(1): [finished] loading QEMU firmware config module Nov 12 17:39:02.046748 systemd-networkd[766]: eth0: DHCPv4 address 10.0.0.22/16, gateway 10.0.0.1 acquired from 10.0.0.1 Nov 12 17:39:02.081920 ignition[666]: parsing config with SHA512: 9bb807e2e78f9743fda3548fa1d90eeb437f9a85e51960a14bbbebb8c7603532c83df081e74c9c512b1c7c2ddf031429e3b32ad66fad572f7eb3d0fb0fa60915 Nov 12 17:39:02.087700 unknown[666]: fetched base config from "system" Nov 12 17:39:02.088546 unknown[666]: fetched user config from "qemu" Nov 12 17:39:02.089050 ignition[666]: fetch-offline: fetch-offline passed Nov 12 17:39:02.089119 ignition[666]: Ignition finished successfully Nov 12 17:39:02.092754 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Nov 12 17:39:02.094347 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Nov 12 17:39:02.103903 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Nov 12 17:39:02.115174 ignition[774]: Ignition 2.19.0 Nov 12 17:39:02.115184 ignition[774]: Stage: kargs Nov 12 17:39:02.115340 ignition[774]: no configs at "/usr/lib/ignition/base.d" Nov 12 17:39:02.115350 ignition[774]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Nov 12 17:39:02.116426 ignition[774]: kargs: kargs passed Nov 12 17:39:02.119077 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). 
Nov 12 17:39:02.116472 ignition[774]: Ignition finished successfully Nov 12 17:39:02.129895 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Nov 12 17:39:02.139747 ignition[782]: Ignition 2.19.0 Nov 12 17:39:02.139759 ignition[782]: Stage: disks Nov 12 17:39:02.139932 ignition[782]: no configs at "/usr/lib/ignition/base.d" Nov 12 17:39:02.139942 ignition[782]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Nov 12 17:39:02.143081 systemd[1]: Finished ignition-disks.service - Ignition (disks). Nov 12 17:39:02.140830 ignition[782]: disks: disks passed Nov 12 17:39:02.145444 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Nov 12 17:39:02.140876 ignition[782]: Ignition finished successfully Nov 12 17:39:02.147147 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Nov 12 17:39:02.148926 systemd[1]: Reached target local-fs.target - Local File Systems. Nov 12 17:39:02.149883 systemd[1]: Reached target sysinit.target - System Initialization. Nov 12 17:39:02.151678 systemd[1]: Reached target basic.target - Basic System. Nov 12 17:39:02.170086 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Nov 12 17:39:02.181009 systemd-fsck[792]: ROOT: clean, 14/553520 files, 52654/553472 blocks Nov 12 17:39:02.185422 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Nov 12 17:39:02.189033 systemd[1]: Mounting sysroot.mount - /sysroot... Nov 12 17:39:02.239727 kernel: EXT4-fs (vda9): mounted filesystem b3af0fd7-3c7c-4cdc-9b88-dae3d10ea922 r/w with ordered data mode. Quota mode: none. Nov 12 17:39:02.239891 systemd[1]: Mounted sysroot.mount - /sysroot. Nov 12 17:39:02.241107 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Nov 12 17:39:02.248834 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Nov 12 17:39:02.250543 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Nov 12 17:39:02.252799 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Nov 12 17:39:02.252850 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Nov 12 17:39:02.252871 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Nov 12 17:39:02.263175 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (800) Nov 12 17:39:02.263198 kernel: BTRFS info (device vda6): first mount of filesystem 936a2172-6c61-4af6-a047-e38e0a3ff18b Nov 12 17:39:02.263210 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Nov 12 17:39:02.263220 kernel: BTRFS info (device vda6): using free space tree Nov 12 17:39:02.257128 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Nov 12 17:39:02.260172 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Nov 12 17:39:02.267571 kernel: BTRFS info (device vda6): auto enabling async discard Nov 12 17:39:02.268985 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Nov 12 17:39:02.309662 initrd-setup-root[824]: cut: /sysroot/etc/passwd: No such file or directory Nov 12 17:39:02.314151 initrd-setup-root[831]: cut: /sysroot/etc/group: No such file or directory Nov 12 17:39:02.317469 initrd-setup-root[838]: cut: /sysroot/etc/shadow: No such file or directory Nov 12 17:39:02.321581 initrd-setup-root[845]: cut: /sysroot/etc/gshadow: No such file or directory Nov 12 17:39:02.399635 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Nov 12 17:39:02.409842 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Nov 12 17:39:02.411409 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Nov 12 17:39:02.417722 kernel: BTRFS info (device vda6): last unmount of filesystem 936a2172-6c61-4af6-a047-e38e0a3ff18b Nov 12 17:39:02.434983 ignition[914]: INFO : Ignition 2.19.0 Nov 12 17:39:02.434983 ignition[914]: INFO : Stage: mount Nov 12 17:39:02.436600 ignition[914]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 12 17:39:02.436600 ignition[914]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Nov 12 17:39:02.440490 ignition[914]: INFO : mount: mount passed Nov 12 17:39:02.440490 ignition[914]: INFO : Ignition finished successfully Nov 12 17:39:02.438466 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Nov 12 17:39:02.439697 systemd[1]: Finished ignition-mount.service - Ignition (mount). Nov 12 17:39:02.450927 systemd[1]: Starting ignition-files.service - Ignition (files)... Nov 12 17:39:02.847788 systemd[1]: sysroot-oem.mount: Deactivated successfully. Nov 12 17:39:02.862026 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Nov 12 17:39:02.869619 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (926) Nov 12 17:39:02.869659 kernel: BTRFS info (device vda6): first mount of filesystem 936a2172-6c61-4af6-a047-e38e0a3ff18b Nov 12 17:39:02.869671 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Nov 12 17:39:02.870602 kernel: BTRFS info (device vda6): using free space tree Nov 12 17:39:02.873747 kernel: BTRFS info (device vda6): auto enabling async discard Nov 12 17:39:02.874980 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Nov 12 17:39:02.895131 ignition[943]: INFO : Ignition 2.19.0 Nov 12 17:39:02.895131 ignition[943]: INFO : Stage: files Nov 12 17:39:02.896782 ignition[943]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 12 17:39:02.896782 ignition[943]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Nov 12 17:39:02.896782 ignition[943]: DEBUG : files: compiled without relabeling support, skipping Nov 12 17:39:02.900127 ignition[943]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Nov 12 17:39:02.900127 ignition[943]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Nov 12 17:39:02.900127 ignition[943]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Nov 12 17:39:02.900127 ignition[943]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Nov 12 17:39:02.900127 ignition[943]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Nov 12 17:39:02.899465 unknown[943]: wrote ssh authorized keys file for user: core Nov 12 17:39:02.907559 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Nov 12 17:39:02.907559 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Nov 12 17:39:02.907559 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Nov 12 17:39:02.907559 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1 Nov 12 17:39:02.948719 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Nov 12 17:39:03.044677 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Nov 12 17:39:03.044677 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Nov 12 17:39:03.048388 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1 Nov 12 17:39:03.350161 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK Nov 12 17:39:03.432898 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Nov 12 17:39:03.432898 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/install.sh" Nov 12 17:39:03.436451 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/install.sh" Nov 12 17:39:03.436451 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nginx.yaml" Nov 12 17:39:03.436451 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nginx.yaml" Nov 12 17:39:03.436451 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Nov 12 17:39:03.436451 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Nov 12 17:39:03.436451 ignition[943]: 
INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Nov 12 17:39:03.436451 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Nov 12 17:39:03.436451 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/flatcar/update.conf" Nov 12 17:39:03.436451 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/flatcar/update.conf" Nov 12 17:39:03.436451 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw" Nov 12 17:39:03.436451 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw" Nov 12 17:39:03.436451 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw" Nov 12 17:39:03.436451 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-arm64.raw: attempt #1 Nov 12 17:39:03.488917 systemd-networkd[766]: eth0: Gained IPv6LL Nov 12 17:39:03.674908 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET result: OK Nov 12 17:39:03.941135 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw" Nov 12 17:39:03.941135 ignition[943]: INFO : files: op(d): [started] processing unit "containerd.service" Nov 12 17:39:03.944844 ignition[943]: INFO : files: op(d): op(e): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Nov 12 17:39:03.944844 ignition[943]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Nov 12 17:39:03.944844 ignition[943]: INFO : files: op(d): [finished] processing unit "containerd.service" Nov 12 17:39:03.944844 ignition[943]: INFO : files: op(f): [started] processing unit "prepare-helm.service" Nov 12 17:39:03.944844 ignition[943]: INFO : files: op(f): op(10): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Nov 12 17:39:03.944844 ignition[943]: INFO : files: op(f): op(10): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Nov 12 17:39:03.944844 ignition[943]: INFO : files: op(f): [finished] processing unit "prepare-helm.service" Nov 12 17:39:03.944844 ignition[943]: INFO : files: op(11): [started] processing unit "coreos-metadata.service" Nov 12 17:39:03.944844 ignition[943]: INFO : files: op(11): op(12): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Nov 12 17:39:03.944844 ignition[943]: INFO : files: op(11): op(12): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Nov 12 17:39:03.944844 ignition[943]: INFO : files: op(11): [finished] processing unit "coreos-metadata.service" Nov 12 17:39:03.944844 ignition[943]: INFO 
: files: op(13): [started] setting preset to disabled for "coreos-metadata.service" Nov 12 17:39:03.972107 ignition[943]: INFO : files: op(13): op(14): [started] removing enablement symlink(s) for "coreos-metadata.service" Nov 12 17:39:03.976190 ignition[943]: INFO : files: op(13): op(14): [finished] removing enablement symlink(s) for "coreos-metadata.service" Nov 12 17:39:03.978844 ignition[943]: INFO : files: op(13): [finished] setting preset to disabled for "coreos-metadata.service" Nov 12 17:39:03.978844 ignition[943]: INFO : files: op(15): [started] setting preset to enabled for "prepare-helm.service" Nov 12 17:39:03.978844 ignition[943]: INFO : files: op(15): [finished] setting preset to enabled for "prepare-helm.service" Nov 12 17:39:03.978844 ignition[943]: INFO : files: createResultFile: createFiles: op(16): [started] writing file "/sysroot/etc/.ignition-result.json" Nov 12 17:39:03.978844 ignition[943]: INFO : files: createResultFile: createFiles: op(16): [finished] writing file "/sysroot/etc/.ignition-result.json" Nov 12 17:39:03.978844 ignition[943]: INFO : files: files passed Nov 12 17:39:03.978844 ignition[943]: INFO : Ignition finished successfully Nov 12 17:39:03.980155 systemd[1]: Finished ignition-files.service - Ignition (files). Nov 12 17:39:03.993001 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Nov 12 17:39:03.995724 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Nov 12 17:39:03.998874 systemd[1]: ignition-quench.service: Deactivated successfully. Nov 12 17:39:03.998970 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Nov 12 17:39:04.003930 initrd-setup-root-after-ignition[970]: grep: /sysroot/oem/oem-release: No such file or directory Nov 12 17:39:04.005879 initrd-setup-root-after-ignition[973]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Nov 12 17:39:04.005879 initrd-setup-root-after-ignition[973]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Nov 12 17:39:04.008900 initrd-setup-root-after-ignition[977]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Nov 12 17:39:04.008022 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Nov 12 17:39:04.010347 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Nov 12 17:39:04.018037 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Nov 12 17:39:04.037009 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Nov 12 17:39:04.037121 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Nov 12 17:39:04.039277 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Nov 12 17:39:04.041139 systemd[1]: Reached target initrd.target - Initrd Default Target. Nov 12 17:39:04.042898 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Nov 12 17:39:04.043662 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Nov 12 17:39:04.059281 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Nov 12 17:39:04.068885 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Nov 12 17:39:04.076135 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. 
Nov 12 17:39:04.077335 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 12 17:39:04.079336 systemd[1]: Stopped target timers.target - Timer Units. Nov 12 17:39:04.081097 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Nov 12 17:39:04.081207 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Nov 12 17:39:04.083763 systemd[1]: Stopped target initrd.target - Initrd Default Target. Nov 12 17:39:04.085758 systemd[1]: Stopped target basic.target - Basic System. Nov 12 17:39:04.087407 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Nov 12 17:39:04.089100 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Nov 12 17:39:04.090983 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Nov 12 17:39:04.092875 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Nov 12 17:39:04.094676 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Nov 12 17:39:04.096620 systemd[1]: Stopped target sysinit.target - System Initialization. Nov 12 17:39:04.098562 systemd[1]: Stopped target local-fs.target - Local File Systems. Nov 12 17:39:04.100267 systemd[1]: Stopped target swap.target - Swaps. Nov 12 17:39:04.101760 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Nov 12 17:39:04.101874 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Nov 12 17:39:04.104190 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Nov 12 17:39:04.106085 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 12 17:39:04.107979 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Nov 12 17:39:04.108829 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 12 17:39:04.110052 systemd[1]: dracut-initqueue.service: Deactivated successfully. Nov 12 17:39:04.110161 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Nov 12 17:39:04.112834 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Nov 12 17:39:04.112942 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Nov 12 17:39:04.114930 systemd[1]: Stopped target paths.target - Path Units. Nov 12 17:39:04.116467 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Nov 12 17:39:04.120761 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 12 17:39:04.122007 systemd[1]: Stopped target slices.target - Slice Units. Nov 12 17:39:04.124052 systemd[1]: Stopped target sockets.target - Socket Units. Nov 12 17:39:04.125595 systemd[1]: iscsid.socket: Deactivated successfully. Nov 12 17:39:04.125678 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Nov 12 17:39:04.127205 systemd[1]: iscsiuio.socket: Deactivated successfully. Nov 12 17:39:04.127285 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Nov 12 17:39:04.128793 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Nov 12 17:39:04.128900 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Nov 12 17:39:04.130666 systemd[1]: ignition-files.service: Deactivated successfully. Nov 12 17:39:04.130779 systemd[1]: Stopped ignition-files.service - Ignition (files). Nov 12 17:39:04.142884 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... 
Nov 12 17:39:04.144491 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Nov 12 17:39:04.145411 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Nov 12 17:39:04.145542 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Nov 12 17:39:04.147450 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Nov 12 17:39:04.147561 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Nov 12 17:39:04.153950 systemd[1]: initrd-cleanup.service: Deactivated successfully. Nov 12 17:39:04.155738 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Nov 12 17:39:04.160589 ignition[999]: INFO : Ignition 2.19.0 Nov 12 17:39:04.160589 ignition[999]: INFO : Stage: umount Nov 12 17:39:04.160589 ignition[999]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 12 17:39:04.160589 ignition[999]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Nov 12 17:39:04.160589 ignition[999]: INFO : umount: umount passed Nov 12 17:39:04.160589 ignition[999]: INFO : Ignition finished successfully Nov 12 17:39:04.159989 systemd[1]: sysroot-boot.mount: Deactivated successfully. Nov 12 17:39:04.160793 systemd[1]: ignition-mount.service: Deactivated successfully. Nov 12 17:39:04.160883 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Nov 12 17:39:04.163497 systemd[1]: Stopped target network.target - Network. Nov 12 17:39:04.164684 systemd[1]: ignition-disks.service: Deactivated successfully. Nov 12 17:39:04.164781 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Nov 12 17:39:04.166595 systemd[1]: ignition-kargs.service: Deactivated successfully. Nov 12 17:39:04.166645 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Nov 12 17:39:04.168457 systemd[1]: ignition-setup.service: Deactivated successfully. Nov 12 17:39:04.168511 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Nov 12 17:39:04.170634 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Nov 12 17:39:04.170685 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Nov 12 17:39:04.173508 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Nov 12 17:39:04.175392 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Nov 12 17:39:04.187411 systemd[1]: systemd-resolved.service: Deactivated successfully. Nov 12 17:39:04.187539 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Nov 12 17:39:04.189521 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Nov 12 17:39:04.189581 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 12 17:39:04.191782 systemd-networkd[766]: eth0: DHCPv6 lease lost Nov 12 17:39:04.193407 systemd[1]: systemd-networkd.service: Deactivated successfully. Nov 12 17:39:04.193553 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Nov 12 17:39:04.195054 systemd[1]: systemd-networkd.socket: Deactivated successfully. Nov 12 17:39:04.195085 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Nov 12 17:39:04.206924 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Nov 12 17:39:04.207880 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Nov 12 17:39:04.207945 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Nov 12 17:39:04.209909 systemd[1]: systemd-sysctl.service: Deactivated successfully. 
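Once the system is up, the whole Ignition run above can be reviewed without rebooting, since the initrd journal is forwarded to the persistent one (flushed to /var/log/journal further down in this log). Two quick checks:

    journalctl -t ignition -o short-precise    # every message tagged with the ignition identifier, as seen above
    cat /etc/.ignition-result.json             # the summary file written by op(16)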
Nov 12 17:39:04.209960 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Nov 12 17:39:04.211807 systemd[1]: systemd-modules-load.service: Deactivated successfully. Nov 12 17:39:04.211859 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Nov 12 17:39:04.214008 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 12 17:39:04.234049 systemd[1]: systemd-udevd.service: Deactivated successfully. Nov 12 17:39:04.234186 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 12 17:39:04.236515 systemd[1]: sysroot-boot.service: Deactivated successfully. Nov 12 17:39:04.236596 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Nov 12 17:39:04.238290 systemd[1]: network-cleanup.service: Deactivated successfully. Nov 12 17:39:04.238365 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Nov 12 17:39:04.240888 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Nov 12 17:39:04.240935 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Nov 12 17:39:04.242477 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Nov 12 17:39:04.242513 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Nov 12 17:39:04.244308 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Nov 12 17:39:04.244357 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Nov 12 17:39:04.246994 systemd[1]: dracut-cmdline.service: Deactivated successfully. Nov 12 17:39:04.247039 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Nov 12 17:39:04.249547 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Nov 12 17:39:04.249592 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 12 17:39:04.251530 systemd[1]: initrd-setup-root.service: Deactivated successfully. Nov 12 17:39:04.251573 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Nov 12 17:39:04.261834 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Nov 12 17:39:04.262803 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Nov 12 17:39:04.262861 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 12 17:39:04.264815 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Nov 12 17:39:04.264860 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Nov 12 17:39:04.266745 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Nov 12 17:39:04.266789 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Nov 12 17:39:04.268838 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 12 17:39:04.268883 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 12 17:39:04.270968 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Nov 12 17:39:04.271041 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Nov 12 17:39:04.273262 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Nov 12 17:39:04.275343 systemd[1]: Starting initrd-switch-root.service - Switch Root... Nov 12 17:39:04.284313 systemd[1]: Switching root. Nov 12 17:39:04.308880 systemd-journald[238]: Journal stopped Nov 12 17:39:05.036993 systemd-journald[238]: Received SIGTERM from PID 1 (systemd). 
Nov 12 17:39:05.037050 kernel: SELinux: policy capability network_peer_controls=1 Nov 12 17:39:05.037067 kernel: SELinux: policy capability open_perms=1 Nov 12 17:39:05.037077 kernel: SELinux: policy capability extended_socket_class=1 Nov 12 17:39:05.037813 kernel: SELinux: policy capability always_check_network=0 Nov 12 17:39:05.037832 kernel: SELinux: policy capability cgroup_seclabel=1 Nov 12 17:39:05.037842 kernel: SELinux: policy capability nnp_nosuid_transition=1 Nov 12 17:39:05.037860 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Nov 12 17:39:05.037869 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Nov 12 17:39:05.037883 kernel: audit: type=1403 audit(1731433144.507:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Nov 12 17:39:05.037894 systemd[1]: Successfully loaded SELinux policy in 33.704ms. Nov 12 17:39:05.037914 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 9.346ms. Nov 12 17:39:05.037925 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Nov 12 17:39:05.037937 systemd[1]: Detected virtualization kvm. Nov 12 17:39:05.037947 systemd[1]: Detected architecture arm64. Nov 12 17:39:05.037963 systemd[1]: Detected first boot. Nov 12 17:39:05.037973 systemd[1]: Initializing machine ID from VM UUID. Nov 12 17:39:05.037986 zram_generator::config[1067]: No configuration found. Nov 12 17:39:05.037997 systemd[1]: Populated /etc with preset unit settings. Nov 12 17:39:05.038007 systemd[1]: Queued start job for default target multi-user.target. Nov 12 17:39:05.038018 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Nov 12 17:39:05.038032 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Nov 12 17:39:05.038043 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Nov 12 17:39:05.038054 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Nov 12 17:39:05.038064 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Nov 12 17:39:05.038075 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Nov 12 17:39:05.038088 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Nov 12 17:39:05.038099 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Nov 12 17:39:05.038109 systemd[1]: Created slice user.slice - User and Session Slice. Nov 12 17:39:05.038120 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 12 17:39:05.038130 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 12 17:39:05.038141 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Nov 12 17:39:05.038151 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Nov 12 17:39:05.038162 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Nov 12 17:39:05.038174 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... 
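The feature string, SELinux policy-load timing and machine-ID initialization above can be cross-checked from a shell once boot completes; for example:

    systemctl --version      # prints "systemd 255" plus the same +PAM +AUDIT +SELINUX ... feature flags
    cat /etc/machine-id      # the ID that was just initialized from the VM UUID
    systemd-analyze          # kernel/initrd/userspace timing for this first boot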
Nov 12 17:39:05.038185 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Nov 12 17:39:05.038196 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 12 17:39:05.038206 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Nov 12 17:39:05.038216 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 12 17:39:05.038227 systemd[1]: Reached target remote-fs.target - Remote File Systems. Nov 12 17:39:05.038237 systemd[1]: Reached target slices.target - Slice Units. Nov 12 17:39:05.038247 systemd[1]: Reached target swap.target - Swaps. Nov 12 17:39:05.038264 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Nov 12 17:39:05.038274 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Nov 12 17:39:05.038285 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Nov 12 17:39:05.038295 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Nov 12 17:39:05.038306 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Nov 12 17:39:05.038316 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Nov 12 17:39:05.038327 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Nov 12 17:39:05.038337 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Nov 12 17:39:05.038348 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Nov 12 17:39:05.038358 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Nov 12 17:39:05.038370 systemd[1]: Mounting media.mount - External Media Directory... Nov 12 17:39:05.038381 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Nov 12 17:39:05.038391 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Nov 12 17:39:05.038402 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Nov 12 17:39:05.038412 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Nov 12 17:39:05.038422 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 12 17:39:05.038433 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Nov 12 17:39:05.038444 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Nov 12 17:39:05.038456 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 12 17:39:05.038490 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Nov 12 17:39:05.038502 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 12 17:39:05.038516 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Nov 12 17:39:05.038526 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 12 17:39:05.038537 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Nov 12 17:39:05.038548 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Nov 12 17:39:05.038558 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.) Nov 12 17:39:05.038571 systemd[1]: Starting systemd-journald.service - Journal Service... 
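The modprobe@configfs/dm_mod/drm/efi_pstore/fuse/loop services listed above are all instances of a single template unit; the text after '@' is the module name handed to modprobe. On a running system the template backing any instance can be inspected with:

    systemctl cat modprobe@loop.service
    # the upstream template is essentially a oneshot service along these lines (paraphrased, not copied from this system):
    #   [Service]
    #   Type=oneshot
    #   ExecStart=-/sbin/modprobe -abq %I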
Nov 12 17:39:05.038582 kernel: fuse: init (API version 7.39) Nov 12 17:39:05.038592 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Nov 12 17:39:05.038602 kernel: loop: module loaded Nov 12 17:39:05.038612 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Nov 12 17:39:05.038622 kernel: ACPI: bus type drm_connector registered Nov 12 17:39:05.038632 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Nov 12 17:39:05.038642 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Nov 12 17:39:05.038675 systemd-journald[1145]: Collecting audit messages is disabled. Nov 12 17:39:05.038699 systemd-journald[1145]: Journal started Nov 12 17:39:05.038732 systemd-journald[1145]: Runtime Journal (/run/log/journal/d542115b0ce1487590eb8ce435d1e94e) is 5.9M, max 47.3M, 41.4M free. Nov 12 17:39:05.043140 systemd[1]: Started systemd-journald.service - Journal Service. Nov 12 17:39:05.044341 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Nov 12 17:39:05.045501 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Nov 12 17:39:05.046782 systemd[1]: Mounted media.mount - External Media Directory. Nov 12 17:39:05.047828 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Nov 12 17:39:05.049070 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Nov 12 17:39:05.050264 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Nov 12 17:39:05.051521 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Nov 12 17:39:05.052966 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Nov 12 17:39:05.054406 systemd[1]: modprobe@configfs.service: Deactivated successfully. Nov 12 17:39:05.054583 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Nov 12 17:39:05.056007 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 12 17:39:05.056165 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 12 17:39:05.057526 systemd[1]: modprobe@drm.service: Deactivated successfully. Nov 12 17:39:05.057680 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Nov 12 17:39:05.059237 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 12 17:39:05.059393 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 12 17:39:05.060984 systemd[1]: modprobe@fuse.service: Deactivated successfully. Nov 12 17:39:05.061140 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Nov 12 17:39:05.062429 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 12 17:39:05.062661 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 12 17:39:05.064189 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Nov 12 17:39:05.065651 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Nov 12 17:39:05.067387 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Nov 12 17:39:05.078629 systemd[1]: Reached target network-pre.target - Preparation for Network. Nov 12 17:39:05.088782 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Nov 12 17:39:05.090818 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... 
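The runtime journal size reported above (5.9M used, 47.3M cap) follows journald's default percentage-of-/run limits; if tighter control is wanted, the caps can be pinned in /etc/systemd/journald.conf. A minimal sketch with illustrative values:

    [Journal]
    RuntimeMaxUse=48M      # cap for the volatile journal under /run/log/journal
    SystemMaxUse=200M      # cap for the persistent journal under /var/log/journal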
Nov 12 17:39:05.091900 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Nov 12 17:39:05.094901 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Nov 12 17:39:05.096982 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Nov 12 17:39:05.098068 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 12 17:39:05.099916 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Nov 12 17:39:05.101112 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Nov 12 17:39:05.103918 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Nov 12 17:39:05.106314 systemd-journald[1145]: Time spent on flushing to /var/log/journal/d542115b0ce1487590eb8ce435d1e94e is 11.425ms for 849 entries. Nov 12 17:39:05.106314 systemd-journald[1145]: System Journal (/var/log/journal/d542115b0ce1487590eb8ce435d1e94e) is 8.0M, max 195.6M, 187.6M free. Nov 12 17:39:05.127281 systemd-journald[1145]: Received client request to flush runtime journal. Nov 12 17:39:05.106920 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Nov 12 17:39:05.113342 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Nov 12 17:39:05.114938 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Nov 12 17:39:05.116216 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Nov 12 17:39:05.117833 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Nov 12 17:39:05.121253 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Nov 12 17:39:05.134875 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Nov 12 17:39:05.136619 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Nov 12 17:39:05.138395 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Nov 12 17:39:05.142135 systemd-tmpfiles[1198]: ACLs are not supported, ignoring. Nov 12 17:39:05.142154 systemd-tmpfiles[1198]: ACLs are not supported, ignoring. Nov 12 17:39:05.147543 udevadm[1206]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Nov 12 17:39:05.149692 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Nov 12 17:39:05.165953 systemd[1]: Starting systemd-sysusers.service - Create System Users... Nov 12 17:39:05.185367 systemd[1]: Finished systemd-sysusers.service - Create System Users. Nov 12 17:39:05.201899 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Nov 12 17:39:05.212353 systemd-tmpfiles[1220]: ACLs are not supported, ignoring. Nov 12 17:39:05.212371 systemd-tmpfiles[1220]: ACLs are not supported, ignoring. Nov 12 17:39:05.216134 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 12 17:39:05.504497 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Nov 12 17:39:05.521842 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... 
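The repeated "ACLs are not supported, ignoring" notices come from systemd-tmpfiles meeting ACL-type lines while this systemd build advertises -ACL in its feature string (see the systemd 255 line above); the entries are skipped and nothing fails. An illustrative tmpfiles.d line of that kind (values are only an example):

    # Type  Path              Mode  User  Group  Age  Argument
    a+      /var/log/journal  -     -     -      -    d:group:adm:r-x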
Nov 12 17:39:05.540694 systemd-udevd[1226]: Using default interface naming scheme 'v255'. Nov 12 17:39:05.553436 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 12 17:39:05.565790 systemd[1]: Starting systemd-networkd.service - Network Configuration... Nov 12 17:39:05.581949 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1232) Nov 12 17:39:05.582043 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (1237) Nov 12 17:39:05.582857 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Nov 12 17:39:05.586845 kernel: BTRFS info: devid 1 device path /dev/dm-0 changed to /dev/mapper/usr scanned by (udev-worker) (1232) Nov 12 17:39:05.590266 systemd[1]: Found device dev-ttyAMA0.device - /dev/ttyAMA0. Nov 12 17:39:05.629990 systemd[1]: Started systemd-userdbd.service - User Database Manager. Nov 12 17:39:05.634951 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Nov 12 17:39:05.680533 systemd-networkd[1234]: lo: Link UP Nov 12 17:39:05.680539 systemd-networkd[1234]: lo: Gained carrier Nov 12 17:39:05.681287 systemd-networkd[1234]: Enumeration completed Nov 12 17:39:05.681393 systemd[1]: Started systemd-networkd.service - Network Configuration. Nov 12 17:39:05.684087 systemd-networkd[1234]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 12 17:39:05.684098 systemd-networkd[1234]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Nov 12 17:39:05.684685 systemd-networkd[1234]: eth0: Link UP Nov 12 17:39:05.684697 systemd-networkd[1234]: eth0: Gained carrier Nov 12 17:39:05.684719 systemd-networkd[1234]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 12 17:39:05.689870 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Nov 12 17:39:05.698180 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 12 17:39:05.709683 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Nov 12 17:39:05.710973 systemd-networkd[1234]: eth0: DHCPv4 address 10.0.0.22/16, gateway 10.0.0.1 acquired from 10.0.0.1 Nov 12 17:39:05.731134 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Nov 12 17:39:05.737588 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 12 17:39:05.739888 lvm[1265]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Nov 12 17:39:05.778054 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Nov 12 17:39:05.779531 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Nov 12 17:39:05.791914 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Nov 12 17:39:05.795155 lvm[1272]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Nov 12 17:39:05.826979 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Nov 12 17:39:05.828381 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. 
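eth0 is picked up by Flatcar's catch-all /usr/lib/systemd/network/zz-default.network, which is what enables DHCP here and yields the 10.0.0.22/16 lease from 10.0.0.1. That unit is roughly of the following shape (a sketch; the shipped file carries more options), and `networkctl status eth0` shows the resulting addresses and matching .network file:

    [Match]
    Name=*

    [Network]
    DHCP=yes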
Nov 12 17:39:05.829632 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Nov 12 17:39:05.829661 systemd[1]: Reached target local-fs.target - Local File Systems. Nov 12 17:39:05.830690 systemd[1]: Reached target machines.target - Containers. Nov 12 17:39:05.832602 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Nov 12 17:39:05.843840 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Nov 12 17:39:05.845919 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Nov 12 17:39:05.847051 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 12 17:39:05.847932 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Nov 12 17:39:05.851651 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Nov 12 17:39:05.854918 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Nov 12 17:39:05.858500 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Nov 12 17:39:05.862403 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Nov 12 17:39:05.870763 kernel: loop0: detected capacity change from 0 to 114328 Nov 12 17:39:05.872177 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Nov 12 17:39:05.872841 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Nov 12 17:39:05.880557 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Nov 12 17:39:05.912740 kernel: loop1: detected capacity change from 0 to 114432 Nov 12 17:39:05.945731 kernel: loop2: detected capacity change from 0 to 194512 Nov 12 17:39:05.979738 kernel: loop3: detected capacity change from 0 to 114328 Nov 12 17:39:05.984735 kernel: loop4: detected capacity change from 0 to 114432 Nov 12 17:39:05.990731 kernel: loop5: detected capacity change from 0 to 194512 Nov 12 17:39:05.996560 (sd-merge)[1293]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Nov 12 17:39:05.997092 (sd-merge)[1293]: Merged extensions into '/usr'. Nov 12 17:39:06.000787 systemd[1]: Reloading requested from client PID 1280 ('systemd-sysext') (unit systemd-sysext.service)... Nov 12 17:39:06.000804 systemd[1]: Reloading... Nov 12 17:39:06.047733 zram_generator::config[1321]: No configuration found. Nov 12 17:39:06.087937 ldconfig[1276]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Nov 12 17:39:06.138170 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 12 17:39:06.181050 systemd[1]: Reloading finished in 179 ms. Nov 12 17:39:06.196443 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Nov 12 17:39:06.198080 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Nov 12 17:39:06.212846 systemd[1]: Starting ensure-sysext.service... Nov 12 17:39:06.215089 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... 
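The (sd-merge) lines show systemd-sysext overlaying the containerd-flatcar, docker-flatcar and kubernetes images onto /usr; the kubernetes one is the raw image Ignition downloaded and linked under /etc/extensions earlier in this log. Useful checks on the booted host:

    systemd-sysext status      # lists merged extensions and the hierarchies they overlay
    systemd-sysext refresh     # re-merge after adding or removing images under /etc/extensions
    ls -l /etc/extensions/     # kubernetes.raw -> /opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw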
Nov 12 17:39:06.218203 systemd[1]: Reloading requested from client PID 1362 ('systemctl') (unit ensure-sysext.service)... Nov 12 17:39:06.218281 systemd[1]: Reloading... Nov 12 17:39:06.231076 systemd-tmpfiles[1363]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Nov 12 17:39:06.231347 systemd-tmpfiles[1363]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Nov 12 17:39:06.232015 systemd-tmpfiles[1363]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Nov 12 17:39:06.232244 systemd-tmpfiles[1363]: ACLs are not supported, ignoring. Nov 12 17:39:06.232293 systemd-tmpfiles[1363]: ACLs are not supported, ignoring. Nov 12 17:39:06.234661 systemd-tmpfiles[1363]: Detected autofs mount point /boot during canonicalization of boot. Nov 12 17:39:06.234670 systemd-tmpfiles[1363]: Skipping /boot Nov 12 17:39:06.241736 systemd-tmpfiles[1363]: Detected autofs mount point /boot during canonicalization of boot. Nov 12 17:39:06.241751 systemd-tmpfiles[1363]: Skipping /boot Nov 12 17:39:06.260735 zram_generator::config[1391]: No configuration found. Nov 12 17:39:06.350062 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 12 17:39:06.392328 systemd[1]: Reloading finished in 173 ms. Nov 12 17:39:06.407384 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 12 17:39:06.423264 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Nov 12 17:39:06.425767 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Nov 12 17:39:06.427988 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Nov 12 17:39:06.430872 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Nov 12 17:39:06.435880 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Nov 12 17:39:06.443590 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 12 17:39:06.446520 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 12 17:39:06.449658 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 12 17:39:06.452007 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 12 17:39:06.455738 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 12 17:39:06.456589 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Nov 12 17:39:06.458301 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 12 17:39:06.458506 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 12 17:39:06.471377 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 12 17:39:06.473970 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 12 17:39:06.475923 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 12 17:39:06.478930 systemd[1]: Starting systemd-update-done.service - Update is Completed... 
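The "Duplicate line for path" messages are benign: two tmpfiles.d fragments declare the same path and only the first is applied. The merged configuration, with each fragment prefixed by a comment naming its source file, can be dumped to locate the overlap:

    systemd-tmpfiles --cat-config > /tmp/tmpfiles-merged.conf   # fragments appear under "# <path>" headers
    grep -n ' /root ' /tmp/tmpfiles-merged.conf                 # then scroll up to the nearest header to see the source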
Nov 12 17:39:06.484563 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Nov 12 17:39:06.486527 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 12 17:39:06.486682 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 12 17:39:06.489507 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 12 17:39:06.489652 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 12 17:39:06.491410 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Nov 12 17:39:06.494213 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 12 17:39:06.494686 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 12 17:39:06.496411 systemd[1]: Finished systemd-update-done.service - Update is Completed. Nov 12 17:39:06.503070 augenrules[1469]: No rules Nov 12 17:39:06.505438 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Nov 12 17:39:06.508021 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 12 17:39:06.516953 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 12 17:39:06.519120 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Nov 12 17:39:06.524011 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 12 17:39:06.524542 systemd-resolved[1438]: Positive Trust Anchors: Nov 12 17:39:06.524553 systemd-resolved[1438]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Nov 12 17:39:06.524589 systemd-resolved[1438]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Nov 12 17:39:06.527932 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 12 17:39:06.529144 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 12 17:39:06.529286 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Nov 12 17:39:06.530176 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 12 17:39:06.530325 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 12 17:39:06.530693 systemd-resolved[1438]: Defaulting to hostname 'linux'. Nov 12 17:39:06.531963 systemd[1]: modprobe@drm.service: Deactivated successfully. Nov 12 17:39:06.532115 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Nov 12 17:39:06.533489 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Nov 12 17:39:06.534990 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 12 17:39:06.535133 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 12 17:39:06.536894 systemd[1]: modprobe@loop.service: Deactivated successfully. 
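systemd-resolved comes up with the built-in DNSSEC root trust anchor and the standard negative trust anchors for private and reverse zones listed above, and falls back to the hostname 'linux' because none was configured. Current resolver state can be checked with:

    resolvectl status    # per-link DNS servers, DNSSEC mode, search domains
    hostnamectl          # confirms the transient hostname the fallback produced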
Nov 12 17:39:06.537087 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 12 17:39:06.540626 systemd[1]: Finished ensure-sysext.service. Nov 12 17:39:06.544043 systemd[1]: Reached target network.target - Network. Nov 12 17:39:06.545194 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Nov 12 17:39:06.546362 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 12 17:39:06.546431 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Nov 12 17:39:06.558935 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Nov 12 17:39:06.605478 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Nov 12 17:39:06.606219 systemd-timesyncd[1498]: Contacted time server 10.0.0.1:123 (10.0.0.1). Nov 12 17:39:06.606271 systemd-timesyncd[1498]: Initial clock synchronization to Tue 2024-11-12 17:39:06.604195 UTC. Nov 12 17:39:06.607096 systemd[1]: Reached target sysinit.target - System Initialization. Nov 12 17:39:06.608233 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Nov 12 17:39:06.609458 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Nov 12 17:39:06.610732 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Nov 12 17:39:06.611954 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Nov 12 17:39:06.611995 systemd[1]: Reached target paths.target - Path Units. Nov 12 17:39:06.612893 systemd[1]: Reached target time-set.target - System Time Set. Nov 12 17:39:06.614025 systemd[1]: Started logrotate.timer - Daily rotation of log files. Nov 12 17:39:06.615180 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Nov 12 17:39:06.616408 systemd[1]: Reached target timers.target - Timer Units. Nov 12 17:39:06.618075 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Nov 12 17:39:06.620547 systemd[1]: Starting docker.socket - Docker Socket for the API... Nov 12 17:39:06.622777 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Nov 12 17:39:06.630757 systemd[1]: Listening on docker.socket - Docker Socket for the API. Nov 12 17:39:06.631815 systemd[1]: Reached target sockets.target - Socket Units. Nov 12 17:39:06.632765 systemd[1]: Reached target basic.target - Basic System. Nov 12 17:39:06.633835 systemd[1]: System is tainted: cgroupsv1 Nov 12 17:39:06.633884 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Nov 12 17:39:06.633904 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Nov 12 17:39:06.635032 systemd[1]: Starting containerd.service - containerd container runtime... Nov 12 17:39:06.637106 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Nov 12 17:39:06.638996 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Nov 12 17:39:06.645107 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... 
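systemd-timesyncd contacts 10.0.0.1:123 (most likely the NTP server handed out with the DHCP lease, since 10.0.0.1 is also the gateway) and steps the clock once at startup, which is the "Initial clock synchronization" entry above; the "System is tainted: cgroupsv1" note simply records that this image still boots the legacy cgroup hierarchy. Sync state can be inspected with:

    timedatectl timesync-status    # server, poll interval, last offset
    timedatectl status             # overall clock and NTP-enabled state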
Nov 12 17:39:06.646139 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Nov 12 17:39:06.647238 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Nov 12 17:39:06.652067 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Nov 12 17:39:06.656539 jq[1504]: false Nov 12 17:39:06.656834 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Nov 12 17:39:06.660947 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Nov 12 17:39:06.667904 systemd[1]: Starting systemd-logind.service - User Login Management... Nov 12 17:39:06.672999 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Nov 12 17:39:06.673896 extend-filesystems[1506]: Found loop3 Nov 12 17:39:06.674935 extend-filesystems[1506]: Found loop4 Nov 12 17:39:06.674935 extend-filesystems[1506]: Found loop5 Nov 12 17:39:06.674935 extend-filesystems[1506]: Found vda Nov 12 17:39:06.674935 extend-filesystems[1506]: Found vda1 Nov 12 17:39:06.674935 extend-filesystems[1506]: Found vda2 Nov 12 17:39:06.674935 extend-filesystems[1506]: Found vda3 Nov 12 17:39:06.674935 extend-filesystems[1506]: Found usr Nov 12 17:39:06.674935 extend-filesystems[1506]: Found vda4 Nov 12 17:39:06.674935 extend-filesystems[1506]: Found vda6 Nov 12 17:39:06.674935 extend-filesystems[1506]: Found vda7 Nov 12 17:39:06.674935 extend-filesystems[1506]: Found vda9 Nov 12 17:39:06.674935 extend-filesystems[1506]: Checking size of /dev/vda9 Nov 12 17:39:06.682933 systemd[1]: Starting update-engine.service - Update Engine... Nov 12 17:39:06.680621 dbus-daemon[1503]: [system] SELinux support is enabled Nov 12 17:39:06.688303 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Nov 12 17:39:06.692298 systemd[1]: Started dbus.service - D-Bus System Message Bus. Nov 12 17:39:06.696512 extend-filesystems[1506]: Resized partition /dev/vda9 Nov 12 17:39:06.697343 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Nov 12 17:39:06.697674 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Nov 12 17:39:06.698232 systemd[1]: motdgen.service: Deactivated successfully. Nov 12 17:39:06.698431 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Nov 12 17:39:06.703315 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Nov 12 17:39:06.703538 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Nov 12 17:39:06.704485 extend-filesystems[1532]: resize2fs 1.47.1 (20-May-2024) Nov 12 17:39:06.715872 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (1238) Nov 12 17:39:06.716214 (ntainerd)[1535]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Nov 12 17:39:06.719414 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Nov 12 17:39:06.724481 jq[1528]: true Nov 12 17:39:06.741326 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). 
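extend-filesystems enumerates the block devices above and then grows the root filesystem on /dev/vda9 online (the resize2fs/EXT4 messages appear just below, 553472 -> 1864699 blocks). Done by hand, and assuming the partition itself has already been enlarged, the equivalent is roughly:

    lsblk /dev/vda            # confirm vda9 is the partition mounted at /
    sudo resize2fs /dev/vda9  # online-grow the ext4 filesystem to fill the partition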
Nov 12 17:39:06.741375 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Nov 12 17:39:06.746990 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Nov 12 17:39:06.747019 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Nov 12 17:39:06.748767 update_engine[1522]: I20241112 17:39:06.748174 1522 main.cc:92] Flatcar Update Engine starting Nov 12 17:39:06.757649 jq[1544]: true Nov 12 17:39:06.758155 systemd[1]: Started update-engine.service - Update Engine. Nov 12 17:39:06.758749 update_engine[1522]: I20241112 17:39:06.758059 1522 update_check_scheduler.cc:74] Next update check in 3m8s Nov 12 17:39:06.761549 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Nov 12 17:39:06.764146 tar[1533]: linux-arm64/helm Nov 12 17:39:06.767073 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Nov 12 17:39:06.767084 systemd[1]: Started locksmithd.service - Cluster reboot manager. Nov 12 17:39:06.777857 systemd-logind[1515]: Watching system buttons on /dev/input/event0 (Power Button) Nov 12 17:39:06.778592 extend-filesystems[1532]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Nov 12 17:39:06.778592 extend-filesystems[1532]: old_desc_blocks = 1, new_desc_blocks = 1 Nov 12 17:39:06.778592 extend-filesystems[1532]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Nov 12 17:39:06.798181 extend-filesystems[1506]: Resized filesystem in /dev/vda9 Nov 12 17:39:06.783843 systemd-logind[1515]: New seat seat0. Nov 12 17:39:06.784123 systemd[1]: extend-filesystems.service: Deactivated successfully. Nov 12 17:39:06.784368 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Nov 12 17:39:06.788888 systemd[1]: Started systemd-logind.service - User Login Management. Nov 12 17:39:06.822144 bash[1572]: Updated "/home/core/.ssh/authorized_keys" Nov 12 17:39:06.824364 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Nov 12 17:39:06.826977 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Nov 12 17:39:06.832022 locksmithd[1550]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Nov 12 17:39:06.914770 containerd[1535]: time="2024-11-12T17:39:06.914499480Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Nov 12 17:39:06.941644 containerd[1535]: time="2024-11-12T17:39:06.940879320Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Nov 12 17:39:06.943302 containerd[1535]: time="2024-11-12T17:39:06.943266040Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.60-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Nov 12 17:39:06.943387 containerd[1535]: time="2024-11-12T17:39:06.943372240Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Nov 12 17:39:06.943514 containerd[1535]: time="2024-11-12T17:39:06.943495760Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." 
type=io.containerd.internal.v1 Nov 12 17:39:06.943796 containerd[1535]: time="2024-11-12T17:39:06.943774880Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Nov 12 17:39:06.943879 containerd[1535]: time="2024-11-12T17:39:06.943865240Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Nov 12 17:39:06.944057 containerd[1535]: time="2024-11-12T17:39:06.944036280Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Nov 12 17:39:06.944119 containerd[1535]: time="2024-11-12T17:39:06.944105840Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Nov 12 17:39:06.944495 containerd[1535]: time="2024-11-12T17:39:06.944471000Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Nov 12 17:39:06.944563 containerd[1535]: time="2024-11-12T17:39:06.944550720Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Nov 12 17:39:06.944615 containerd[1535]: time="2024-11-12T17:39:06.944601960Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Nov 12 17:39:06.944736 containerd[1535]: time="2024-11-12T17:39:06.944718640Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Nov 12 17:39:06.944927 containerd[1535]: time="2024-11-12T17:39:06.944909240Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Nov 12 17:39:06.945207 containerd[1535]: time="2024-11-12T17:39:06.945185680Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Nov 12 17:39:06.945700 containerd[1535]: time="2024-11-12T17:39:06.945473320Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Nov 12 17:39:06.945700 containerd[1535]: time="2024-11-12T17:39:06.945546920Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Nov 12 17:39:06.945700 containerd[1535]: time="2024-11-12T17:39:06.945627640Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Nov 12 17:39:06.945700 containerd[1535]: time="2024-11-12T17:39:06.945667120Z" level=info msg="metadata content store policy set" policy=shared Nov 12 17:39:06.949256 containerd[1535]: time="2024-11-12T17:39:06.949231240Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Nov 12 17:39:06.949416 containerd[1535]: time="2024-11-12T17:39:06.949397880Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Nov 12 17:39:06.949516 containerd[1535]: time="2024-11-12T17:39:06.949481320Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." 
type=io.containerd.lease.v1 Nov 12 17:39:06.949853 containerd[1535]: time="2024-11-12T17:39:06.949592960Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Nov 12 17:39:06.949853 containerd[1535]: time="2024-11-12T17:39:06.949616000Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Nov 12 17:39:06.949853 containerd[1535]: time="2024-11-12T17:39:06.949758560Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Nov 12 17:39:06.950325 containerd[1535]: time="2024-11-12T17:39:06.950299480Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Nov 12 17:39:06.950455 containerd[1535]: time="2024-11-12T17:39:06.950435200Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Nov 12 17:39:06.950496 containerd[1535]: time="2024-11-12T17:39:06.950457600Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Nov 12 17:39:06.950496 containerd[1535]: time="2024-11-12T17:39:06.950484440Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Nov 12 17:39:06.950547 containerd[1535]: time="2024-11-12T17:39:06.950500080Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Nov 12 17:39:06.950547 containerd[1535]: time="2024-11-12T17:39:06.950513920Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Nov 12 17:39:06.950547 containerd[1535]: time="2024-11-12T17:39:06.950525880Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Nov 12 17:39:06.950547 containerd[1535]: time="2024-11-12T17:39:06.950539640Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Nov 12 17:39:06.950626 containerd[1535]: time="2024-11-12T17:39:06.950553560Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Nov 12 17:39:06.950626 containerd[1535]: time="2024-11-12T17:39:06.950566640Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Nov 12 17:39:06.950626 containerd[1535]: time="2024-11-12T17:39:06.950578240Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Nov 12 17:39:06.950626 containerd[1535]: time="2024-11-12T17:39:06.950589960Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Nov 12 17:39:06.950626 containerd[1535]: time="2024-11-12T17:39:06.950614440Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Nov 12 17:39:06.950725 containerd[1535]: time="2024-11-12T17:39:06.950627720Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Nov 12 17:39:06.950725 containerd[1535]: time="2024-11-12T17:39:06.950639120Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Nov 12 17:39:06.950725 containerd[1535]: time="2024-11-12T17:39:06.950650360Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." 
type=io.containerd.grpc.v1 Nov 12 17:39:06.950725 containerd[1535]: time="2024-11-12T17:39:06.950661720Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Nov 12 17:39:06.950725 containerd[1535]: time="2024-11-12T17:39:06.950674360Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Nov 12 17:39:06.950725 containerd[1535]: time="2024-11-12T17:39:06.950685680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Nov 12 17:39:06.950725 containerd[1535]: time="2024-11-12T17:39:06.950698080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Nov 12 17:39:06.950849 containerd[1535]: time="2024-11-12T17:39:06.950737040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Nov 12 17:39:06.950849 containerd[1535]: time="2024-11-12T17:39:06.950752320Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Nov 12 17:39:06.950849 containerd[1535]: time="2024-11-12T17:39:06.950764360Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Nov 12 17:39:06.950849 containerd[1535]: time="2024-11-12T17:39:06.950775840Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Nov 12 17:39:06.950849 containerd[1535]: time="2024-11-12T17:39:06.950787440Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Nov 12 17:39:06.950849 containerd[1535]: time="2024-11-12T17:39:06.950802840Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Nov 12 17:39:06.950849 containerd[1535]: time="2024-11-12T17:39:06.950822640Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Nov 12 17:39:06.950849 containerd[1535]: time="2024-11-12T17:39:06.950834960Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Nov 12 17:39:06.950849 containerd[1535]: time="2024-11-12T17:39:06.950845640Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Nov 12 17:39:06.951002 containerd[1535]: time="2024-11-12T17:39:06.950958320Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Nov 12 17:39:06.951002 containerd[1535]: time="2024-11-12T17:39:06.950975080Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Nov 12 17:39:06.951002 containerd[1535]: time="2024-11-12T17:39:06.950985240Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Nov 12 17:39:06.951002 containerd[1535]: time="2024-11-12T17:39:06.950997560Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Nov 12 17:39:06.951074 containerd[1535]: time="2024-11-12T17:39:06.951006960Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Nov 12 17:39:06.951074 containerd[1535]: time="2024-11-12T17:39:06.951019640Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." 
type=io.containerd.nri.v1 Nov 12 17:39:06.951074 containerd[1535]: time="2024-11-12T17:39:06.951029200Z" level=info msg="NRI interface is disabled by configuration." Nov 12 17:39:06.951074 containerd[1535]: time="2024-11-12T17:39:06.951039040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Nov 12 17:39:06.951344 containerd[1535]: time="2024-11-12T17:39:06.951287520Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Nov 12 17:39:06.951459 containerd[1535]: time="2024-11-12T17:39:06.951348240Z" level=info msg="Connect containerd service" Nov 12 17:39:06.951459 containerd[1535]: time="2024-11-12T17:39:06.951372240Z" level=info msg="using legacy CRI server" Nov 12 17:39:06.951459 containerd[1535]: time="2024-11-12T17:39:06.951378680Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Nov 12 17:39:06.951459 containerd[1535]: time="2024-11-12T17:39:06.951454600Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Nov 12 17:39:06.952042 
containerd[1535]: time="2024-11-12T17:39:06.952015800Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Nov 12 17:39:06.952392 containerd[1535]: time="2024-11-12T17:39:06.952335800Z" level=info msg="Start subscribing containerd event" Nov 12 17:39:06.952544 containerd[1535]: time="2024-11-12T17:39:06.952488160Z" level=info msg="Start recovering state" Nov 12 17:39:06.952688 containerd[1535]: time="2024-11-12T17:39:06.952613760Z" level=info msg="Start event monitor" Nov 12 17:39:06.952688 containerd[1535]: time="2024-11-12T17:39:06.952630480Z" level=info msg="Start snapshots syncer" Nov 12 17:39:06.952914 containerd[1535]: time="2024-11-12T17:39:06.952488000Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Nov 12 17:39:06.952914 containerd[1535]: time="2024-11-12T17:39:06.952883480Z" level=info msg=serving... address=/run/containerd/containerd.sock Nov 12 17:39:06.953048 containerd[1535]: time="2024-11-12T17:39:06.953031720Z" level=info msg="Start cni network conf syncer for default" Nov 12 17:39:06.953097 containerd[1535]: time="2024-11-12T17:39:06.953087200Z" level=info msg="Start streaming server" Nov 12 17:39:06.953938 containerd[1535]: time="2024-11-12T17:39:06.953459200Z" level=info msg="containerd successfully booted in 0.040326s" Nov 12 17:39:06.953586 systemd[1]: Started containerd.service - containerd container runtime. Nov 12 17:39:07.120554 tar[1533]: linux-arm64/LICENSE Nov 12 17:39:07.120654 tar[1533]: linux-arm64/README.md Nov 12 17:39:07.131021 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Nov 12 17:39:07.520922 systemd-networkd[1234]: eth0: Gained IPv6LL Nov 12 17:39:07.523293 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Nov 12 17:39:07.525218 systemd[1]: Reached target network-online.target - Network is Online. Nov 12 17:39:07.534171 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Nov 12 17:39:07.536632 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 12 17:39:07.538901 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Nov 12 17:39:07.559116 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Nov 12 17:39:07.564044 systemd[1]: coreos-metadata.service: Deactivated successfully. Nov 12 17:39:07.564265 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Nov 12 17:39:07.565677 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Nov 12 17:39:07.621135 sshd_keygen[1526]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Nov 12 17:39:07.642623 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Nov 12 17:39:07.654045 systemd[1]: Starting issuegen.service - Generate /run/issue... Nov 12 17:39:07.659343 systemd[1]: issuegen.service: Deactivated successfully. Nov 12 17:39:07.659570 systemd[1]: Finished issuegen.service - Generate /run/issue. Nov 12 17:39:07.662733 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Nov 12 17:39:07.676755 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Nov 12 17:39:07.686028 systemd[1]: Started getty@tty1.service - Getty on tty1. 
Nov 12 17:39:07.688141 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Nov 12 17:39:07.689514 systemd[1]: Reached target getty.target - Login Prompts. Nov 12 17:39:08.014227 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 12 17:39:08.015778 systemd[1]: Reached target multi-user.target - Multi-User System. Nov 12 17:39:08.018135 (kubelet)[1639]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 12 17:39:08.021793 systemd[1]: Startup finished in 5.367s (kernel) + 3.549s (userspace) = 8.917s. Nov 12 17:39:08.511588 kubelet[1639]: E1112 17:39:08.511431 1639 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 12 17:39:08.514076 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 12 17:39:08.514249 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 12 17:39:12.533174 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Nov 12 17:39:12.541999 systemd[1]: Started sshd@0-10.0.0.22:22-10.0.0.1:43212.service - OpenSSH per-connection server daemon (10.0.0.1:43212). Nov 12 17:39:12.590087 sshd[1653]: Accepted publickey for core from 10.0.0.1 port 43212 ssh2: RSA SHA256:0/Njp3Vk1MHv0WcCO9/UA+beq4MlL3BRl9mBP4xwGAg Nov 12 17:39:12.593593 sshd[1653]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 17:39:12.608514 systemd-logind[1515]: New session 1 of user core. Nov 12 17:39:12.609333 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Nov 12 17:39:12.619018 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Nov 12 17:39:12.628762 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Nov 12 17:39:12.631334 systemd[1]: Starting user@500.service - User Manager for UID 500... Nov 12 17:39:12.638270 (systemd)[1659]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Nov 12 17:39:12.710690 systemd[1659]: Queued start job for default target default.target. Nov 12 17:39:12.711086 systemd[1659]: Created slice app.slice - User Application Slice. Nov 12 17:39:12.711108 systemd[1659]: Reached target paths.target - Paths. Nov 12 17:39:12.711121 systemd[1659]: Reached target timers.target - Timers. Nov 12 17:39:12.729881 systemd[1659]: Starting dbus.socket - D-Bus User Message Bus Socket... Nov 12 17:39:12.736812 systemd[1659]: Listening on dbus.socket - D-Bus User Message Bus Socket. Nov 12 17:39:12.736887 systemd[1659]: Reached target sockets.target - Sockets. Nov 12 17:39:12.736899 systemd[1659]: Reached target basic.target - Basic System. Nov 12 17:39:12.736941 systemd[1659]: Reached target default.target - Main User Target. Nov 12 17:39:12.736968 systemd[1659]: Startup finished in 93ms. Nov 12 17:39:12.737110 systemd[1]: Started user@500.service - User Manager for UID 500. Nov 12 17:39:12.738325 systemd[1]: Started session-1.scope - Session 1 of User core. Nov 12 17:39:12.793951 systemd[1]: Started sshd@1-10.0.0.22:22-10.0.0.1:43218.service - OpenSSH per-connection server daemon (10.0.0.1:43218). 
Nov 12 17:39:12.833280 sshd[1671]: Accepted publickey for core from 10.0.0.1 port 43218 ssh2: RSA SHA256:0/Njp3Vk1MHv0WcCO9/UA+beq4MlL3BRl9mBP4xwGAg Nov 12 17:39:12.834604 sshd[1671]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 17:39:12.839759 systemd-logind[1515]: New session 2 of user core. Nov 12 17:39:12.846979 systemd[1]: Started session-2.scope - Session 2 of User core. Nov 12 17:39:12.899386 sshd[1671]: pam_unix(sshd:session): session closed for user core Nov 12 17:39:12.913069 systemd[1]: Started sshd@2-10.0.0.22:22-10.0.0.1:43222.service - OpenSSH per-connection server daemon (10.0.0.1:43222). Nov 12 17:39:12.913492 systemd[1]: sshd@1-10.0.0.22:22-10.0.0.1:43218.service: Deactivated successfully. Nov 12 17:39:12.915212 systemd-logind[1515]: Session 2 logged out. Waiting for processes to exit. Nov 12 17:39:12.915837 systemd[1]: session-2.scope: Deactivated successfully. Nov 12 17:39:12.917089 systemd-logind[1515]: Removed session 2. Nov 12 17:39:12.947646 sshd[1676]: Accepted publickey for core from 10.0.0.1 port 43222 ssh2: RSA SHA256:0/Njp3Vk1MHv0WcCO9/UA+beq4MlL3BRl9mBP4xwGAg Nov 12 17:39:12.948953 sshd[1676]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 17:39:12.952757 systemd-logind[1515]: New session 3 of user core. Nov 12 17:39:12.967045 systemd[1]: Started session-3.scope - Session 3 of User core. Nov 12 17:39:13.014866 sshd[1676]: pam_unix(sshd:session): session closed for user core Nov 12 17:39:13.023984 systemd[1]: Started sshd@3-10.0.0.22:22-10.0.0.1:43234.service - OpenSSH per-connection server daemon (10.0.0.1:43234). Nov 12 17:39:13.024400 systemd[1]: sshd@2-10.0.0.22:22-10.0.0.1:43222.service: Deactivated successfully. Nov 12 17:39:13.026987 systemd-logind[1515]: Session 3 logged out. Waiting for processes to exit. Nov 12 17:39:13.027054 systemd[1]: session-3.scope: Deactivated successfully. Nov 12 17:39:13.028344 systemd-logind[1515]: Removed session 3. Nov 12 17:39:13.058014 sshd[1684]: Accepted publickey for core from 10.0.0.1 port 43234 ssh2: RSA SHA256:0/Njp3Vk1MHv0WcCO9/UA+beq4MlL3BRl9mBP4xwGAg Nov 12 17:39:13.059145 sshd[1684]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 17:39:13.063585 systemd-logind[1515]: New session 4 of user core. Nov 12 17:39:13.075977 systemd[1]: Started session-4.scope - Session 4 of User core. Nov 12 17:39:13.128544 sshd[1684]: pam_unix(sshd:session): session closed for user core Nov 12 17:39:13.136942 systemd[1]: Started sshd@4-10.0.0.22:22-10.0.0.1:43248.service - OpenSSH per-connection server daemon (10.0.0.1:43248). Nov 12 17:39:13.137302 systemd[1]: sshd@3-10.0.0.22:22-10.0.0.1:43234.service: Deactivated successfully. Nov 12 17:39:13.139628 systemd[1]: session-4.scope: Deactivated successfully. Nov 12 17:39:13.140204 systemd-logind[1515]: Session 4 logged out. Waiting for processes to exit. Nov 12 17:39:13.141161 systemd-logind[1515]: Removed session 4. Nov 12 17:39:13.171064 sshd[1692]: Accepted publickey for core from 10.0.0.1 port 43248 ssh2: RSA SHA256:0/Njp3Vk1MHv0WcCO9/UA+beq4MlL3BRl9mBP4xwGAg Nov 12 17:39:13.172265 sshd[1692]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 17:39:13.176747 systemd-logind[1515]: New session 5 of user core. Nov 12 17:39:13.186966 systemd[1]: Started session-5.scope - Session 5 of User core. 
Nov 12 17:39:13.244401 sudo[1699]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Nov 12 17:39:13.244668 sudo[1699]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 12 17:39:13.266661 sudo[1699]: pam_unix(sudo:session): session closed for user root Nov 12 17:39:13.268308 sshd[1692]: pam_unix(sshd:session): session closed for user core Nov 12 17:39:13.280944 systemd[1]: Started sshd@5-10.0.0.22:22-10.0.0.1:43250.service - OpenSSH per-connection server daemon (10.0.0.1:43250). Nov 12 17:39:13.281303 systemd[1]: sshd@4-10.0.0.22:22-10.0.0.1:43248.service: Deactivated successfully. Nov 12 17:39:13.283633 systemd[1]: session-5.scope: Deactivated successfully. Nov 12 17:39:13.284237 systemd-logind[1515]: Session 5 logged out. Waiting for processes to exit. Nov 12 17:39:13.285163 systemd-logind[1515]: Removed session 5. Nov 12 17:39:13.315155 sshd[1701]: Accepted publickey for core from 10.0.0.1 port 43250 ssh2: RSA SHA256:0/Njp3Vk1MHv0WcCO9/UA+beq4MlL3BRl9mBP4xwGAg Nov 12 17:39:13.316935 sshd[1701]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 17:39:13.320916 systemd-logind[1515]: New session 6 of user core. Nov 12 17:39:13.331962 systemd[1]: Started session-6.scope - Session 6 of User core. Nov 12 17:39:13.383511 sudo[1709]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Nov 12 17:39:13.384149 sudo[1709]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 12 17:39:13.387417 sudo[1709]: pam_unix(sudo:session): session closed for user root Nov 12 17:39:13.392423 sudo[1708]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Nov 12 17:39:13.392700 sudo[1708]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 12 17:39:13.417990 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Nov 12 17:39:13.419415 auditctl[1712]: No rules Nov 12 17:39:13.420325 systemd[1]: audit-rules.service: Deactivated successfully. Nov 12 17:39:13.420594 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Nov 12 17:39:13.422414 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Nov 12 17:39:13.447222 augenrules[1731]: No rules Nov 12 17:39:13.448553 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Nov 12 17:39:13.450183 sudo[1708]: pam_unix(sudo:session): session closed for user root Nov 12 17:39:13.452587 sshd[1701]: pam_unix(sshd:session): session closed for user core Nov 12 17:39:13.468934 systemd[1]: Started sshd@6-10.0.0.22:22-10.0.0.1:43262.service - OpenSSH per-connection server daemon (10.0.0.1:43262). Nov 12 17:39:13.469294 systemd[1]: sshd@5-10.0.0.22:22-10.0.0.1:43250.service: Deactivated successfully. Nov 12 17:39:13.470946 systemd-logind[1515]: Session 6 logged out. Waiting for processes to exit. Nov 12 17:39:13.471575 systemd[1]: session-6.scope: Deactivated successfully. Nov 12 17:39:13.473012 systemd-logind[1515]: Removed session 6. Nov 12 17:39:13.505235 sshd[1737]: Accepted publickey for core from 10.0.0.1 port 43262 ssh2: RSA SHA256:0/Njp3Vk1MHv0WcCO9/UA+beq4MlL3BRl9mBP4xwGAg Nov 12 17:39:13.506416 sshd[1737]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 17:39:13.510356 systemd-logind[1515]: New session 7 of user core. Nov 12 17:39:13.519958 systemd[1]: Started session-7.scope - Session 7 of User core. 
Nov 12 17:39:13.570044 sudo[1744]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Nov 12 17:39:13.570336 sudo[1744]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 12 17:39:13.872941 systemd[1]: Starting docker.service - Docker Application Container Engine... Nov 12 17:39:13.873149 (dockerd)[1763]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Nov 12 17:39:14.136522 dockerd[1763]: time="2024-11-12T17:39:14.136388614Z" level=info msg="Starting up" Nov 12 17:39:14.378643 dockerd[1763]: time="2024-11-12T17:39:14.378592867Z" level=info msg="Loading containers: start." Nov 12 17:39:14.461738 kernel: Initializing XFRM netlink socket Nov 12 17:39:14.527198 systemd-networkd[1234]: docker0: Link UP Nov 12 17:39:14.554178 dockerd[1763]: time="2024-11-12T17:39:14.554112420Z" level=info msg="Loading containers: done." Nov 12 17:39:14.565552 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1364004445-merged.mount: Deactivated successfully. Nov 12 17:39:14.566519 dockerd[1763]: time="2024-11-12T17:39:14.566480827Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Nov 12 17:39:14.566598 dockerd[1763]: time="2024-11-12T17:39:14.566574220Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Nov 12 17:39:14.566774 dockerd[1763]: time="2024-11-12T17:39:14.566671893Z" level=info msg="Daemon has completed initialization" Nov 12 17:39:14.595725 dockerd[1763]: time="2024-11-12T17:39:14.595311765Z" level=info msg="API listen on /run/docker.sock" Nov 12 17:39:14.595570 systemd[1]: Started docker.service - Docker Application Container Engine. Nov 12 17:39:15.235101 containerd[1535]: time="2024-11-12T17:39:15.235049571Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.10\"" Nov 12 17:39:16.000507 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1786132902.mount: Deactivated successfully. 
Nov 12 17:39:17.290042 containerd[1535]: time="2024-11-12T17:39:17.289987227Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.29.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 17:39:17.290571 containerd[1535]: time="2024-11-12T17:39:17.290528592Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.29.10: active requests=0, bytes read=32201617" Nov 12 17:39:17.291691 containerd[1535]: time="2024-11-12T17:39:17.291658160Z" level=info msg="ImageCreate event name:\"sha256:001ac07c2bb7d0e08d405a19d935c926c393c971a2801756755b8958a7306ca0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 17:39:17.294658 containerd[1535]: time="2024-11-12T17:39:17.294592014Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:b4362c227fb9a8e1961e17bc5cb55e3fea4414da9936d71663d223d7eda23669\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 17:39:17.295861 containerd[1535]: time="2024-11-12T17:39:17.295829496Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.29.10\" with image id \"sha256:001ac07c2bb7d0e08d405a19d935c926c393c971a2801756755b8958a7306ca0\", repo tag \"registry.k8s.io/kube-apiserver:v1.29.10\", repo digest \"registry.k8s.io/kube-apiserver@sha256:b4362c227fb9a8e1961e17bc5cb55e3fea4414da9936d71663d223d7eda23669\", size \"32198415\" in 2.060735007s" Nov 12 17:39:17.296113 containerd[1535]: time="2024-11-12T17:39:17.295968047Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.10\" returns image reference \"sha256:001ac07c2bb7d0e08d405a19d935c926c393c971a2801756755b8958a7306ca0\"" Nov 12 17:39:17.316933 containerd[1535]: time="2024-11-12T17:39:17.316843201Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.10\"" Nov 12 17:39:18.764527 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Nov 12 17:39:18.772897 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 12 17:39:18.862893 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Nov 12 17:39:18.866810 (kubelet)[1994]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 12 17:39:18.991805 containerd[1535]: time="2024-11-12T17:39:18.991754524Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.29.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 17:39:18.993196 containerd[1535]: time="2024-11-12T17:39:18.992879217Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.29.10: active requests=0, bytes read=29381046" Nov 12 17:39:18.993748 containerd[1535]: time="2024-11-12T17:39:18.993718487Z" level=info msg="ImageCreate event name:\"sha256:27bef186b28e50ade2a010ef9201877431fb732ef6e370cb79149e8bd65220d7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 17:39:18.999726 containerd[1535]: time="2024-11-12T17:39:18.999007853Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:d74524a4d9d071510c5abb6404bf4daf2609510d8d5f0683e1efd83d69176647\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 17:39:19.003220 containerd[1535]: time="2024-11-12T17:39:19.003174210Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.29.10\" with image id \"sha256:27bef186b28e50ade2a010ef9201877431fb732ef6e370cb79149e8bd65220d7\", repo tag \"registry.k8s.io/kube-controller-manager:v1.29.10\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:d74524a4d9d071510c5abb6404bf4daf2609510d8d5f0683e1efd83d69176647\", size \"30783669\" in 1.686295171s" Nov 12 17:39:19.003220 containerd[1535]: time="2024-11-12T17:39:19.003208568Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.10\" returns image reference \"sha256:27bef186b28e50ade2a010ef9201877431fb732ef6e370cb79149e8bd65220d7\"" Nov 12 17:39:19.017196 kubelet[1994]: E1112 17:39:19.017090 1994 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 12 17:39:19.021669 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 12 17:39:19.021831 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Nov 12 17:39:19.022294 containerd[1535]: time="2024-11-12T17:39:19.022264145Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.10\"" Nov 12 17:39:20.225169 containerd[1535]: time="2024-11-12T17:39:20.224947480Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.29.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 17:39:20.226096 containerd[1535]: time="2024-11-12T17:39:20.225866832Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.29.10: active requests=0, bytes read=15770290" Nov 12 17:39:20.226809 containerd[1535]: time="2024-11-12T17:39:20.226768585Z" level=info msg="ImageCreate event name:\"sha256:a8e5012443313f8a99b528b68845e2bcb151785ed5c057613dad7ca5b03c7e60\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 17:39:20.229968 containerd[1535]: time="2024-11-12T17:39:20.229914620Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:41f2fb005da3fa5512bfc7f267a6f08aaea27c9f7c6d9a93c7ee28607c1f2f77\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 17:39:20.231116 containerd[1535]: time="2024-11-12T17:39:20.231078079Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.29.10\" with image id \"sha256:a8e5012443313f8a99b528b68845e2bcb151785ed5c057613dad7ca5b03c7e60\", repo tag \"registry.k8s.io/kube-scheduler:v1.29.10\", repo digest \"registry.k8s.io/kube-scheduler@sha256:41f2fb005da3fa5512bfc7f267a6f08aaea27c9f7c6d9a93c7ee28607c1f2f77\", size \"17172931\" in 1.208776897s" Nov 12 17:39:20.231166 containerd[1535]: time="2024-11-12T17:39:20.231115237Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.10\" returns image reference \"sha256:a8e5012443313f8a99b528b68845e2bcb151785ed5c057613dad7ca5b03c7e60\"" Nov 12 17:39:20.250079 containerd[1535]: time="2024-11-12T17:39:20.249901014Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.10\"" Nov 12 17:39:21.207982 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2609894756.mount: Deactivated successfully. 
Nov 12 17:39:21.531753 containerd[1535]: time="2024-11-12T17:39:21.531603325Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.29.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 17:39:21.532395 containerd[1535]: time="2024-11-12T17:39:21.532369047Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.29.10: active requests=0, bytes read=25272231" Nov 12 17:39:21.533016 containerd[1535]: time="2024-11-12T17:39:21.532996576Z" level=info msg="ImageCreate event name:\"sha256:4e66440765478454d48b169d648b000501e24066c0bad7c378bd9e8506bb919f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 17:39:21.535336 containerd[1535]: time="2024-11-12T17:39:21.535297063Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:3c5ceb7942f21793d4cb5880bc0ed7ca7d7f93318fc3f0830816593b86aa19d8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 17:39:21.535793 containerd[1535]: time="2024-11-12T17:39:21.535761721Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.29.10\" with image id \"sha256:4e66440765478454d48b169d648b000501e24066c0bad7c378bd9e8506bb919f\", repo tag \"registry.k8s.io/kube-proxy:v1.29.10\", repo digest \"registry.k8s.io/kube-proxy@sha256:3c5ceb7942f21793d4cb5880bc0ed7ca7d7f93318fc3f0830816593b86aa19d8\", size \"25271248\" in 1.285822908s" Nov 12 17:39:21.535793 containerd[1535]: time="2024-11-12T17:39:21.535791199Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.10\" returns image reference \"sha256:4e66440765478454d48b169d648b000501e24066c0bad7c378bd9e8506bb919f\"" Nov 12 17:39:21.554412 containerd[1535]: time="2024-11-12T17:39:21.554362128Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Nov 12 17:39:22.226795 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount697812013.mount: Deactivated successfully. 
Nov 12 17:39:22.827916 containerd[1535]: time="2024-11-12T17:39:22.827773082Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 17:39:22.828811 containerd[1535]: time="2024-11-12T17:39:22.828523608Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485383" Nov 12 17:39:22.829639 containerd[1535]: time="2024-11-12T17:39:22.829576759Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 17:39:22.832559 containerd[1535]: time="2024-11-12T17:39:22.832531183Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 17:39:22.833783 containerd[1535]: time="2024-11-12T17:39:22.833686210Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.279286404s" Nov 12 17:39:22.833783 containerd[1535]: time="2024-11-12T17:39:22.833730128Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\"" Nov 12 17:39:22.850877 containerd[1535]: time="2024-11-12T17:39:22.850843501Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Nov 12 17:39:23.253450 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1101919389.mount: Deactivated successfully. 
Nov 12 17:39:23.257990 containerd[1535]: time="2024-11-12T17:39:23.257949156Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 17:39:23.258620 containerd[1535]: time="2024-11-12T17:39:23.258420376Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=268823" Nov 12 17:39:23.259361 containerd[1535]: time="2024-11-12T17:39:23.259323697Z" level=info msg="ImageCreate event name:\"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 17:39:23.261525 containerd[1535]: time="2024-11-12T17:39:23.261497643Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 17:39:23.262631 containerd[1535]: time="2024-11-12T17:39:23.262604036Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"268051\" in 411.723776ms" Nov 12 17:39:23.263126 containerd[1535]: time="2024-11-12T17:39:23.263041617Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\"" Nov 12 17:39:23.281081 containerd[1535]: time="2024-11-12T17:39:23.281053200Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\"" Nov 12 17:39:23.785980 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2314402629.mount: Deactivated successfully. Nov 12 17:39:25.723299 containerd[1535]: time="2024-11-12T17:39:25.723249455Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 17:39:25.723831 containerd[1535]: time="2024-11-12T17:39:25.723792634Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=65200788" Nov 12 17:39:25.724747 containerd[1535]: time="2024-11-12T17:39:25.724720119Z" level=info msg="ImageCreate event name:\"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 17:39:25.727837 containerd[1535]: time="2024-11-12T17:39:25.727800883Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 17:39:25.729896 containerd[1535]: time="2024-11-12T17:39:25.729766968Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest \"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"65198393\" in 2.448679289s" Nov 12 17:39:25.729896 containerd[1535]: time="2024-11-12T17:39:25.729802647Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\"" Nov 12 17:39:29.187844 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. 
Nov 12 17:39:29.197908 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 12 17:39:29.365268 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 12 17:39:29.370033 (kubelet)[2229]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 12 17:39:29.413859 kubelet[2229]: E1112 17:39:29.413810 2229 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 12 17:39:29.416167 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 12 17:39:29.416309 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 12 17:39:30.285071 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 12 17:39:30.296957 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 12 17:39:30.316039 systemd[1]: Reloading requested from client PID 2246 ('systemctl') (unit session-7.scope)... Nov 12 17:39:30.316057 systemd[1]: Reloading... Nov 12 17:39:30.369766 zram_generator::config[2285]: No configuration found. Nov 12 17:39:30.594399 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 12 17:39:30.643589 systemd[1]: Reloading finished in 327 ms. Nov 12 17:39:30.690111 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Nov 12 17:39:30.690170 systemd[1]: kubelet.service: Failed with result 'signal'. Nov 12 17:39:30.690414 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 12 17:39:30.693050 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 12 17:39:30.791420 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 12 17:39:30.796765 (kubelet)[2343]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 12 17:39:30.843312 kubelet[2343]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 12 17:39:30.843312 kubelet[2343]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Nov 12 17:39:30.843312 kubelet[2343]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Nov 12 17:39:30.844112 kubelet[2343]: I1112 17:39:30.844057 2343 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 12 17:39:31.873837 kubelet[2343]: I1112 17:39:31.873784 2343 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Nov 12 17:39:31.873837 kubelet[2343]: I1112 17:39:31.873817 2343 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 12 17:39:31.874204 kubelet[2343]: I1112 17:39:31.874037 2343 server.go:919] "Client rotation is on, will bootstrap in background" Nov 12 17:39:31.895421 kubelet[2343]: I1112 17:39:31.895303 2343 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 12 17:39:31.895421 kubelet[2343]: E1112 17:39:31.895407 2343 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.22:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.22:6443: connect: connection refused Nov 12 17:39:31.905804 kubelet[2343]: I1112 17:39:31.905772 2343 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Nov 12 17:39:31.906751 kubelet[2343]: I1112 17:39:31.906392 2343 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 12 17:39:31.906751 kubelet[2343]: I1112 17:39:31.906583 2343 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Nov 12 17:39:31.906751 kubelet[2343]: I1112 17:39:31.906596 2343 topology_manager.go:138] "Creating topology manager with none policy" Nov 12 17:39:31.906751 kubelet[2343]: I1112 17:39:31.906604 2343 container_manager_linux.go:301] "Creating device plugin manager" Nov 12 17:39:31.909226 kubelet[2343]: I1112 17:39:31.909185 2343 state_mem.go:36] "Initialized new in-memory state store" Nov 12 17:39:31.915363 kubelet[2343]: I1112 17:39:31.915335 2343 kubelet.go:396] "Attempting to sync node with API server" Nov 12 17:39:31.915363 kubelet[2343]: 
I1112 17:39:31.915366 2343 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 12 17:39:31.915432 kubelet[2343]: I1112 17:39:31.915391 2343 kubelet.go:312] "Adding apiserver pod source" Nov 12 17:39:31.915432 kubelet[2343]: I1112 17:39:31.915407 2343 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 12 17:39:31.916390 kubelet[2343]: W1112 17:39:31.916148 2343 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.0.0.22:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.22:6443: connect: connection refused Nov 12 17:39:31.916390 kubelet[2343]: E1112 17:39:31.916201 2343 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.22:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.22:6443: connect: connection refused Nov 12 17:39:31.919882 kubelet[2343]: W1112 17:39:31.919758 2343 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.0.0.22:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.22:6443: connect: connection refused Nov 12 17:39:31.919882 kubelet[2343]: E1112 17:39:31.919817 2343 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.22:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.22:6443: connect: connection refused Nov 12 17:39:31.920011 kubelet[2343]: I1112 17:39:31.919954 2343 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Nov 12 17:39:31.922732 kubelet[2343]: I1112 17:39:31.922558 2343 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Nov 12 17:39:31.924585 kubelet[2343]: W1112 17:39:31.924556 2343 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Nov 12 17:39:31.925570 kubelet[2343]: I1112 17:39:31.925467 2343 server.go:1256] "Started kubelet" Nov 12 17:39:31.925636 kubelet[2343]: I1112 17:39:31.925590 2343 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 12 17:39:31.927827 kubelet[2343]: I1112 17:39:31.925902 2343 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 12 17:39:31.927827 kubelet[2343]: I1112 17:39:31.925968 2343 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Nov 12 17:39:31.927827 kubelet[2343]: I1112 17:39:31.926753 2343 server.go:461] "Adding debug handlers to kubelet server" Nov 12 17:39:31.927827 kubelet[2343]: I1112 17:39:31.926962 2343 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 12 17:39:31.930064 kubelet[2343]: I1112 17:39:31.929190 2343 volume_manager.go:291] "Starting Kubelet Volume Manager" Nov 12 17:39:31.930064 kubelet[2343]: I1112 17:39:31.929297 2343 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Nov 12 17:39:31.930064 kubelet[2343]: I1112 17:39:31.929368 2343 reconciler_new.go:29] "Reconciler: start to sync state" Nov 12 17:39:31.930064 kubelet[2343]: W1112 17:39:31.929655 2343 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.0.0.22:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.22:6443: connect: connection refused Nov 12 17:39:31.930064 kubelet[2343]: E1112 17:39:31.929696 2343 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.22:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.22:6443: connect: connection refused Nov 12 17:39:31.930456 kubelet[2343]: E1112 17:39:31.930431 2343 kubelet.go:1462] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 12 17:39:31.930659 kubelet[2343]: I1112 17:39:31.930636 2343 factory.go:221] Registration of the systemd container factory successfully Nov 12 17:39:31.930745 kubelet[2343]: E1112 17:39:31.930729 2343 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.22:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.22:6443: connect: connection refused" interval="200ms" Nov 12 17:39:31.930872 kubelet[2343]: I1112 17:39:31.930759 2343 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 12 17:39:31.931114 kubelet[2343]: E1112 17:39:31.931090 2343 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.22:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.22:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1807494bdcf19762 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2024-11-12 17:39:31.925440354 +0000 UTC m=+1.125229150,LastTimestamp:2024-11-12 17:39:31.925440354 +0000 UTC m=+1.125229150,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Nov 12 17:39:31.932158 kubelet[2343]: I1112 17:39:31.932129 2343 factory.go:221] Registration of the containerd container factory successfully Nov 12 17:39:31.940534 kubelet[2343]: I1112 17:39:31.940490 2343 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Nov 12 17:39:31.941590 kubelet[2343]: I1112 17:39:31.941563 2343 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Nov 12 17:39:31.941590 kubelet[2343]: I1112 17:39:31.941591 2343 status_manager.go:217] "Starting to sync pod status with apiserver" Nov 12 17:39:31.941660 kubelet[2343]: I1112 17:39:31.941609 2343 kubelet.go:2329] "Starting kubelet main sync loop" Nov 12 17:39:31.941683 kubelet[2343]: E1112 17:39:31.941670 2343 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 12 17:39:31.947555 kubelet[2343]: W1112 17:39:31.947420 2343 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.0.0.22:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.22:6443: connect: connection refused Nov 12 17:39:31.947555 kubelet[2343]: E1112 17:39:31.947459 2343 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.22:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.22:6443: connect: connection refused Nov 12 17:39:31.950879 kubelet[2343]: I1112 17:39:31.950852 2343 cpu_manager.go:214] "Starting CPU manager" policy="none" Nov 12 17:39:31.950879 kubelet[2343]: I1112 17:39:31.950879 2343 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Nov 12 17:39:31.950977 kubelet[2343]: I1112 17:39:31.950898 2343 state_mem.go:36] "Initialized new in-memory state store" Nov 12 17:39:31.971502 kubelet[2343]: I1112 17:39:31.971461 2343 policy_none.go:49] "None policy: Start" Nov 12 17:39:31.972482 kubelet[2343]: I1112 17:39:31.972462 2343 memory_manager.go:170] "Starting memorymanager" policy="None" Nov 12 17:39:31.972881 kubelet[2343]: I1112 17:39:31.972677 2343 state_mem.go:35] "Initializing new in-memory state store" Nov 12 17:39:31.979177 kubelet[2343]: I1112 17:39:31.979144 2343 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Nov 12 17:39:31.979421 kubelet[2343]: I1112 17:39:31.979405 2343 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 12 17:39:31.981393 kubelet[2343]: E1112 17:39:31.981370 2343 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Nov 12 17:39:32.030658 kubelet[2343]: I1112 17:39:32.030618 2343 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Nov 12 17:39:32.031110 kubelet[2343]: E1112 17:39:32.031066 2343 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.22:6443/api/v1/nodes\": dial tcp 10.0.0.22:6443: connect: connection refused" node="localhost" Nov 12 17:39:32.042463 kubelet[2343]: I1112 17:39:32.042397 2343 topology_manager.go:215] "Topology Admit Handler" podUID="178808dbae0e3eeb6293fa016130b529" podNamespace="kube-system" podName="kube-apiserver-localhost" Nov 12 17:39:32.047600 kubelet[2343]: I1112 17:39:32.043519 2343 topology_manager.go:215] "Topology Admit Handler" podUID="33932df710fd78419c0859d7fa44b8e7" podNamespace="kube-system" podName="kube-controller-manager-localhost" Nov 12 17:39:32.054026 kubelet[2343]: I1112 17:39:32.053170 2343 topology_manager.go:215] "Topology Admit Handler" podUID="c7145bec6839b5d7dcb0c5beff5515b4" podNamespace="kube-system" podName="kube-scheduler-localhost" Nov 12 17:39:32.129979 kubelet[2343]: I1112 17:39:32.129862 2343 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/33932df710fd78419c0859d7fa44b8e7-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"33932df710fd78419c0859d7fa44b8e7\") " pod="kube-system/kube-controller-manager-localhost" Nov 12 17:39:32.129979 kubelet[2343]: I1112 17:39:32.129908 2343 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/33932df710fd78419c0859d7fa44b8e7-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"33932df710fd78419c0859d7fa44b8e7\") " pod="kube-system/kube-controller-manager-localhost" Nov 12 17:39:32.129979 kubelet[2343]: I1112 17:39:32.129932 2343 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/178808dbae0e3eeb6293fa016130b529-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"178808dbae0e3eeb6293fa016130b529\") " pod="kube-system/kube-apiserver-localhost" Nov 12 17:39:32.129979 kubelet[2343]: I1112 17:39:32.129951 2343 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/178808dbae0e3eeb6293fa016130b529-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"178808dbae0e3eeb6293fa016130b529\") " pod="kube-system/kube-apiserver-localhost" Nov 12 17:39:32.129979 kubelet[2343]: I1112 17:39:32.129981 2343 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/178808dbae0e3eeb6293fa016130b529-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"178808dbae0e3eeb6293fa016130b529\") " pod="kube-system/kube-apiserver-localhost" Nov 12 17:39:32.130176 kubelet[2343]: I1112 17:39:32.130002 2343 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/33932df710fd78419c0859d7fa44b8e7-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"33932df710fd78419c0859d7fa44b8e7\") " pod="kube-system/kube-controller-manager-localhost" Nov 12 17:39:32.130176 kubelet[2343]: I1112 17:39:32.130022 2343 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/33932df710fd78419c0859d7fa44b8e7-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"33932df710fd78419c0859d7fa44b8e7\") " pod="kube-system/kube-controller-manager-localhost" Nov 12 17:39:32.130176 kubelet[2343]: I1112 17:39:32.130042 2343 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/33932df710fd78419c0859d7fa44b8e7-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"33932df710fd78419c0859d7fa44b8e7\") " pod="kube-system/kube-controller-manager-localhost" Nov 12 17:39:32.130176 kubelet[2343]: I1112 17:39:32.130061 2343 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c7145bec6839b5d7dcb0c5beff5515b4-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"c7145bec6839b5d7dcb0c5beff5515b4\") " pod="kube-system/kube-scheduler-localhost" Nov 12 17:39:32.132335 kubelet[2343]: E1112 
17:39:32.132298 2343 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.22:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.22:6443: connect: connection refused" interval="400ms" Nov 12 17:39:32.233086 kubelet[2343]: I1112 17:39:32.233044 2343 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Nov 12 17:39:32.233451 kubelet[2343]: E1112 17:39:32.233423 2343 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.22:6443/api/v1/nodes\": dial tcp 10.0.0.22:6443: connect: connection refused" node="localhost" Nov 12 17:39:32.357107 kubelet[2343]: E1112 17:39:32.357062 2343 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 17:39:32.357804 containerd[1535]: time="2024-11-12T17:39:32.357676063Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:178808dbae0e3eeb6293fa016130b529,Namespace:kube-system,Attempt:0,}" Nov 12 17:39:32.365082 kubelet[2343]: E1112 17:39:32.364951 2343 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 17:39:32.365082 kubelet[2343]: E1112 17:39:32.365003 2343 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 17:39:32.365476 containerd[1535]: time="2024-11-12T17:39:32.365350718Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:c7145bec6839b5d7dcb0c5beff5515b4,Namespace:kube-system,Attempt:0,}" Nov 12 17:39:32.365506 containerd[1535]: time="2024-11-12T17:39:32.365358957Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:33932df710fd78419c0859d7fa44b8e7,Namespace:kube-system,Attempt:0,}" Nov 12 17:39:32.533099 kubelet[2343]: E1112 17:39:32.533064 2343 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.22:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.22:6443: connect: connection refused" interval="800ms" Nov 12 17:39:32.634676 kubelet[2343]: I1112 17:39:32.634651 2343 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Nov 12 17:39:32.635148 kubelet[2343]: E1112 17:39:32.635129 2343 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.22:6443/api/v1/nodes\": dial tcp 10.0.0.22:6443: connect: connection refused" node="localhost" Nov 12 17:39:32.875421 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3333962110.mount: Deactivated successfully. 
Nov 12 17:39:32.880787 containerd[1535]: time="2024-11-12T17:39:32.880744087Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 12 17:39:32.881571 containerd[1535]: time="2024-11-12T17:39:32.881534228Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Nov 12 17:39:32.882300 containerd[1535]: time="2024-11-12T17:39:32.882242731Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 12 17:39:32.883851 containerd[1535]: time="2024-11-12T17:39:32.883794534Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 12 17:39:32.884188 containerd[1535]: time="2024-11-12T17:39:32.883974530Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175" Nov 12 17:39:32.884548 containerd[1535]: time="2024-11-12T17:39:32.884520476Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Nov 12 17:39:32.884626 containerd[1535]: time="2024-11-12T17:39:32.884601834Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 12 17:39:32.887659 containerd[1535]: time="2024-11-12T17:39:32.887628881Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 12 17:39:32.889214 containerd[1535]: time="2024-11-12T17:39:32.889180444Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 523.751568ms" Nov 12 17:39:32.889900 containerd[1535]: time="2024-11-12T17:39:32.889874667Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 532.098486ms" Nov 12 17:39:32.893005 containerd[1535]: time="2024-11-12T17:39:32.892903914Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 527.363601ms" Nov 12 17:39:33.055769 containerd[1535]: time="2024-11-12T17:39:33.053671997Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 17:39:33.055769 containerd[1535]: time="2024-11-12T17:39:33.053765235Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 17:39:33.055769 containerd[1535]: time="2024-11-12T17:39:33.053782074Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 17:39:33.055769 containerd[1535]: time="2024-11-12T17:39:33.054517098Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 17:39:33.056688 containerd[1535]: time="2024-11-12T17:39:33.056606850Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 17:39:33.056688 containerd[1535]: time="2024-11-12T17:39:33.056671369Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 17:39:33.056784 containerd[1535]: time="2024-11-12T17:39:33.056687208Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 17:39:33.056857 containerd[1535]: time="2024-11-12T17:39:33.056790166Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 17:39:33.057316 containerd[1535]: time="2024-11-12T17:39:33.057229996Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 17:39:33.057316 containerd[1535]: time="2024-11-12T17:39:33.057296475Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 17:39:33.057316 containerd[1535]: time="2024-11-12T17:39:33.057307514Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 17:39:33.057529 containerd[1535]: time="2024-11-12T17:39:33.057468191Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 17:39:33.111387 containerd[1535]: time="2024-11-12T17:39:33.104921718Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:178808dbae0e3eeb6293fa016130b529,Namespace:kube-system,Attempt:0,} returns sandbox id \"feded07f623acf76554320b01940629db6910261f83b5944d45bec649c320e30\"" Nov 12 17:39:33.111387 containerd[1535]: time="2024-11-12T17:39:33.107132228Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:33932df710fd78419c0859d7fa44b8e7,Namespace:kube-system,Attempt:0,} returns sandbox id \"8041fb37397951c753396eaa092048cd718fb4a7cc7bbdef375a475a5edec807\"" Nov 12 17:39:33.111536 kubelet[2343]: E1112 17:39:33.107880 2343 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 17:39:33.111536 kubelet[2343]: E1112 17:39:33.108924 2343 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 17:39:33.115463 containerd[1535]: time="2024-11-12T17:39:33.112639703Z" level=info msg="CreateContainer within sandbox \"8041fb37397951c753396eaa092048cd718fb4a7cc7bbdef375a475a5edec807\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Nov 12 17:39:33.115463 containerd[1535]: time="2024-11-12T17:39:33.113951074Z" level=info msg="CreateContainer within sandbox \"feded07f623acf76554320b01940629db6910261f83b5944d45bec649c320e30\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Nov 12 17:39:33.120720 containerd[1535]: time="2024-11-12T17:39:33.120632123Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:c7145bec6839b5d7dcb0c5beff5515b4,Namespace:kube-system,Attempt:0,} returns sandbox id \"f447ba8c8d2fed87dfeef184540ee7e14da87eb6a948eb5f79993a5c5d13885c\"" Nov 12 17:39:33.121906 kubelet[2343]: E1112 17:39:33.121884 2343 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 17:39:33.125951 containerd[1535]: time="2024-11-12T17:39:33.125851805Z" level=info msg="CreateContainer within sandbox \"f447ba8c8d2fed87dfeef184540ee7e14da87eb6a948eb5f79993a5c5d13885c\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Nov 12 17:39:33.136869 containerd[1535]: time="2024-11-12T17:39:33.136763318Z" level=info msg="CreateContainer within sandbox \"feded07f623acf76554320b01940629db6910261f83b5944d45bec649c320e30\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"117683fc5c2b6a70b63f3501cc0ae3e10cba6fef41d4704ee569b186711f20c6\"" Nov 12 17:39:33.137481 containerd[1535]: time="2024-11-12T17:39:33.137452382Z" level=info msg="StartContainer for \"117683fc5c2b6a70b63f3501cc0ae3e10cba6fef41d4704ee569b186711f20c6\"" Nov 12 17:39:33.139843 containerd[1535]: time="2024-11-12T17:39:33.139796889Z" level=info msg="CreateContainer within sandbox \"f447ba8c8d2fed87dfeef184540ee7e14da87eb6a948eb5f79993a5c5d13885c\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"b3c4aac48aea2c708124b2bd197e732ff230ef13415aeff262e7fea618ab7104\"" Nov 12 17:39:33.140164 containerd[1535]: time="2024-11-12T17:39:33.140136842Z" level=info msg="StartContainer for 
\"b3c4aac48aea2c708124b2bd197e732ff230ef13415aeff262e7fea618ab7104\"" Nov 12 17:39:33.141729 containerd[1535]: time="2024-11-12T17:39:33.141505371Z" level=info msg="CreateContainer within sandbox \"8041fb37397951c753396eaa092048cd718fb4a7cc7bbdef375a475a5edec807\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"54c32361b00e7e486f0cb80b04ebf92e873f3b28d8e55b7ac42d35f045119ee1\"" Nov 12 17:39:33.143813 containerd[1535]: time="2024-11-12T17:39:33.143772319Z" level=info msg="StartContainer for \"54c32361b00e7e486f0cb80b04ebf92e873f3b28d8e55b7ac42d35f045119ee1\"" Nov 12 17:39:33.217605 containerd[1535]: time="2024-11-12T17:39:33.216663551Z" level=info msg="StartContainer for \"117683fc5c2b6a70b63f3501cc0ae3e10cba6fef41d4704ee569b186711f20c6\" returns successfully" Nov 12 17:39:33.217605 containerd[1535]: time="2024-11-12T17:39:33.216845747Z" level=info msg="StartContainer for \"54c32361b00e7e486f0cb80b04ebf92e873f3b28d8e55b7ac42d35f045119ee1\" returns successfully" Nov 12 17:39:33.217605 containerd[1535]: time="2024-11-12T17:39:33.216880986Z" level=info msg="StartContainer for \"b3c4aac48aea2c708124b2bd197e732ff230ef13415aeff262e7fea618ab7104\" returns successfully" Nov 12 17:39:33.316846 kubelet[2343]: W1112 17:39:33.316779 2343 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.0.0.22:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.22:6443: connect: connection refused Nov 12 17:39:33.316846 kubelet[2343]: E1112 17:39:33.316849 2343 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.22:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.22:6443: connect: connection refused Nov 12 17:39:33.335860 kubelet[2343]: E1112 17:39:33.334496 2343 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.22:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.22:6443: connect: connection refused" interval="1.6s" Nov 12 17:39:33.437679 kubelet[2343]: I1112 17:39:33.437074 2343 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Nov 12 17:39:33.960272 kubelet[2343]: E1112 17:39:33.960224 2343 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 17:39:33.966300 kubelet[2343]: E1112 17:39:33.966268 2343 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 17:39:33.967174 kubelet[2343]: E1112 17:39:33.967149 2343 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 17:39:34.648883 kubelet[2343]: I1112 17:39:34.647851 2343 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Nov 12 17:39:34.919178 kubelet[2343]: I1112 17:39:34.918033 2343 apiserver.go:52] "Watching apiserver" Nov 12 17:39:34.929657 kubelet[2343]: I1112 17:39:34.929598 2343 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Nov 12 17:39:34.973187 kubelet[2343]: E1112 17:39:34.973158 2343 kubelet.go:1921] "Failed creating a 
mirror pod for" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Nov 12 17:39:34.974002 kubelet[2343]: E1112 17:39:34.973984 2343 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 17:39:37.204860 kubelet[2343]: E1112 17:39:37.204830 2343 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 17:39:37.443591 systemd[1]: Reloading requested from client PID 2617 ('systemctl') (unit session-7.scope)... Nov 12 17:39:37.443607 systemd[1]: Reloading... Nov 12 17:39:37.448526 kubelet[2343]: E1112 17:39:37.448306 2343 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 17:39:37.502762 zram_generator::config[2656]: No configuration found. Nov 12 17:39:37.590359 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 12 17:39:37.646595 systemd[1]: Reloading finished in 202 ms. Nov 12 17:39:37.673125 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Nov 12 17:39:37.681785 systemd[1]: kubelet.service: Deactivated successfully. Nov 12 17:39:37.682104 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 12 17:39:37.694005 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 12 17:39:37.780775 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 12 17:39:37.785190 (kubelet)[2708]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 12 17:39:37.834077 kubelet[2708]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 12 17:39:37.834077 kubelet[2708]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Nov 12 17:39:37.834077 kubelet[2708]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Nov 12 17:39:37.834077 kubelet[2708]: I1112 17:39:37.833975 2708 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 12 17:39:37.839247 kubelet[2708]: I1112 17:39:37.839164 2708 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Nov 12 17:39:37.839247 kubelet[2708]: I1112 17:39:37.839197 2708 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 12 17:39:37.839419 kubelet[2708]: I1112 17:39:37.839398 2708 server.go:919] "Client rotation is on, will bootstrap in background" Nov 12 17:39:37.841774 kubelet[2708]: I1112 17:39:37.841674 2708 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Nov 12 17:39:37.844259 kubelet[2708]: I1112 17:39:37.844194 2708 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 12 17:39:37.850791 kubelet[2708]: I1112 17:39:37.850766 2708 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Nov 12 17:39:37.851202 kubelet[2708]: I1112 17:39:37.851189 2708 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 12 17:39:37.851365 kubelet[2708]: I1112 17:39:37.851350 2708 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Nov 12 17:39:37.851442 kubelet[2708]: I1112 17:39:37.851374 2708 topology_manager.go:138] "Creating topology manager with none policy" Nov 12 17:39:37.851442 kubelet[2708]: I1112 17:39:37.851383 2708 container_manager_linux.go:301] "Creating device plugin manager" Nov 12 17:39:37.851442 kubelet[2708]: I1112 17:39:37.851419 2708 state_mem.go:36] "Initialized new in-memory state store" Nov 12 17:39:37.851520 kubelet[2708]: I1112 17:39:37.851509 2708 kubelet.go:396] "Attempting to sync node with API server" Nov 12 17:39:37.851547 kubelet[2708]: I1112 17:39:37.851525 2708 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 12 17:39:37.851547 kubelet[2708]: I1112 17:39:37.851545 2708 kubelet.go:312] "Adding apiserver pod source" Nov 12 
17:39:37.851588 kubelet[2708]: I1112 17:39:37.851558 2708 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 12 17:39:37.852996 sudo[2723]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Nov 12 17:39:37.853310 sudo[2723]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Nov 12 17:39:37.857443 kubelet[2708]: I1112 17:39:37.857292 2708 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Nov 12 17:39:37.858337 kubelet[2708]: I1112 17:39:37.858321 2708 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Nov 12 17:39:37.860659 kubelet[2708]: I1112 17:39:37.859650 2708 server.go:1256] "Started kubelet" Nov 12 17:39:37.860659 kubelet[2708]: I1112 17:39:37.860513 2708 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Nov 12 17:39:37.861370 kubelet[2708]: I1112 17:39:37.861072 2708 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 12 17:39:37.861370 kubelet[2708]: I1112 17:39:37.861284 2708 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 12 17:39:37.861447 kubelet[2708]: I1112 17:39:37.861388 2708 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 12 17:39:37.865364 kubelet[2708]: I1112 17:39:37.865336 2708 server.go:461] "Adding debug handlers to kubelet server" Nov 12 17:39:37.875450 kubelet[2708]: I1112 17:39:37.871196 2708 volume_manager.go:291] "Starting Kubelet Volume Manager" Nov 12 17:39:37.875450 kubelet[2708]: I1112 17:39:37.872399 2708 factory.go:221] Registration of the systemd container factory successfully Nov 12 17:39:37.875450 kubelet[2708]: I1112 17:39:37.872470 2708 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 12 17:39:37.875450 kubelet[2708]: I1112 17:39:37.872537 2708 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Nov 12 17:39:37.875450 kubelet[2708]: I1112 17:39:37.874660 2708 reconciler_new.go:29] "Reconciler: start to sync state" Nov 12 17:39:37.875450 kubelet[2708]: E1112 17:39:37.874793 2708 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 12 17:39:37.877702 kubelet[2708]: I1112 17:39:37.877669 2708 factory.go:221] Registration of the containerd container factory successfully Nov 12 17:39:37.882622 kubelet[2708]: I1112 17:39:37.882584 2708 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Nov 12 17:39:37.883513 kubelet[2708]: I1112 17:39:37.883479 2708 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Nov 12 17:39:37.883513 kubelet[2708]: I1112 17:39:37.883508 2708 status_manager.go:217] "Starting to sync pod status with apiserver" Nov 12 17:39:37.883593 kubelet[2708]: I1112 17:39:37.883531 2708 kubelet.go:2329] "Starting kubelet main sync loop" Nov 12 17:39:37.883593 kubelet[2708]: E1112 17:39:37.883583 2708 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 12 17:39:37.925195 kubelet[2708]: I1112 17:39:37.925160 2708 cpu_manager.go:214] "Starting CPU manager" policy="none" Nov 12 17:39:37.925195 kubelet[2708]: I1112 17:39:37.925187 2708 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Nov 12 17:39:37.925195 kubelet[2708]: I1112 17:39:37.925207 2708 state_mem.go:36] "Initialized new in-memory state store" Nov 12 17:39:37.925396 kubelet[2708]: I1112 17:39:37.925375 2708 state_mem.go:88] "Updated default CPUSet" cpuSet="" Nov 12 17:39:37.925434 kubelet[2708]: I1112 17:39:37.925401 2708 state_mem.go:96] "Updated CPUSet assignments" assignments={} Nov 12 17:39:37.925434 kubelet[2708]: I1112 17:39:37.925409 2708 policy_none.go:49] "None policy: Start" Nov 12 17:39:37.926059 kubelet[2708]: I1112 17:39:37.926042 2708 memory_manager.go:170] "Starting memorymanager" policy="None" Nov 12 17:39:37.926111 kubelet[2708]: I1112 17:39:37.926067 2708 state_mem.go:35] "Initializing new in-memory state store" Nov 12 17:39:37.926221 kubelet[2708]: I1112 17:39:37.926205 2708 state_mem.go:75] "Updated machine memory state" Nov 12 17:39:37.927398 kubelet[2708]: I1112 17:39:37.927380 2708 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Nov 12 17:39:37.928221 kubelet[2708]: I1112 17:39:37.927617 2708 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 12 17:39:37.977746 kubelet[2708]: I1112 17:39:37.977699 2708 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Nov 12 17:39:37.983726 kubelet[2708]: I1112 17:39:37.983631 2708 kubelet_node_status.go:112] "Node was previously registered" node="localhost" Nov 12 17:39:37.983810 kubelet[2708]: I1112 17:39:37.983741 2708 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Nov 12 17:39:37.984408 kubelet[2708]: I1112 17:39:37.984062 2708 topology_manager.go:215] "Topology Admit Handler" podUID="178808dbae0e3eeb6293fa016130b529" podNamespace="kube-system" podName="kube-apiserver-localhost" Nov 12 17:39:37.984408 kubelet[2708]: I1112 17:39:37.984134 2708 topology_manager.go:215] "Topology Admit Handler" podUID="33932df710fd78419c0859d7fa44b8e7" podNamespace="kube-system" podName="kube-controller-manager-localhost" Nov 12 17:39:37.984408 kubelet[2708]: I1112 17:39:37.984259 2708 topology_manager.go:215] "Topology Admit Handler" podUID="c7145bec6839b5d7dcb0c5beff5515b4" podNamespace="kube-system" podName="kube-scheduler-localhost" Nov 12 17:39:37.988796 kubelet[2708]: E1112 17:39:37.988265 2708 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Nov 12 17:39:37.992684 kubelet[2708]: E1112 17:39:37.992553 2708 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Nov 12 17:39:38.075913 kubelet[2708]: I1112 17:39:38.075875 2708 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/33932df710fd78419c0859d7fa44b8e7-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"33932df710fd78419c0859d7fa44b8e7\") " pod="kube-system/kube-controller-manager-localhost" Nov 12 17:39:38.075913 kubelet[2708]: I1112 17:39:38.075923 2708 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/33932df710fd78419c0859d7fa44b8e7-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"33932df710fd78419c0859d7fa44b8e7\") " pod="kube-system/kube-controller-manager-localhost" Nov 12 17:39:38.076062 kubelet[2708]: I1112 17:39:38.075943 2708 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/33932df710fd78419c0859d7fa44b8e7-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"33932df710fd78419c0859d7fa44b8e7\") " pod="kube-system/kube-controller-manager-localhost" Nov 12 17:39:38.076062 kubelet[2708]: I1112 17:39:38.075963 2708 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/33932df710fd78419c0859d7fa44b8e7-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"33932df710fd78419c0859d7fa44b8e7\") " pod="kube-system/kube-controller-manager-localhost" Nov 12 17:39:38.076062 kubelet[2708]: I1112 17:39:38.075996 2708 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/33932df710fd78419c0859d7fa44b8e7-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"33932df710fd78419c0859d7fa44b8e7\") " pod="kube-system/kube-controller-manager-localhost" Nov 12 17:39:38.076062 kubelet[2708]: I1112 17:39:38.076016 2708 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c7145bec6839b5d7dcb0c5beff5515b4-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"c7145bec6839b5d7dcb0c5beff5515b4\") " pod="kube-system/kube-scheduler-localhost" Nov 12 17:39:38.076062 kubelet[2708]: I1112 17:39:38.076036 2708 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/178808dbae0e3eeb6293fa016130b529-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"178808dbae0e3eeb6293fa016130b529\") " pod="kube-system/kube-apiserver-localhost" Nov 12 17:39:38.076236 kubelet[2708]: I1112 17:39:38.076055 2708 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/178808dbae0e3eeb6293fa016130b529-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"178808dbae0e3eeb6293fa016130b529\") " pod="kube-system/kube-apiserver-localhost" Nov 12 17:39:38.076236 kubelet[2708]: I1112 17:39:38.076077 2708 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/178808dbae0e3eeb6293fa016130b529-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"178808dbae0e3eeb6293fa016130b529\") " pod="kube-system/kube-apiserver-localhost" Nov 12 17:39:38.290161 kubelet[2708]: E1112 
17:39:38.290094 2708 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 17:39:38.293848 kubelet[2708]: E1112 17:39:38.293619 2708 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 17:39:38.294543 kubelet[2708]: E1112 17:39:38.294516 2708 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 17:39:38.324655 sudo[2723]: pam_unix(sudo:session): session closed for user root Nov 12 17:39:38.852608 kubelet[2708]: I1112 17:39:38.852573 2708 apiserver.go:52] "Watching apiserver" Nov 12 17:39:38.873231 kubelet[2708]: I1112 17:39:38.873204 2708 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Nov 12 17:39:38.903945 kubelet[2708]: E1112 17:39:38.903921 2708 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Nov 12 17:39:38.904215 kubelet[2708]: E1112 17:39:38.904202 2708 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 17:39:38.905149 kubelet[2708]: E1112 17:39:38.905128 2708 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Nov 12 17:39:38.906329 kubelet[2708]: E1112 17:39:38.906262 2708 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Nov 12 17:39:38.906424 kubelet[2708]: E1112 17:39:38.906405 2708 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 17:39:38.906924 kubelet[2708]: E1112 17:39:38.906907 2708 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 17:39:38.933448 kubelet[2708]: I1112 17:39:38.933410 2708 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.933369256 podStartE2EDuration="1.933369256s" podCreationTimestamp="2024-11-12 17:39:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-11-12 17:39:38.933144179 +0000 UTC m=+1.144108036" watchObservedRunningTime="2024-11-12 17:39:38.933369256 +0000 UTC m=+1.144333113" Nov 12 17:39:38.949518 kubelet[2708]: I1112 17:39:38.949467 2708 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.949418993 podStartE2EDuration="1.949418993s" podCreationTimestamp="2024-11-12 17:39:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-11-12 17:39:38.948285851 +0000 UTC m=+1.159249708" watchObservedRunningTime="2024-11-12 17:39:38.949418993 +0000 UTC m=+1.160382850" Nov 12 17:39:38.956451 
kubelet[2708]: I1112 17:39:38.956407 2708 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.956365959 podStartE2EDuration="1.956365959s" podCreationTimestamp="2024-11-12 17:39:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-11-12 17:39:38.956116763 +0000 UTC m=+1.167080620" watchObservedRunningTime="2024-11-12 17:39:38.956365959 +0000 UTC m=+1.167329816" Nov 12 17:39:39.900041 kubelet[2708]: E1112 17:39:39.900005 2708 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 17:39:39.900411 kubelet[2708]: E1112 17:39:39.900063 2708 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 17:39:39.900579 kubelet[2708]: E1112 17:39:39.900501 2708 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 17:39:40.568239 sudo[1744]: pam_unix(sudo:session): session closed for user root Nov 12 17:39:40.570876 sshd[1737]: pam_unix(sshd:session): session closed for user core Nov 12 17:39:40.574951 systemd[1]: sshd@6-10.0.0.22:22-10.0.0.1:43262.service: Deactivated successfully. Nov 12 17:39:40.577858 systemd-logind[1515]: Session 7 logged out. Waiting for processes to exit. Nov 12 17:39:40.578033 systemd[1]: session-7.scope: Deactivated successfully. Nov 12 17:39:40.579112 systemd-logind[1515]: Removed session 7. Nov 12 17:39:41.980436 kubelet[2708]: E1112 17:39:41.980097 2708 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 17:39:42.501901 kubelet[2708]: E1112 17:39:42.501564 2708 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 17:39:46.481839 kubelet[2708]: E1112 17:39:46.481759 2708 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 17:39:46.909074 kubelet[2708]: E1112 17:39:46.908950 2708 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 17:39:51.182439 kubelet[2708]: I1112 17:39:51.182402 2708 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Nov 12 17:39:51.183198 kubelet[2708]: I1112 17:39:51.182950 2708 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Nov 12 17:39:51.183227 containerd[1535]: time="2024-11-12T17:39:51.182746267Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Nov 12 17:39:51.929025 kubelet[2708]: I1112 17:39:51.928987 2708 topology_manager.go:215] "Topology Admit Handler" podUID="9db77e23-8b1d-42b9-a18e-4286fc84e748" podNamespace="kube-system" podName="kube-proxy-6gd68" Nov 12 17:39:51.932362 kubelet[2708]: I1112 17:39:51.931922 2708 topology_manager.go:215] "Topology Admit Handler" podUID="2fa23bb9-c87a-4caf-83d4-5b77757f356e" podNamespace="kube-system" podName="cilium-d8mzn" Nov 12 17:39:51.960541 kubelet[2708]: I1112 17:39:51.960495 2708 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2fa23bb9-c87a-4caf-83d4-5b77757f356e-etc-cni-netd\") pod \"cilium-d8mzn\" (UID: \"2fa23bb9-c87a-4caf-83d4-5b77757f356e\") " pod="kube-system/cilium-d8mzn" Nov 12 17:39:51.960541 kubelet[2708]: I1112 17:39:51.960541 2708 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/2fa23bb9-c87a-4caf-83d4-5b77757f356e-cni-path\") pod \"cilium-d8mzn\" (UID: \"2fa23bb9-c87a-4caf-83d4-5b77757f356e\") " pod="kube-system/cilium-d8mzn" Nov 12 17:39:51.960678 kubelet[2708]: I1112 17:39:51.960563 2708 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/2fa23bb9-c87a-4caf-83d4-5b77757f356e-hostproc\") pod \"cilium-d8mzn\" (UID: \"2fa23bb9-c87a-4caf-83d4-5b77757f356e\") " pod="kube-system/cilium-d8mzn" Nov 12 17:39:51.960678 kubelet[2708]: I1112 17:39:51.960586 2708 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2fa23bb9-c87a-4caf-83d4-5b77757f356e-xtables-lock\") pod \"cilium-d8mzn\" (UID: \"2fa23bb9-c87a-4caf-83d4-5b77757f356e\") " pod="kube-system/cilium-d8mzn" Nov 12 17:39:51.960678 kubelet[2708]: I1112 17:39:51.960607 2708 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8v6xs\" (UniqueName: \"kubernetes.io/projected/2fa23bb9-c87a-4caf-83d4-5b77757f356e-kube-api-access-8v6xs\") pod \"cilium-d8mzn\" (UID: \"2fa23bb9-c87a-4caf-83d4-5b77757f356e\") " pod="kube-system/cilium-d8mzn" Nov 12 17:39:51.960678 kubelet[2708]: I1112 17:39:51.960626 2708 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/2fa23bb9-c87a-4caf-83d4-5b77757f356e-cilium-cgroup\") pod \"cilium-d8mzn\" (UID: \"2fa23bb9-c87a-4caf-83d4-5b77757f356e\") " pod="kube-system/cilium-d8mzn" Nov 12 17:39:51.960678 kubelet[2708]: I1112 17:39:51.960645 2708 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/9db77e23-8b1d-42b9-a18e-4286fc84e748-kube-proxy\") pod \"kube-proxy-6gd68\" (UID: \"9db77e23-8b1d-42b9-a18e-4286fc84e748\") " pod="kube-system/kube-proxy-6gd68" Nov 12 17:39:51.960823 kubelet[2708]: I1112 17:39:51.960664 2708 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zthrg\" (UniqueName: \"kubernetes.io/projected/9db77e23-8b1d-42b9-a18e-4286fc84e748-kube-api-access-zthrg\") pod \"kube-proxy-6gd68\" (UID: \"9db77e23-8b1d-42b9-a18e-4286fc84e748\") " pod="kube-system/kube-proxy-6gd68" Nov 12 17:39:51.960823 kubelet[2708]: I1112 17:39:51.960684 2708 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/2fa23bb9-c87a-4caf-83d4-5b77757f356e-host-proc-sys-kernel\") pod \"cilium-d8mzn\" (UID: \"2fa23bb9-c87a-4caf-83d4-5b77757f356e\") " pod="kube-system/cilium-d8mzn" Nov 12 17:39:51.960823 kubelet[2708]: I1112 17:39:51.960717 2708 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/2fa23bb9-c87a-4caf-83d4-5b77757f356e-host-proc-sys-net\") pod \"cilium-d8mzn\" (UID: \"2fa23bb9-c87a-4caf-83d4-5b77757f356e\") " pod="kube-system/cilium-d8mzn" Nov 12 17:39:51.960823 kubelet[2708]: I1112 17:39:51.960739 2708 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9db77e23-8b1d-42b9-a18e-4286fc84e748-lib-modules\") pod \"kube-proxy-6gd68\" (UID: \"9db77e23-8b1d-42b9-a18e-4286fc84e748\") " pod="kube-system/kube-proxy-6gd68" Nov 12 17:39:51.960823 kubelet[2708]: I1112 17:39:51.960761 2708 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2fa23bb9-c87a-4caf-83d4-5b77757f356e-cilium-config-path\") pod \"cilium-d8mzn\" (UID: \"2fa23bb9-c87a-4caf-83d4-5b77757f356e\") " pod="kube-system/cilium-d8mzn" Nov 12 17:39:51.961355 kubelet[2708]: I1112 17:39:51.960780 2708 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9db77e23-8b1d-42b9-a18e-4286fc84e748-xtables-lock\") pod \"kube-proxy-6gd68\" (UID: \"9db77e23-8b1d-42b9-a18e-4286fc84e748\") " pod="kube-system/kube-proxy-6gd68" Nov 12 17:39:51.961355 kubelet[2708]: I1112 17:39:51.960801 2708 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/2fa23bb9-c87a-4caf-83d4-5b77757f356e-cilium-run\") pod \"cilium-d8mzn\" (UID: \"2fa23bb9-c87a-4caf-83d4-5b77757f356e\") " pod="kube-system/cilium-d8mzn" Nov 12 17:39:51.961355 kubelet[2708]: I1112 17:39:51.960819 2708 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/2fa23bb9-c87a-4caf-83d4-5b77757f356e-bpf-maps\") pod \"cilium-d8mzn\" (UID: \"2fa23bb9-c87a-4caf-83d4-5b77757f356e\") " pod="kube-system/cilium-d8mzn" Nov 12 17:39:51.961355 kubelet[2708]: I1112 17:39:51.960839 2708 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2fa23bb9-c87a-4caf-83d4-5b77757f356e-lib-modules\") pod \"cilium-d8mzn\" (UID: \"2fa23bb9-c87a-4caf-83d4-5b77757f356e\") " pod="kube-system/cilium-d8mzn" Nov 12 17:39:51.961355 kubelet[2708]: I1112 17:39:51.960857 2708 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/2fa23bb9-c87a-4caf-83d4-5b77757f356e-hubble-tls\") pod \"cilium-d8mzn\" (UID: \"2fa23bb9-c87a-4caf-83d4-5b77757f356e\") " pod="kube-system/cilium-d8mzn" Nov 12 17:39:51.961355 kubelet[2708]: I1112 17:39:51.960879 2708 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: 
\"kubernetes.io/secret/2fa23bb9-c87a-4caf-83d4-5b77757f356e-clustermesh-secrets\") pod \"cilium-d8mzn\" (UID: \"2fa23bb9-c87a-4caf-83d4-5b77757f356e\") " pod="kube-system/cilium-d8mzn" Nov 12 17:39:51.988425 kubelet[2708]: E1112 17:39:51.988382 2708 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 17:39:52.071644 kubelet[2708]: E1112 17:39:52.071598 2708 projected.go:294] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Nov 12 17:39:52.073822 kubelet[2708]: E1112 17:39:52.073596 2708 projected.go:294] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Nov 12 17:39:52.073822 kubelet[2708]: E1112 17:39:52.073630 2708 projected.go:200] Error preparing data for projected volume kube-api-access-zthrg for pod kube-system/kube-proxy-6gd68: configmap "kube-root-ca.crt" not found Nov 12 17:39:52.073822 kubelet[2708]: E1112 17:39:52.073693 2708 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9db77e23-8b1d-42b9-a18e-4286fc84e748-kube-api-access-zthrg podName:9db77e23-8b1d-42b9-a18e-4286fc84e748 nodeName:}" failed. No retries permitted until 2024-11-12 17:39:52.573674635 +0000 UTC m=+14.784638452 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-zthrg" (UniqueName: "kubernetes.io/projected/9db77e23-8b1d-42b9-a18e-4286fc84e748-kube-api-access-zthrg") pod "kube-proxy-6gd68" (UID: "9db77e23-8b1d-42b9-a18e-4286fc84e748") : configmap "kube-root-ca.crt" not found Nov 12 17:39:52.075271 kubelet[2708]: E1112 17:39:52.075161 2708 projected.go:200] Error preparing data for projected volume kube-api-access-8v6xs for pod kube-system/cilium-d8mzn: configmap "kube-root-ca.crt" not found Nov 12 17:39:52.075271 kubelet[2708]: E1112 17:39:52.075224 2708 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/2fa23bb9-c87a-4caf-83d4-5b77757f356e-kube-api-access-8v6xs podName:2fa23bb9-c87a-4caf-83d4-5b77757f356e nodeName:}" failed. No retries permitted until 2024-11-12 17:39:52.575209305 +0000 UTC m=+14.786173162 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-8v6xs" (UniqueName: "kubernetes.io/projected/2fa23bb9-c87a-4caf-83d4-5b77757f356e-kube-api-access-8v6xs") pod "cilium-d8mzn" (UID: "2fa23bb9-c87a-4caf-83d4-5b77757f356e") : configmap "kube-root-ca.crt" not found Nov 12 17:39:52.116823 update_engine[1522]: I20241112 17:39:52.116752 1522 update_attempter.cc:509] Updating boot flags... 
Nov 12 17:39:52.143621 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (2795) Nov 12 17:39:52.170748 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (2795) Nov 12 17:39:52.227775 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (2795) Nov 12 17:39:52.268922 kubelet[2708]: I1112 17:39:52.268880 2708 topology_manager.go:215] "Topology Admit Handler" podUID="c0ba9503-0caa-4204-ba49-df00b3a0d32f" podNamespace="kube-system" podName="cilium-operator-5cc964979-gltkl" Nov 12 17:39:52.364935 kubelet[2708]: I1112 17:39:52.364892 2708 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dbgl7\" (UniqueName: \"kubernetes.io/projected/c0ba9503-0caa-4204-ba49-df00b3a0d32f-kube-api-access-dbgl7\") pod \"cilium-operator-5cc964979-gltkl\" (UID: \"c0ba9503-0caa-4204-ba49-df00b3a0d32f\") " pod="kube-system/cilium-operator-5cc964979-gltkl" Nov 12 17:39:52.364935 kubelet[2708]: I1112 17:39:52.364943 2708 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c0ba9503-0caa-4204-ba49-df00b3a0d32f-cilium-config-path\") pod \"cilium-operator-5cc964979-gltkl\" (UID: \"c0ba9503-0caa-4204-ba49-df00b3a0d32f\") " pod="kube-system/cilium-operator-5cc964979-gltkl" Nov 12 17:39:52.510895 kubelet[2708]: E1112 17:39:52.510791 2708 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 17:39:52.572834 kubelet[2708]: E1112 17:39:52.572794 2708 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 17:39:52.573544 containerd[1535]: time="2024-11-12T17:39:52.573504719Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-gltkl,Uid:c0ba9503-0caa-4204-ba49-df00b3a0d32f,Namespace:kube-system,Attempt:0,}" Nov 12 17:39:52.595288 containerd[1535]: time="2024-11-12T17:39:52.594991657Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 17:39:52.595288 containerd[1535]: time="2024-11-12T17:39:52.595060816Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 17:39:52.595288 containerd[1535]: time="2024-11-12T17:39:52.595075776Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 17:39:52.595288 containerd[1535]: time="2024-11-12T17:39:52.595201335Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 17:39:52.638235 containerd[1535]: time="2024-11-12T17:39:52.638162450Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-gltkl,Uid:c0ba9503-0caa-4204-ba49-df00b3a0d32f,Namespace:kube-system,Attempt:0,} returns sandbox id \"ad1ea382f75ae9ae33f597493fd57ec2588140654da1354a0f27a8c0c77a90a5\"" Nov 12 17:39:52.639087 kubelet[2708]: E1112 17:39:52.639065 2708 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 17:39:52.641664 containerd[1535]: time="2024-11-12T17:39:52.641003791Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Nov 12 17:39:52.835338 kubelet[2708]: E1112 17:39:52.835039 2708 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 17:39:52.836263 containerd[1535]: time="2024-11-12T17:39:52.836218136Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-6gd68,Uid:9db77e23-8b1d-42b9-a18e-4286fc84e748,Namespace:kube-system,Attempt:0,}" Nov 12 17:39:52.841451 kubelet[2708]: E1112 17:39:52.841420 2708 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 17:39:52.842384 containerd[1535]: time="2024-11-12T17:39:52.841819419Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-d8mzn,Uid:2fa23bb9-c87a-4caf-83d4-5b77757f356e,Namespace:kube-system,Attempt:0,}" Nov 12 17:39:52.858439 containerd[1535]: time="2024-11-12T17:39:52.858356550Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 17:39:52.858439 containerd[1535]: time="2024-11-12T17:39:52.858414189Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 17:39:52.858633 containerd[1535]: time="2024-11-12T17:39:52.858425629Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 17:39:52.858633 containerd[1535]: time="2024-11-12T17:39:52.858519629Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 17:39:52.860130 containerd[1535]: time="2024-11-12T17:39:52.860044898Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 17:39:52.860130 containerd[1535]: time="2024-11-12T17:39:52.860098218Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 17:39:52.860314 containerd[1535]: time="2024-11-12T17:39:52.860115458Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 17:39:52.860404 containerd[1535]: time="2024-11-12T17:39:52.860272457Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 17:39:52.896796 containerd[1535]: time="2024-11-12T17:39:52.896755055Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-d8mzn,Uid:2fa23bb9-c87a-4caf-83d4-5b77757f356e,Namespace:kube-system,Attempt:0,} returns sandbox id \"7d9fa8ecc881617269ccb6e037e863944cc6dd241ac470ee62e3f6acf5bfbc9f\"" Nov 12 17:39:52.897391 kubelet[2708]: E1112 17:39:52.897371 2708 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 17:39:52.904063 containerd[1535]: time="2024-11-12T17:39:52.904023367Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-6gd68,Uid:9db77e23-8b1d-42b9-a18e-4286fc84e748,Namespace:kube-system,Attempt:0,} returns sandbox id \"c587445e076880f5c1a7ee056b4c89f8defbe27322fe1b819806770ce9565e27\"" Nov 12 17:39:52.905062 kubelet[2708]: E1112 17:39:52.905026 2708 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 17:39:52.908766 containerd[1535]: time="2024-11-12T17:39:52.908728295Z" level=info msg="CreateContainer within sandbox \"c587445e076880f5c1a7ee056b4c89f8defbe27322fe1b819806770ce9565e27\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Nov 12 17:39:52.955294 containerd[1535]: time="2024-11-12T17:39:52.955241307Z" level=info msg="CreateContainer within sandbox \"c587445e076880f5c1a7ee056b4c89f8defbe27322fe1b819806770ce9565e27\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"fec748ecab6678edcfeffedf4c08b07052df115eb5e586bb1acba768b27e7af6\"" Nov 12 17:39:52.956072 containerd[1535]: time="2024-11-12T17:39:52.955845503Z" level=info msg="StartContainer for \"fec748ecab6678edcfeffedf4c08b07052df115eb5e586bb1acba768b27e7af6\"" Nov 12 17:39:53.004357 containerd[1535]: time="2024-11-12T17:39:53.004301743Z" level=info msg="StartContainer for \"fec748ecab6678edcfeffedf4c08b07052df115eb5e586bb1acba768b27e7af6\" returns successfully" Nov 12 17:39:53.929121 kubelet[2708]: E1112 17:39:53.929046 2708 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 17:39:53.938602 kubelet[2708]: I1112 17:39:53.938568 2708 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-6gd68" podStartSLOduration=2.938531453 podStartE2EDuration="2.938531453s" podCreationTimestamp="2024-11-12 17:39:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-11-12 17:39:53.937781817 +0000 UTC m=+16.148745674" watchObservedRunningTime="2024-11-12 17:39:53.938531453 +0000 UTC m=+16.149495310" Nov 12 17:39:54.764110 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1884867158.mount: Deactivated successfully. 
Nov 12 17:39:54.931434 kubelet[2708]: E1112 17:39:54.931100 2708 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 17:39:56.053142 containerd[1535]: time="2024-11-12T17:39:56.053083981Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 17:39:56.053748 containerd[1535]: time="2024-11-12T17:39:56.053699538Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17138294" Nov 12 17:39:56.054550 containerd[1535]: time="2024-11-12T17:39:56.054505734Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 17:39:56.055917 containerd[1535]: time="2024-11-12T17:39:56.055884007Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 3.414832096s" Nov 12 17:39:56.055991 containerd[1535]: time="2024-11-12T17:39:56.055922087Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Nov 12 17:39:56.061390 containerd[1535]: time="2024-11-12T17:39:56.061323899Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Nov 12 17:39:56.066042 containerd[1535]: time="2024-11-12T17:39:56.065997555Z" level=info msg="CreateContainer within sandbox \"ad1ea382f75ae9ae33f597493fd57ec2588140654da1354a0f27a8c0c77a90a5\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Nov 12 17:39:56.075554 containerd[1535]: time="2024-11-12T17:39:56.075495906Z" level=info msg="CreateContainer within sandbox \"ad1ea382f75ae9ae33f597493fd57ec2588140654da1354a0f27a8c0c77a90a5\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"1459c6c93d79c90e78be666c0e9426e9272fed5d1a5fbc15a6663c668bdc9902\"" Nov 12 17:39:56.076163 containerd[1535]: time="2024-11-12T17:39:56.075953744Z" level=info msg="StartContainer for \"1459c6c93d79c90e78be666c0e9426e9272fed5d1a5fbc15a6663c668bdc9902\"" Nov 12 17:39:56.124849 containerd[1535]: time="2024-11-12T17:39:56.124808974Z" level=info msg="StartContainer for \"1459c6c93d79c90e78be666c0e9426e9272fed5d1a5fbc15a6663c668bdc9902\" returns successfully" Nov 12 17:39:56.947150 kubelet[2708]: E1112 17:39:56.947051 2708 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 17:39:56.970883 kubelet[2708]: I1112 17:39:56.970750 2708 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-5cc964979-gltkl" podStartSLOduration=1.552711242 podStartE2EDuration="4.970690639s" 
podCreationTimestamp="2024-11-12 17:39:52 +0000 UTC" firstStartedPulling="2024-11-12 17:39:52.63978836 +0000 UTC m=+14.850752217" lastFinishedPulling="2024-11-12 17:39:56.057767757 +0000 UTC m=+18.268731614" observedRunningTime="2024-11-12 17:39:56.97050636 +0000 UTC m=+19.181470217" watchObservedRunningTime="2024-11-12 17:39:56.970690639 +0000 UTC m=+19.181654496" Nov 12 17:39:57.946088 kubelet[2708]: E1112 17:39:57.945745 2708 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 17:40:01.507434 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount725385799.mount: Deactivated successfully. Nov 12 17:40:02.787658 containerd[1535]: time="2024-11-12T17:40:02.787597308Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 17:40:02.788230 containerd[1535]: time="2024-11-12T17:40:02.788177746Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157651510" Nov 12 17:40:02.789026 containerd[1535]: time="2024-11-12T17:40:02.789002703Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 17:40:02.790770 containerd[1535]: time="2024-11-12T17:40:02.790497818Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 6.729136839s" Nov 12 17:40:02.790770 containerd[1535]: time="2024-11-12T17:40:02.790530738Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Nov 12 17:40:02.793312 containerd[1535]: time="2024-11-12T17:40:02.793282248Z" level=info msg="CreateContainer within sandbox \"7d9fa8ecc881617269ccb6e037e863944cc6dd241ac470ee62e3f6acf5bfbc9f\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Nov 12 17:40:02.818646 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount78674782.mount: Deactivated successfully. 
Nov 12 17:40:02.822608 containerd[1535]: time="2024-11-12T17:40:02.822562906Z" level=info msg="CreateContainer within sandbox \"7d9fa8ecc881617269ccb6e037e863944cc6dd241ac470ee62e3f6acf5bfbc9f\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"5a517554a77e4f868726c2c5bdf06adb2906be2c913bde50456f1322013157b9\"" Nov 12 17:40:02.823181 containerd[1535]: time="2024-11-12T17:40:02.823145544Z" level=info msg="StartContainer for \"5a517554a77e4f868726c2c5bdf06adb2906be2c913bde50456f1322013157b9\"" Nov 12 17:40:02.869411 containerd[1535]: time="2024-11-12T17:40:02.867093071Z" level=info msg="StartContainer for \"5a517554a77e4f868726c2c5bdf06adb2906be2c913bde50456f1322013157b9\" returns successfully" Nov 12 17:40:02.956729 kubelet[2708]: E1112 17:40:02.956682 2708 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 17:40:03.080909 containerd[1535]: time="2024-11-12T17:40:03.080764345Z" level=info msg="shim disconnected" id=5a517554a77e4f868726c2c5bdf06adb2906be2c913bde50456f1322013157b9 namespace=k8s.io Nov 12 17:40:03.080909 containerd[1535]: time="2024-11-12T17:40:03.080820865Z" level=warning msg="cleaning up after shim disconnected" id=5a517554a77e4f868726c2c5bdf06adb2906be2c913bde50456f1322013157b9 namespace=k8s.io Nov 12 17:40:03.080909 containerd[1535]: time="2024-11-12T17:40:03.080839305Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 12 17:40:03.816341 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5a517554a77e4f868726c2c5bdf06adb2906be2c913bde50456f1322013157b9-rootfs.mount: Deactivated successfully. Nov 12 17:40:03.962445 kubelet[2708]: E1112 17:40:03.962421 2708 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 17:40:03.966443 containerd[1535]: time="2024-11-12T17:40:03.966287137Z" level=info msg="CreateContainer within sandbox \"7d9fa8ecc881617269ccb6e037e863944cc6dd241ac470ee62e3f6acf5bfbc9f\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Nov 12 17:40:04.000623 containerd[1535]: time="2024-11-12T17:40:04.000583505Z" level=info msg="CreateContainer within sandbox \"7d9fa8ecc881617269ccb6e037e863944cc6dd241ac470ee62e3f6acf5bfbc9f\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"e334d7244ffa80ed3a01d86f1c7fb60a98412276a454a1c8a196dc55e4c9e423\"" Nov 12 17:40:04.001745 containerd[1535]: time="2024-11-12T17:40:04.001449183Z" level=info msg="StartContainer for \"e334d7244ffa80ed3a01d86f1c7fb60a98412276a454a1c8a196dc55e4c9e423\"" Nov 12 17:40:04.045953 containerd[1535]: time="2024-11-12T17:40:04.045914127Z" level=info msg="StartContainer for \"e334d7244ffa80ed3a01d86f1c7fb60a98412276a454a1c8a196dc55e4c9e423\" returns successfully" Nov 12 17:40:04.079624 systemd[1]: systemd-sysctl.service: Deactivated successfully. Nov 12 17:40:04.079910 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Nov 12 17:40:04.079976 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Nov 12 17:40:04.086970 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Nov 12 17:40:04.098752 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Nov 12 17:40:04.102181 containerd[1535]: time="2024-11-12T17:40:04.102100515Z" level=info msg="shim disconnected" id=e334d7244ffa80ed3a01d86f1c7fb60a98412276a454a1c8a196dc55e4c9e423 namespace=k8s.io Nov 12 17:40:04.102181 containerd[1535]: time="2024-11-12T17:40:04.102157715Z" level=warning msg="cleaning up after shim disconnected" id=e334d7244ffa80ed3a01d86f1c7fb60a98412276a454a1c8a196dc55e4c9e423 namespace=k8s.io Nov 12 17:40:04.102181 containerd[1535]: time="2024-11-12T17:40:04.102166035Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 12 17:40:04.816547 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e334d7244ffa80ed3a01d86f1c7fb60a98412276a454a1c8a196dc55e4c9e423-rootfs.mount: Deactivated successfully. Nov 12 17:40:04.966867 kubelet[2708]: E1112 17:40:04.966288 2708 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 17:40:04.971544 containerd[1535]: time="2024-11-12T17:40:04.971158937Z" level=info msg="CreateContainer within sandbox \"7d9fa8ecc881617269ccb6e037e863944cc6dd241ac470ee62e3f6acf5bfbc9f\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Nov 12 17:40:05.011290 containerd[1535]: time="2024-11-12T17:40:05.011239937Z" level=info msg="CreateContainer within sandbox \"7d9fa8ecc881617269ccb6e037e863944cc6dd241ac470ee62e3f6acf5bfbc9f\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"6d81be8a76c855ab81998eb30db85e4d7fbdd8731ab0c5f394ab9a6a67a5b1e4\"" Nov 12 17:40:05.011789 containerd[1535]: time="2024-11-12T17:40:05.011762455Z" level=info msg="StartContainer for \"6d81be8a76c855ab81998eb30db85e4d7fbdd8731ab0c5f394ab9a6a67a5b1e4\"" Nov 12 17:40:05.039787 systemd[1]: run-containerd-runc-k8s.io-6d81be8a76c855ab81998eb30db85e4d7fbdd8731ab0c5f394ab9a6a67a5b1e4-runc.Ni1XEf.mount: Deactivated successfully. Nov 12 17:40:05.079119 containerd[1535]: time="2024-11-12T17:40:05.079004263Z" level=info msg="StartContainer for \"6d81be8a76c855ab81998eb30db85e4d7fbdd8731ab0c5f394ab9a6a67a5b1e4\" returns successfully" Nov 12 17:40:05.145975 containerd[1535]: time="2024-11-12T17:40:05.145920791Z" level=info msg="shim disconnected" id=6d81be8a76c855ab81998eb30db85e4d7fbdd8731ab0c5f394ab9a6a67a5b1e4 namespace=k8s.io Nov 12 17:40:05.146497 containerd[1535]: time="2024-11-12T17:40:05.146287510Z" level=warning msg="cleaning up after shim disconnected" id=6d81be8a76c855ab81998eb30db85e4d7fbdd8731ab0c5f394ab9a6a67a5b1e4 namespace=k8s.io Nov 12 17:40:05.146497 containerd[1535]: time="2024-11-12T17:40:05.146328190Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 12 17:40:05.816528 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6d81be8a76c855ab81998eb30db85e4d7fbdd8731ab0c5f394ab9a6a67a5b1e4-rootfs.mount: Deactivated successfully. Nov 12 17:40:05.970427 kubelet[2708]: E1112 17:40:05.970261 2708 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 17:40:05.980018 containerd[1535]: time="2024-11-12T17:40:05.979234842Z" level=info msg="CreateContainer within sandbox \"7d9fa8ecc881617269ccb6e037e863944cc6dd241ac470ee62e3f6acf5bfbc9f\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Nov 12 17:40:05.992724 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2139590676.mount: Deactivated successfully. 
Nov 12 17:40:05.996954 containerd[1535]: time="2024-11-12T17:40:05.996325633Z" level=info msg="CreateContainer within sandbox \"7d9fa8ecc881617269ccb6e037e863944cc6dd241ac470ee62e3f6acf5bfbc9f\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"c6f7390e13444e944ac28adced4c0951130850fbc65f0ea87249fd45ace1232a\"" Nov 12 17:40:05.999472 containerd[1535]: time="2024-11-12T17:40:05.998653906Z" level=info msg="StartContainer for \"c6f7390e13444e944ac28adced4c0951130850fbc65f0ea87249fd45ace1232a\"" Nov 12 17:40:06.041938 containerd[1535]: time="2024-11-12T17:40:06.041847150Z" level=info msg="StartContainer for \"c6f7390e13444e944ac28adced4c0951130850fbc65f0ea87249fd45ace1232a\" returns successfully" Nov 12 17:40:06.062469 containerd[1535]: time="2024-11-12T17:40:06.062216815Z" level=info msg="shim disconnected" id=c6f7390e13444e944ac28adced4c0951130850fbc65f0ea87249fd45ace1232a namespace=k8s.io Nov 12 17:40:06.062469 containerd[1535]: time="2024-11-12T17:40:06.062279055Z" level=warning msg="cleaning up after shim disconnected" id=c6f7390e13444e944ac28adced4c0951130850fbc65f0ea87249fd45ace1232a namespace=k8s.io Nov 12 17:40:06.062469 containerd[1535]: time="2024-11-12T17:40:06.062287815Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 12 17:40:06.816632 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c6f7390e13444e944ac28adced4c0951130850fbc65f0ea87249fd45ace1232a-rootfs.mount: Deactivated successfully. Nov 12 17:40:06.975245 kubelet[2708]: E1112 17:40:06.974831 2708 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 17:40:06.978659 containerd[1535]: time="2024-11-12T17:40:06.978586472Z" level=info msg="CreateContainer within sandbox \"7d9fa8ecc881617269ccb6e037e863944cc6dd241ac470ee62e3f6acf5bfbc9f\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Nov 12 17:40:06.997903 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2022723294.mount: Deactivated successfully. 
Nov 12 17:40:07.021125 containerd[1535]: time="2024-11-12T17:40:07.021073522Z" level=info msg="CreateContainer within sandbox \"7d9fa8ecc881617269ccb6e037e863944cc6dd241ac470ee62e3f6acf5bfbc9f\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"67ef917e1f17b5b10cc00d86715520218d7c03df703fcde5b123d0538a5f3593\"" Nov 12 17:40:07.021983 containerd[1535]: time="2024-11-12T17:40:07.021556120Z" level=info msg="StartContainer for \"67ef917e1f17b5b10cc00d86715520218d7c03df703fcde5b123d0538a5f3593\"" Nov 12 17:40:07.100918 containerd[1535]: time="2024-11-12T17:40:07.100787001Z" level=info msg="StartContainer for \"67ef917e1f17b5b10cc00d86715520218d7c03df703fcde5b123d0538a5f3593\" returns successfully" Nov 12 17:40:07.297467 kubelet[2708]: I1112 17:40:07.297424 2708 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Nov 12 17:40:07.327400 kubelet[2708]: I1112 17:40:07.326930 2708 topology_manager.go:215] "Topology Admit Handler" podUID="fc524683-e9bf-463f-9659-26b2f97da241" podNamespace="kube-system" podName="coredns-76f75df574-zz7mv" Nov 12 17:40:07.329741 kubelet[2708]: I1112 17:40:07.327929 2708 topology_manager.go:215] "Topology Admit Handler" podUID="2bb0f94f-431e-4d22-8be0-21d99907ae97" podNamespace="kube-system" podName="coredns-76f75df574-2s9r7" Nov 12 17:40:07.473573 kubelet[2708]: I1112 17:40:07.473460 2708 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2bb0f94f-431e-4d22-8be0-21d99907ae97-config-volume\") pod \"coredns-76f75df574-2s9r7\" (UID: \"2bb0f94f-431e-4d22-8be0-21d99907ae97\") " pod="kube-system/coredns-76f75df574-2s9r7" Nov 12 17:40:07.473573 kubelet[2708]: I1112 17:40:07.473507 2708 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fc524683-e9bf-463f-9659-26b2f97da241-config-volume\") pod \"coredns-76f75df574-zz7mv\" (UID: \"fc524683-e9bf-463f-9659-26b2f97da241\") " pod="kube-system/coredns-76f75df574-zz7mv" Nov 12 17:40:07.473769 kubelet[2708]: I1112 17:40:07.473605 2708 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nkhzl\" (UniqueName: \"kubernetes.io/projected/2bb0f94f-431e-4d22-8be0-21d99907ae97-kube-api-access-nkhzl\") pod \"coredns-76f75df574-2s9r7\" (UID: \"2bb0f94f-431e-4d22-8be0-21d99907ae97\") " pod="kube-system/coredns-76f75df574-2s9r7" Nov 12 17:40:07.473769 kubelet[2708]: I1112 17:40:07.473673 2708 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-766zs\" (UniqueName: \"kubernetes.io/projected/fc524683-e9bf-463f-9659-26b2f97da241-kube-api-access-766zs\") pod \"coredns-76f75df574-zz7mv\" (UID: \"fc524683-e9bf-463f-9659-26b2f97da241\") " pod="kube-system/coredns-76f75df574-zz7mv" Nov 12 17:40:07.631215 kubelet[2708]: E1112 17:40:07.631181 2708 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 17:40:07.632551 kubelet[2708]: E1112 17:40:07.632124 2708 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 17:40:07.632654 containerd[1535]: time="2024-11-12T17:40:07.632034982Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-76f75df574-2s9r7,Uid:2bb0f94f-431e-4d22-8be0-21d99907ae97,Namespace:kube-system,Attempt:0,}" Nov 12 17:40:07.633061 containerd[1535]: time="2024-11-12T17:40:07.633026860Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-zz7mv,Uid:fc524683-e9bf-463f-9659-26b2f97da241,Namespace:kube-system,Attempt:0,}" Nov 12 17:40:07.978952 kubelet[2708]: E1112 17:40:07.978923 2708 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 17:40:08.981084 kubelet[2708]: E1112 17:40:08.981042 2708 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 17:40:09.309539 systemd[1]: Started sshd@7-10.0.0.22:22-10.0.0.1:41896.service - OpenSSH per-connection server daemon (10.0.0.1:41896). Nov 12 17:40:09.339287 systemd-networkd[1234]: cilium_host: Link UP Nov 12 17:40:09.339422 systemd-networkd[1234]: cilium_net: Link UP Nov 12 17:40:09.339547 systemd-networkd[1234]: cilium_net: Gained carrier Nov 12 17:40:09.339659 systemd-networkd[1234]: cilium_host: Gained carrier Nov 12 17:40:09.365555 sshd[3555]: Accepted publickey for core from 10.0.0.1 port 41896 ssh2: RSA SHA256:0/Njp3Vk1MHv0WcCO9/UA+beq4MlL3BRl9mBP4xwGAg Nov 12 17:40:09.367547 sshd[3555]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 17:40:09.373424 systemd-logind[1515]: New session 8 of user core. Nov 12 17:40:09.376973 systemd[1]: Started session-8.scope - Session 8 of User core. Nov 12 17:40:09.447566 systemd-networkd[1234]: cilium_vxlan: Link UP Nov 12 17:40:09.447577 systemd-networkd[1234]: cilium_vxlan: Gained carrier Nov 12 17:40:09.518692 sshd[3555]: pam_unix(sshd:session): session closed for user core Nov 12 17:40:09.523526 systemd[1]: sshd@7-10.0.0.22:22-10.0.0.1:41896.service: Deactivated successfully. Nov 12 17:40:09.526151 systemd[1]: session-8.scope: Deactivated successfully. Nov 12 17:40:09.526193 systemd-logind[1515]: Session 8 logged out. Waiting for processes to exit. Nov 12 17:40:09.527619 systemd-logind[1515]: Removed session 8. 
Nov 12 17:40:09.592875 systemd-networkd[1234]: cilium_net: Gained IPv6LL Nov 12 17:40:09.632847 systemd-networkd[1234]: cilium_host: Gained IPv6LL Nov 12 17:40:09.752895 kernel: NET: Registered PF_ALG protocol family Nov 12 17:40:09.982883 kubelet[2708]: E1112 17:40:09.982838 2708 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 17:40:10.356949 systemd-networkd[1234]: lxc_health: Link UP Nov 12 17:40:10.361723 systemd-networkd[1234]: lxc_health: Gained carrier Nov 12 17:40:10.801039 systemd-networkd[1234]: lxc4f2806c13011: Link UP Nov 12 17:40:10.802722 kernel: eth0: renamed from tmp3a698 Nov 12 17:40:10.807887 kernel: eth0: renamed from tmpcce1b Nov 12 17:40:10.813964 systemd-networkd[1234]: lxcdbd0b75f13a8: Link UP Nov 12 17:40:10.815768 systemd-networkd[1234]: lxc4f2806c13011: Gained carrier Nov 12 17:40:10.815942 systemd-networkd[1234]: lxcdbd0b75f13a8: Gained carrier Nov 12 17:40:10.875416 kubelet[2708]: I1112 17:40:10.875379 2708 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-d8mzn" podStartSLOduration=9.983335377 podStartE2EDuration="19.875332073s" podCreationTimestamp="2024-11-12 17:39:51 +0000 UTC" firstStartedPulling="2024-11-12 17:39:52.898876961 +0000 UTC m=+15.109840818" lastFinishedPulling="2024-11-12 17:40:02.790873657 +0000 UTC m=+25.001837514" observedRunningTime="2024-11-12 17:40:08.011297828 +0000 UTC m=+30.222261685" watchObservedRunningTime="2024-11-12 17:40:10.875332073 +0000 UTC m=+33.086295930" Nov 12 17:40:10.985299 kubelet[2708]: E1112 17:40:10.984701 2708 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 17:40:11.138188 systemd-networkd[1234]: cilium_vxlan: Gained IPv6LL Nov 12 17:40:11.986002 kubelet[2708]: E1112 17:40:11.985962 2708 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 17:40:12.033117 systemd-networkd[1234]: lxc_health: Gained IPv6LL Nov 12 17:40:12.417186 systemd-networkd[1234]: lxc4f2806c13011: Gained IPv6LL Nov 12 17:40:12.417467 systemd-networkd[1234]: lxcdbd0b75f13a8: Gained IPv6LL Nov 12 17:40:12.987164 kubelet[2708]: E1112 17:40:12.987129 2708 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 17:40:14.447125 containerd[1535]: time="2024-11-12T17:40:14.447034818Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 17:40:14.448018 containerd[1535]: time="2024-11-12T17:40:14.447883942Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 17:40:14.448018 containerd[1535]: time="2024-11-12T17:40:14.447904262Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 17:40:14.448018 containerd[1535]: time="2024-11-12T17:40:14.447996462Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 17:40:14.449681 containerd[1535]: time="2024-11-12T17:40:14.449509788Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 17:40:14.450000 containerd[1535]: time="2024-11-12T17:40:14.449810350Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 17:40:14.450000 containerd[1535]: time="2024-11-12T17:40:14.449833870Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 17:40:14.454246 containerd[1535]: time="2024-11-12T17:40:14.451321276Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 17:40:14.466258 systemd[1]: run-containerd-runc-k8s.io-3a698d9b16ba81c7e8823f23161b2bc7054fcf2370c824d890e14d7f50052fa0-runc.20jP38.mount: Deactivated successfully. Nov 12 17:40:14.478278 systemd-resolved[1438]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 12 17:40:14.482044 systemd-resolved[1438]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 12 17:40:14.501730 containerd[1535]: time="2024-11-12T17:40:14.501522121Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-zz7mv,Uid:fc524683-e9bf-463f-9659-26b2f97da241,Namespace:kube-system,Attempt:0,} returns sandbox id \"3a698d9b16ba81c7e8823f23161b2bc7054fcf2370c824d890e14d7f50052fa0\"" Nov 12 17:40:14.502421 containerd[1535]: time="2024-11-12T17:40:14.502309284Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-2s9r7,Uid:2bb0f94f-431e-4d22-8be0-21d99907ae97,Namespace:kube-system,Attempt:0,} returns sandbox id \"cce1bc2e40ce3d30d47acb17ab53f26233ec1d182e62713a16b277a1f5738368\"" Nov 12 17:40:14.502499 kubelet[2708]: E1112 17:40:14.502372 2708 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 17:40:14.503222 kubelet[2708]: E1112 17:40:14.503200 2708 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 17:40:14.504648 containerd[1535]: time="2024-11-12T17:40:14.504602333Z" level=info msg="CreateContainer within sandbox \"3a698d9b16ba81c7e8823f23161b2bc7054fcf2370c824d890e14d7f50052fa0\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 12 17:40:14.522299 containerd[1535]: time="2024-11-12T17:40:14.522256565Z" level=info msg="CreateContainer within sandbox \"3a698d9b16ba81c7e8823f23161b2bc7054fcf2370c824d890e14d7f50052fa0\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"7b625451cc251b7aed5dc1deabc31f655f98b42937028d7807f2ae84e2fc8077\"" Nov 12 17:40:14.522988 containerd[1535]: time="2024-11-12T17:40:14.522954128Z" level=info msg="StartContainer for \"7b625451cc251b7aed5dc1deabc31f655f98b42937028d7807f2ae84e2fc8077\"" Nov 12 17:40:14.527007 systemd[1]: Started sshd@8-10.0.0.22:22-10.0.0.1:42892.service - OpenSSH per-connection server daemon (10.0.0.1:42892). 
Nov 12 17:40:14.536207 containerd[1535]: time="2024-11-12T17:40:14.536170542Z" level=info msg="CreateContainer within sandbox \"cce1bc2e40ce3d30d47acb17ab53f26233ec1d182e62713a16b277a1f5738368\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 12 17:40:14.548859 containerd[1535]: time="2024-11-12T17:40:14.548817474Z" level=info msg="CreateContainer within sandbox \"cce1bc2e40ce3d30d47acb17ab53f26233ec1d182e62713a16b277a1f5738368\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"c05d995afa4662c8ef0680ff27d65223a436b9290f9fe30ca98cb6807bc95b96\"" Nov 12 17:40:14.549485 containerd[1535]: time="2024-11-12T17:40:14.549404716Z" level=info msg="StartContainer for \"c05d995afa4662c8ef0680ff27d65223a436b9290f9fe30ca98cb6807bc95b96\"" Nov 12 17:40:14.566643 sshd[4036]: Accepted publickey for core from 10.0.0.1 port 42892 ssh2: RSA SHA256:0/Njp3Vk1MHv0WcCO9/UA+beq4MlL3BRl9mBP4xwGAg Nov 12 17:40:14.568302 sshd[4036]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 17:40:14.578427 systemd-logind[1515]: New session 9 of user core. Nov 12 17:40:14.580459 systemd[1]: Started session-9.scope - Session 9 of User core. Nov 12 17:40:14.590773 containerd[1535]: time="2024-11-12T17:40:14.590247763Z" level=info msg="StartContainer for \"7b625451cc251b7aed5dc1deabc31f655f98b42937028d7807f2ae84e2fc8077\" returns successfully" Nov 12 17:40:14.608578 containerd[1535]: time="2024-11-12T17:40:14.608531958Z" level=info msg="StartContainer for \"c05d995afa4662c8ef0680ff27d65223a436b9290f9fe30ca98cb6807bc95b96\" returns successfully" Nov 12 17:40:14.736937 sshd[4036]: pam_unix(sshd:session): session closed for user core Nov 12 17:40:14.741252 systemd[1]: sshd@8-10.0.0.22:22-10.0.0.1:42892.service: Deactivated successfully. Nov 12 17:40:14.744693 systemd[1]: session-9.scope: Deactivated successfully. Nov 12 17:40:14.746516 systemd-logind[1515]: Session 9 logged out. Waiting for processes to exit. Nov 12 17:40:14.747623 systemd-logind[1515]: Removed session 9. 
Nov 12 17:40:14.992852 kubelet[2708]: E1112 17:40:14.992427 2708 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 17:40:14.993898 kubelet[2708]: E1112 17:40:14.993833 2708 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 17:40:15.008021 kubelet[2708]: I1112 17:40:15.007931 2708 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-zz7mv" podStartSLOduration=23.007889028 podStartE2EDuration="23.007889028s" podCreationTimestamp="2024-11-12 17:39:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-11-12 17:40:15.007051184 +0000 UTC m=+37.218015041" watchObservedRunningTime="2024-11-12 17:40:15.007889028 +0000 UTC m=+37.218852845" Nov 12 17:40:15.021021 kubelet[2708]: I1112 17:40:15.020564 2708 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-2s9r7" podStartSLOduration=23.020519798 podStartE2EDuration="23.020519798s" podCreationTimestamp="2024-11-12 17:39:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-11-12 17:40:15.020413278 +0000 UTC m=+37.231377535" watchObservedRunningTime="2024-11-12 17:40:15.020519798 +0000 UTC m=+37.231483655" Nov 12 17:40:15.452619 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3238085280.mount: Deactivated successfully. Nov 12 17:40:15.995623 kubelet[2708]: E1112 17:40:15.995596 2708 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 17:40:16.997180 kubelet[2708]: E1112 17:40:16.997144 2708 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 17:40:17.633663 kubelet[2708]: E1112 17:40:17.633540 2708 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 17:40:17.999375 kubelet[2708]: E1112 17:40:17.999327 2708 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 17:40:19.754947 systemd[1]: Started sshd@9-10.0.0.22:22-10.0.0.1:42908.service - OpenSSH per-connection server daemon (10.0.0.1:42908). Nov 12 17:40:19.794186 sshd[4137]: Accepted publickey for core from 10.0.0.1 port 42908 ssh2: RSA SHA256:0/Njp3Vk1MHv0WcCO9/UA+beq4MlL3BRl9mBP4xwGAg Nov 12 17:40:19.795664 sshd[4137]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 17:40:19.799383 systemd-logind[1515]: New session 10 of user core. Nov 12 17:40:19.816991 systemd[1]: Started session-10.scope - Session 10 of User core. Nov 12 17:40:19.929400 sshd[4137]: pam_unix(sshd:session): session closed for user core Nov 12 17:40:19.932546 systemd[1]: sshd@9-10.0.0.22:22-10.0.0.1:42908.service: Deactivated successfully. Nov 12 17:40:19.934470 systemd[1]: session-10.scope: Deactivated successfully. 
Nov 12 17:40:19.934569 systemd-logind[1515]: Session 10 logged out. Waiting for processes to exit. Nov 12 17:40:19.936331 systemd-logind[1515]: Removed session 10. Nov 12 17:40:24.942984 systemd[1]: Started sshd@10-10.0.0.22:22-10.0.0.1:35040.service - OpenSSH per-connection server daemon (10.0.0.1:35040). Nov 12 17:40:24.982246 sshd[4156]: Accepted publickey for core from 10.0.0.1 port 35040 ssh2: RSA SHA256:0/Njp3Vk1MHv0WcCO9/UA+beq4MlL3BRl9mBP4xwGAg Nov 12 17:40:24.983549 sshd[4156]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 17:40:24.987995 systemd-logind[1515]: New session 11 of user core. Nov 12 17:40:24.994007 systemd[1]: Started session-11.scope - Session 11 of User core. Nov 12 17:40:25.105833 sshd[4156]: pam_unix(sshd:session): session closed for user core Nov 12 17:40:25.117022 systemd[1]: Started sshd@11-10.0.0.22:22-10.0.0.1:35050.service - OpenSSH per-connection server daemon (10.0.0.1:35050). Nov 12 17:40:25.117471 systemd[1]: sshd@10-10.0.0.22:22-10.0.0.1:35040.service: Deactivated successfully. Nov 12 17:40:25.120791 systemd-logind[1515]: Session 11 logged out. Waiting for processes to exit. Nov 12 17:40:25.120883 systemd[1]: session-11.scope: Deactivated successfully. Nov 12 17:40:25.122485 systemd-logind[1515]: Removed session 11. Nov 12 17:40:25.153827 sshd[4169]: Accepted publickey for core from 10.0.0.1 port 35050 ssh2: RSA SHA256:0/Njp3Vk1MHv0WcCO9/UA+beq4MlL3BRl9mBP4xwGAg Nov 12 17:40:25.155215 sshd[4169]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 17:40:25.161999 systemd-logind[1515]: New session 12 of user core. Nov 12 17:40:25.167974 systemd[1]: Started session-12.scope - Session 12 of User core. Nov 12 17:40:25.330314 sshd[4169]: pam_unix(sshd:session): session closed for user core Nov 12 17:40:25.339959 systemd[1]: Started sshd@12-10.0.0.22:22-10.0.0.1:35054.service - OpenSSH per-connection server daemon (10.0.0.1:35054). Nov 12 17:40:25.340848 systemd[1]: sshd@11-10.0.0.22:22-10.0.0.1:35050.service: Deactivated successfully. Nov 12 17:40:25.344661 systemd[1]: session-12.scope: Deactivated successfully. Nov 12 17:40:25.349236 systemd-logind[1515]: Session 12 logged out. Waiting for processes to exit. Nov 12 17:40:25.352633 systemd-logind[1515]: Removed session 12. Nov 12 17:40:25.392507 sshd[4182]: Accepted publickey for core from 10.0.0.1 port 35054 ssh2: RSA SHA256:0/Njp3Vk1MHv0WcCO9/UA+beq4MlL3BRl9mBP4xwGAg Nov 12 17:40:25.393971 sshd[4182]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 17:40:25.400165 systemd-logind[1515]: New session 13 of user core. Nov 12 17:40:25.410013 systemd[1]: Started session-13.scope - Session 13 of User core. Nov 12 17:40:25.539466 sshd[4182]: pam_unix(sshd:session): session closed for user core Nov 12 17:40:25.542950 systemd[1]: sshd@12-10.0.0.22:22-10.0.0.1:35054.service: Deactivated successfully. Nov 12 17:40:25.545034 systemd-logind[1515]: Session 13 logged out. Waiting for processes to exit. Nov 12 17:40:25.545274 systemd[1]: session-13.scope: Deactivated successfully. Nov 12 17:40:25.546585 systemd-logind[1515]: Removed session 13. Nov 12 17:40:30.553983 systemd[1]: Started sshd@13-10.0.0.22:22-10.0.0.1:35060.service - OpenSSH per-connection server daemon (10.0.0.1:35060). 
Nov 12 17:40:30.593193 sshd[4200]: Accepted publickey for core from 10.0.0.1 port 35060 ssh2: RSA SHA256:0/Njp3Vk1MHv0WcCO9/UA+beq4MlL3BRl9mBP4xwGAg Nov 12 17:40:30.593692 sshd[4200]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 17:40:30.598404 systemd-logind[1515]: New session 14 of user core. Nov 12 17:40:30.606055 systemd[1]: Started session-14.scope - Session 14 of User core. Nov 12 17:40:30.742383 sshd[4200]: pam_unix(sshd:session): session closed for user core Nov 12 17:40:30.747009 systemd[1]: sshd@13-10.0.0.22:22-10.0.0.1:35060.service: Deactivated successfully. Nov 12 17:40:30.749279 systemd[1]: session-14.scope: Deactivated successfully. Nov 12 17:40:30.749392 systemd-logind[1515]: Session 14 logged out. Waiting for processes to exit. Nov 12 17:40:30.750572 systemd-logind[1515]: Removed session 14. Nov 12 17:40:35.757991 systemd[1]: Started sshd@14-10.0.0.22:22-10.0.0.1:37950.service - OpenSSH per-connection server daemon (10.0.0.1:37950). Nov 12 17:40:35.792782 sshd[4216]: Accepted publickey for core from 10.0.0.1 port 37950 ssh2: RSA SHA256:0/Njp3Vk1MHv0WcCO9/UA+beq4MlL3BRl9mBP4xwGAg Nov 12 17:40:35.794185 sshd[4216]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 17:40:35.798528 systemd-logind[1515]: New session 15 of user core. Nov 12 17:40:35.802994 systemd[1]: Started session-15.scope - Session 15 of User core. Nov 12 17:40:35.916543 sshd[4216]: pam_unix(sshd:session): session closed for user core Nov 12 17:40:35.925953 systemd[1]: Started sshd@15-10.0.0.22:22-10.0.0.1:37966.service - OpenSSH per-connection server daemon (10.0.0.1:37966). Nov 12 17:40:35.926331 systemd[1]: sshd@14-10.0.0.22:22-10.0.0.1:37950.service: Deactivated successfully. Nov 12 17:40:35.929148 systemd[1]: session-15.scope: Deactivated successfully. Nov 12 17:40:35.929699 systemd-logind[1515]: Session 15 logged out. Waiting for processes to exit. Nov 12 17:40:35.931776 systemd-logind[1515]: Removed session 15. Nov 12 17:40:35.960616 sshd[4228]: Accepted publickey for core from 10.0.0.1 port 37966 ssh2: RSA SHA256:0/Njp3Vk1MHv0WcCO9/UA+beq4MlL3BRl9mBP4xwGAg Nov 12 17:40:35.962052 sshd[4228]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 17:40:35.966152 systemd-logind[1515]: New session 16 of user core. Nov 12 17:40:35.973967 systemd[1]: Started session-16.scope - Session 16 of User core. Nov 12 17:40:36.199514 sshd[4228]: pam_unix(sshd:session): session closed for user core Nov 12 17:40:36.208010 systemd[1]: Started sshd@16-10.0.0.22:22-10.0.0.1:37974.service - OpenSSH per-connection server daemon (10.0.0.1:37974). Nov 12 17:40:36.208449 systemd[1]: sshd@15-10.0.0.22:22-10.0.0.1:37966.service: Deactivated successfully. Nov 12 17:40:36.212136 systemd-logind[1515]: Session 16 logged out. Waiting for processes to exit. Nov 12 17:40:36.213076 systemd[1]: session-16.scope: Deactivated successfully. Nov 12 17:40:36.214753 systemd-logind[1515]: Removed session 16. Nov 12 17:40:36.258184 sshd[4241]: Accepted publickey for core from 10.0.0.1 port 37974 ssh2: RSA SHA256:0/Njp3Vk1MHv0WcCO9/UA+beq4MlL3BRl9mBP4xwGAg Nov 12 17:40:36.259620 sshd[4241]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 17:40:36.263690 systemd-logind[1515]: New session 17 of user core. Nov 12 17:40:36.274949 systemd[1]: Started session-17.scope - Session 17 of User core. 
Nov 12 17:40:37.497868 sshd[4241]: pam_unix(sshd:session): session closed for user core Nov 12 17:40:37.509174 systemd[1]: Started sshd@17-10.0.0.22:22-10.0.0.1:37984.service - OpenSSH per-connection server daemon (10.0.0.1:37984). Nov 12 17:40:37.511113 systemd[1]: sshd@16-10.0.0.22:22-10.0.0.1:37974.service: Deactivated successfully. Nov 12 17:40:37.514785 systemd[1]: session-17.scope: Deactivated successfully. Nov 12 17:40:37.519755 systemd-logind[1515]: Session 17 logged out. Waiting for processes to exit. Nov 12 17:40:37.520805 systemd-logind[1515]: Removed session 17. Nov 12 17:40:37.550479 sshd[4265]: Accepted publickey for core from 10.0.0.1 port 37984 ssh2: RSA SHA256:0/Njp3Vk1MHv0WcCO9/UA+beq4MlL3BRl9mBP4xwGAg Nov 12 17:40:37.551787 sshd[4265]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 17:40:37.555426 systemd-logind[1515]: New session 18 of user core. Nov 12 17:40:37.567942 systemd[1]: Started session-18.scope - Session 18 of User core. Nov 12 17:40:37.800734 sshd[4265]: pam_unix(sshd:session): session closed for user core Nov 12 17:40:37.807978 systemd[1]: Started sshd@18-10.0.0.22:22-10.0.0.1:37988.service - OpenSSH per-connection server daemon (10.0.0.1:37988). Nov 12 17:40:37.808385 systemd[1]: sshd@17-10.0.0.22:22-10.0.0.1:37984.service: Deactivated successfully. Nov 12 17:40:37.812905 systemd[1]: session-18.scope: Deactivated successfully. Nov 12 17:40:37.815202 systemd-logind[1515]: Session 18 logged out. Waiting for processes to exit. Nov 12 17:40:37.822287 systemd-logind[1515]: Removed session 18. Nov 12 17:40:37.850311 sshd[4277]: Accepted publickey for core from 10.0.0.1 port 37988 ssh2: RSA SHA256:0/Njp3Vk1MHv0WcCO9/UA+beq4MlL3BRl9mBP4xwGAg Nov 12 17:40:37.852149 sshd[4277]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 17:40:37.857054 systemd-logind[1515]: New session 19 of user core. Nov 12 17:40:37.868284 systemd[1]: Started session-19.scope - Session 19 of User core. Nov 12 17:40:37.983937 sshd[4277]: pam_unix(sshd:session): session closed for user core Nov 12 17:40:37.987807 systemd-logind[1515]: Session 19 logged out. Waiting for processes to exit. Nov 12 17:40:37.988061 systemd[1]: sshd@18-10.0.0.22:22-10.0.0.1:37988.service: Deactivated successfully. Nov 12 17:40:37.989984 systemd[1]: session-19.scope: Deactivated successfully. Nov 12 17:40:37.991903 systemd-logind[1515]: Removed session 19. Nov 12 17:40:42.996047 systemd[1]: Started sshd@19-10.0.0.22:22-10.0.0.1:52488.service - OpenSSH per-connection server daemon (10.0.0.1:52488). Nov 12 17:40:43.031289 sshd[4301]: Accepted publickey for core from 10.0.0.1 port 52488 ssh2: RSA SHA256:0/Njp3Vk1MHv0WcCO9/UA+beq4MlL3BRl9mBP4xwGAg Nov 12 17:40:43.032725 sshd[4301]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 17:40:43.036638 systemd-logind[1515]: New session 20 of user core. Nov 12 17:40:43.044002 systemd[1]: Started session-20.scope - Session 20 of User core. Nov 12 17:40:43.154420 sshd[4301]: pam_unix(sshd:session): session closed for user core Nov 12 17:40:43.157338 systemd[1]: sshd@19-10.0.0.22:22-10.0.0.1:52488.service: Deactivated successfully. Nov 12 17:40:43.160910 systemd-logind[1515]: Session 20 logged out. Waiting for processes to exit. Nov 12 17:40:43.161456 systemd[1]: session-20.scope: Deactivated successfully. Nov 12 17:40:43.163582 systemd-logind[1515]: Removed session 20. 
Nov 12 17:40:48.170933 systemd[1]: Started sshd@20-10.0.0.22:22-10.0.0.1:52500.service - OpenSSH per-connection server daemon (10.0.0.1:52500). Nov 12 17:40:48.205059 sshd[4317]: Accepted publickey for core from 10.0.0.1 port 52500 ssh2: RSA SHA256:0/Njp3Vk1MHv0WcCO9/UA+beq4MlL3BRl9mBP4xwGAg Nov 12 17:40:48.206321 sshd[4317]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 17:40:48.210023 systemd-logind[1515]: New session 21 of user core. Nov 12 17:40:48.218922 systemd[1]: Started session-21.scope - Session 21 of User core. Nov 12 17:40:48.329194 sshd[4317]: pam_unix(sshd:session): session closed for user core Nov 12 17:40:48.332577 systemd[1]: sshd@20-10.0.0.22:22-10.0.0.1:52500.service: Deactivated successfully. Nov 12 17:40:48.334790 systemd[1]: session-21.scope: Deactivated successfully. Nov 12 17:40:48.334797 systemd-logind[1515]: Session 21 logged out. Waiting for processes to exit. Nov 12 17:40:48.336207 systemd-logind[1515]: Removed session 21. Nov 12 17:40:53.338961 systemd[1]: Started sshd@21-10.0.0.22:22-10.0.0.1:41730.service - OpenSSH per-connection server daemon (10.0.0.1:41730). Nov 12 17:40:53.372992 sshd[4334]: Accepted publickey for core from 10.0.0.1 port 41730 ssh2: RSA SHA256:0/Njp3Vk1MHv0WcCO9/UA+beq4MlL3BRl9mBP4xwGAg Nov 12 17:40:53.374106 sshd[4334]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 17:40:53.377763 systemd-logind[1515]: New session 22 of user core. Nov 12 17:40:53.387933 systemd[1]: Started session-22.scope - Session 22 of User core. Nov 12 17:40:53.493857 sshd[4334]: pam_unix(sshd:session): session closed for user core Nov 12 17:40:53.503924 systemd[1]: Started sshd@22-10.0.0.22:22-10.0.0.1:41734.service - OpenSSH per-connection server daemon (10.0.0.1:41734). Nov 12 17:40:53.504288 systemd[1]: sshd@21-10.0.0.22:22-10.0.0.1:41730.service: Deactivated successfully. Nov 12 17:40:53.506977 systemd[1]: session-22.scope: Deactivated successfully. Nov 12 17:40:53.507781 systemd-logind[1515]: Session 22 logged out. Waiting for processes to exit. Nov 12 17:40:53.509037 systemd-logind[1515]: Removed session 22. Nov 12 17:40:53.538106 sshd[4347]: Accepted publickey for core from 10.0.0.1 port 41734 ssh2: RSA SHA256:0/Njp3Vk1MHv0WcCO9/UA+beq4MlL3BRl9mBP4xwGAg Nov 12 17:40:53.539504 sshd[4347]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 17:40:53.543769 systemd-logind[1515]: New session 23 of user core. Nov 12 17:40:53.555955 systemd[1]: Started session-23.scope - Session 23 of User core. 
Nov 12 17:40:55.837762 containerd[1535]: time="2024-11-12T17:40:55.836128927Z" level=info msg="StopContainer for \"1459c6c93d79c90e78be666c0e9426e9272fed5d1a5fbc15a6663c668bdc9902\" with timeout 30 (s)" Nov 12 17:40:55.842989 containerd[1535]: time="2024-11-12T17:40:55.842580776Z" level=info msg="Stop container \"1459c6c93d79c90e78be666c0e9426e9272fed5d1a5fbc15a6663c668bdc9902\" with signal terminated" Nov 12 17:40:55.864985 containerd[1535]: time="2024-11-12T17:40:55.864867329Z" level=info msg="StopContainer for \"67ef917e1f17b5b10cc00d86715520218d7c03df703fcde5b123d0538a5f3593\" with timeout 2 (s)" Nov 12 17:40:55.865236 containerd[1535]: time="2024-11-12T17:40:55.865115089Z" level=info msg="Stop container \"67ef917e1f17b5b10cc00d86715520218d7c03df703fcde5b123d0538a5f3593\" with signal terminated" Nov 12 17:40:55.869745 containerd[1535]: time="2024-11-12T17:40:55.869031535Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Nov 12 17:40:55.870643 systemd-networkd[1234]: lxc_health: Link DOWN Nov 12 17:40:55.870650 systemd-networkd[1234]: lxc_health: Lost carrier Nov 12 17:40:55.892462 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1459c6c93d79c90e78be666c0e9426e9272fed5d1a5fbc15a6663c668bdc9902-rootfs.mount: Deactivated successfully. Nov 12 17:40:55.898907 containerd[1535]: time="2024-11-12T17:40:55.898835659Z" level=info msg="shim disconnected" id=1459c6c93d79c90e78be666c0e9426e9272fed5d1a5fbc15a6663c668bdc9902 namespace=k8s.io Nov 12 17:40:55.898907 containerd[1535]: time="2024-11-12T17:40:55.898897539Z" level=warning msg="cleaning up after shim disconnected" id=1459c6c93d79c90e78be666c0e9426e9272fed5d1a5fbc15a6663c668bdc9902 namespace=k8s.io Nov 12 17:40:55.898907 containerd[1535]: time="2024-11-12T17:40:55.898906779Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 12 17:40:55.909440 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-67ef917e1f17b5b10cc00d86715520218d7c03df703fcde5b123d0538a5f3593-rootfs.mount: Deactivated successfully. 
Nov 12 17:40:55.914179 containerd[1535]: time="2024-11-12T17:40:55.913734321Z" level=info msg="shim disconnected" id=67ef917e1f17b5b10cc00d86715520218d7c03df703fcde5b123d0538a5f3593 namespace=k8s.io Nov 12 17:40:55.914179 containerd[1535]: time="2024-11-12T17:40:55.913787121Z" level=warning msg="cleaning up after shim disconnected" id=67ef917e1f17b5b10cc00d86715520218d7c03df703fcde5b123d0538a5f3593 namespace=k8s.io Nov 12 17:40:55.914179 containerd[1535]: time="2024-11-12T17:40:55.913795761Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 12 17:40:55.939661 containerd[1535]: time="2024-11-12T17:40:55.939619199Z" level=info msg="StopContainer for \"1459c6c93d79c90e78be666c0e9426e9272fed5d1a5fbc15a6663c668bdc9902\" returns successfully" Nov 12 17:40:55.940564 containerd[1535]: time="2024-11-12T17:40:55.940413040Z" level=info msg="StopPodSandbox for \"ad1ea382f75ae9ae33f597493fd57ec2588140654da1354a0f27a8c0c77a90a5\"" Nov 12 17:40:55.940564 containerd[1535]: time="2024-11-12T17:40:55.940452680Z" level=info msg="Container to stop \"1459c6c93d79c90e78be666c0e9426e9272fed5d1a5fbc15a6663c668bdc9902\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Nov 12 17:40:55.942301 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ad1ea382f75ae9ae33f597493fd57ec2588140654da1354a0f27a8c0c77a90a5-shm.mount: Deactivated successfully. Nov 12 17:40:55.943106 containerd[1535]: time="2024-11-12T17:40:55.943074604Z" level=info msg="StopContainer for \"67ef917e1f17b5b10cc00d86715520218d7c03df703fcde5b123d0538a5f3593\" returns successfully" Nov 12 17:40:55.943944 containerd[1535]: time="2024-11-12T17:40:55.943914325Z" level=info msg="StopPodSandbox for \"7d9fa8ecc881617269ccb6e037e863944cc6dd241ac470ee62e3f6acf5bfbc9f\"" Nov 12 17:40:55.944020 containerd[1535]: time="2024-11-12T17:40:55.944003365Z" level=info msg="Container to stop \"5a517554a77e4f868726c2c5bdf06adb2906be2c913bde50456f1322013157b9\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Nov 12 17:40:55.944088 containerd[1535]: time="2024-11-12T17:40:55.944019005Z" level=info msg="Container to stop \"e334d7244ffa80ed3a01d86f1c7fb60a98412276a454a1c8a196dc55e4c9e423\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Nov 12 17:40:55.944088 containerd[1535]: time="2024-11-12T17:40:55.944064325Z" level=info msg="Container to stop \"6d81be8a76c855ab81998eb30db85e4d7fbdd8731ab0c5f394ab9a6a67a5b1e4\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Nov 12 17:40:55.944144 containerd[1535]: time="2024-11-12T17:40:55.944086765Z" level=info msg="Container to stop \"c6f7390e13444e944ac28adced4c0951130850fbc65f0ea87249fd45ace1232a\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Nov 12 17:40:55.944144 containerd[1535]: time="2024-11-12T17:40:55.944098605Z" level=info msg="Container to stop \"67ef917e1f17b5b10cc00d86715520218d7c03df703fcde5b123d0538a5f3593\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Nov 12 17:40:55.969264 containerd[1535]: time="2024-11-12T17:40:55.969145842Z" level=info msg="shim disconnected" id=7d9fa8ecc881617269ccb6e037e863944cc6dd241ac470ee62e3f6acf5bfbc9f namespace=k8s.io Nov 12 17:40:55.969264 containerd[1535]: time="2024-11-12T17:40:55.969198522Z" level=warning msg="cleaning up after shim disconnected" id=7d9fa8ecc881617269ccb6e037e863944cc6dd241ac470ee62e3f6acf5bfbc9f namespace=k8s.io Nov 12 17:40:55.969264 containerd[1535]: time="2024-11-12T17:40:55.969207922Z" 
level=info msg="cleaning up dead shim" namespace=k8s.io Nov 12 17:40:55.977842 containerd[1535]: time="2024-11-12T17:40:55.977785455Z" level=info msg="shim disconnected" id=ad1ea382f75ae9ae33f597493fd57ec2588140654da1354a0f27a8c0c77a90a5 namespace=k8s.io Nov 12 17:40:55.977842 containerd[1535]: time="2024-11-12T17:40:55.977838575Z" level=warning msg="cleaning up after shim disconnected" id=ad1ea382f75ae9ae33f597493fd57ec2588140654da1354a0f27a8c0c77a90a5 namespace=k8s.io Nov 12 17:40:55.977842 containerd[1535]: time="2024-11-12T17:40:55.977846735Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 12 17:40:55.983787 containerd[1535]: time="2024-11-12T17:40:55.982814502Z" level=info msg="TearDown network for sandbox \"7d9fa8ecc881617269ccb6e037e863944cc6dd241ac470ee62e3f6acf5bfbc9f\" successfully" Nov 12 17:40:55.983787 containerd[1535]: time="2024-11-12T17:40:55.982850702Z" level=info msg="StopPodSandbox for \"7d9fa8ecc881617269ccb6e037e863944cc6dd241ac470ee62e3f6acf5bfbc9f\" returns successfully" Nov 12 17:40:55.993249 containerd[1535]: time="2024-11-12T17:40:55.993214398Z" level=info msg="TearDown network for sandbox \"ad1ea382f75ae9ae33f597493fd57ec2588140654da1354a0f27a8c0c77a90a5\" successfully" Nov 12 17:40:55.993249 containerd[1535]: time="2024-11-12T17:40:55.993246838Z" level=info msg="StopPodSandbox for \"ad1ea382f75ae9ae33f597493fd57ec2588140654da1354a0f27a8c0c77a90a5\" returns successfully" Nov 12 17:40:56.085948 kubelet[2708]: I1112 17:40:56.085909 2708 scope.go:117] "RemoveContainer" containerID="67ef917e1f17b5b10cc00d86715520218d7c03df703fcde5b123d0538a5f3593" Nov 12 17:40:56.088124 containerd[1535]: time="2024-11-12T17:40:56.087731654Z" level=info msg="RemoveContainer for \"67ef917e1f17b5b10cc00d86715520218d7c03df703fcde5b123d0538a5f3593\"" Nov 12 17:40:56.092486 containerd[1535]: time="2024-11-12T17:40:56.092450701Z" level=info msg="RemoveContainer for \"67ef917e1f17b5b10cc00d86715520218d7c03df703fcde5b123d0538a5f3593\" returns successfully" Nov 12 17:40:56.093126 kubelet[2708]: I1112 17:40:56.092830 2708 scope.go:117] "RemoveContainer" containerID="c6f7390e13444e944ac28adced4c0951130850fbc65f0ea87249fd45ace1232a" Nov 12 17:40:56.093780 containerd[1535]: time="2024-11-12T17:40:56.093755503Z" level=info msg="RemoveContainer for \"c6f7390e13444e944ac28adced4c0951130850fbc65f0ea87249fd45ace1232a\"" Nov 12 17:40:56.096200 containerd[1535]: time="2024-11-12T17:40:56.096158826Z" level=info msg="RemoveContainer for \"c6f7390e13444e944ac28adced4c0951130850fbc65f0ea87249fd45ace1232a\" returns successfully" Nov 12 17:40:56.096438 kubelet[2708]: I1112 17:40:56.096418 2708 scope.go:117] "RemoveContainer" containerID="6d81be8a76c855ab81998eb30db85e4d7fbdd8731ab0c5f394ab9a6a67a5b1e4" Nov 12 17:40:56.097242 containerd[1535]: time="2024-11-12T17:40:56.097214788Z" level=info msg="RemoveContainer for \"6d81be8a76c855ab81998eb30db85e4d7fbdd8731ab0c5f394ab9a6a67a5b1e4\"" Nov 12 17:40:56.099532 containerd[1535]: time="2024-11-12T17:40:56.099498151Z" level=info msg="RemoveContainer for \"6d81be8a76c855ab81998eb30db85e4d7fbdd8731ab0c5f394ab9a6a67a5b1e4\" returns successfully" Nov 12 17:40:56.099695 kubelet[2708]: I1112 17:40:56.099667 2708 scope.go:117] "RemoveContainer" containerID="e334d7244ffa80ed3a01d86f1c7fb60a98412276a454a1c8a196dc55e4c9e423" Nov 12 17:40:56.100503 containerd[1535]: time="2024-11-12T17:40:56.100482952Z" level=info msg="RemoveContainer for \"e334d7244ffa80ed3a01d86f1c7fb60a98412276a454a1c8a196dc55e4c9e423\"" Nov 12 17:40:56.102972 containerd[1535]: 
time="2024-11-12T17:40:56.102934676Z" level=info msg="RemoveContainer for \"e334d7244ffa80ed3a01d86f1c7fb60a98412276a454a1c8a196dc55e4c9e423\" returns successfully" Nov 12 17:40:56.103161 kubelet[2708]: I1112 17:40:56.103128 2708 scope.go:117] "RemoveContainer" containerID="5a517554a77e4f868726c2c5bdf06adb2906be2c913bde50456f1322013157b9" Nov 12 17:40:56.104039 containerd[1535]: time="2024-11-12T17:40:56.104004117Z" level=info msg="RemoveContainer for \"5a517554a77e4f868726c2c5bdf06adb2906be2c913bde50456f1322013157b9\"" Nov 12 17:40:56.106279 containerd[1535]: time="2024-11-12T17:40:56.106248641Z" level=info msg="RemoveContainer for \"5a517554a77e4f868726c2c5bdf06adb2906be2c913bde50456f1322013157b9\" returns successfully" Nov 12 17:40:56.106431 kubelet[2708]: I1112 17:40:56.106410 2708 scope.go:117] "RemoveContainer" containerID="67ef917e1f17b5b10cc00d86715520218d7c03df703fcde5b123d0538a5f3593" Nov 12 17:40:56.106763 containerd[1535]: time="2024-11-12T17:40:56.106721721Z" level=error msg="ContainerStatus for \"67ef917e1f17b5b10cc00d86715520218d7c03df703fcde5b123d0538a5f3593\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"67ef917e1f17b5b10cc00d86715520218d7c03df703fcde5b123d0538a5f3593\": not found" Nov 12 17:40:56.115697 kubelet[2708]: E1112 17:40:56.115653 2708 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"67ef917e1f17b5b10cc00d86715520218d7c03df703fcde5b123d0538a5f3593\": not found" containerID="67ef917e1f17b5b10cc00d86715520218d7c03df703fcde5b123d0538a5f3593" Nov 12 17:40:56.118813 kubelet[2708]: I1112 17:40:56.118784 2708 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"67ef917e1f17b5b10cc00d86715520218d7c03df703fcde5b123d0538a5f3593"} err="failed to get container status \"67ef917e1f17b5b10cc00d86715520218d7c03df703fcde5b123d0538a5f3593\": rpc error: code = NotFound desc = an error occurred when try to find container \"67ef917e1f17b5b10cc00d86715520218d7c03df703fcde5b123d0538a5f3593\": not found" Nov 12 17:40:56.118880 kubelet[2708]: I1112 17:40:56.118823 2708 scope.go:117] "RemoveContainer" containerID="c6f7390e13444e944ac28adced4c0951130850fbc65f0ea87249fd45ace1232a" Nov 12 17:40:56.119150 containerd[1535]: time="2024-11-12T17:40:56.119066179Z" level=error msg="ContainerStatus for \"c6f7390e13444e944ac28adced4c0951130850fbc65f0ea87249fd45ace1232a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c6f7390e13444e944ac28adced4c0951130850fbc65f0ea87249fd45ace1232a\": not found" Nov 12 17:40:56.119205 kubelet[2708]: E1112 17:40:56.119195 2708 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c6f7390e13444e944ac28adced4c0951130850fbc65f0ea87249fd45ace1232a\": not found" containerID="c6f7390e13444e944ac28adced4c0951130850fbc65f0ea87249fd45ace1232a" Nov 12 17:40:56.119255 kubelet[2708]: I1112 17:40:56.119219 2708 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c6f7390e13444e944ac28adced4c0951130850fbc65f0ea87249fd45ace1232a"} err="failed to get container status \"c6f7390e13444e944ac28adced4c0951130850fbc65f0ea87249fd45ace1232a\": rpc error: code = NotFound desc = an error occurred when try to find container \"c6f7390e13444e944ac28adced4c0951130850fbc65f0ea87249fd45ace1232a\": not 
found" Nov 12 17:40:56.119255 kubelet[2708]: I1112 17:40:56.119228 2708 scope.go:117] "RemoveContainer" containerID="6d81be8a76c855ab81998eb30db85e4d7fbdd8731ab0c5f394ab9a6a67a5b1e4" Nov 12 17:40:56.119411 containerd[1535]: time="2024-11-12T17:40:56.119374380Z" level=error msg="ContainerStatus for \"6d81be8a76c855ab81998eb30db85e4d7fbdd8731ab0c5f394ab9a6a67a5b1e4\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"6d81be8a76c855ab81998eb30db85e4d7fbdd8731ab0c5f394ab9a6a67a5b1e4\": not found" Nov 12 17:40:56.119522 kubelet[2708]: E1112 17:40:56.119505 2708 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"6d81be8a76c855ab81998eb30db85e4d7fbdd8731ab0c5f394ab9a6a67a5b1e4\": not found" containerID="6d81be8a76c855ab81998eb30db85e4d7fbdd8731ab0c5f394ab9a6a67a5b1e4" Nov 12 17:40:56.119559 kubelet[2708]: I1112 17:40:56.119537 2708 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"6d81be8a76c855ab81998eb30db85e4d7fbdd8731ab0c5f394ab9a6a67a5b1e4"} err="failed to get container status \"6d81be8a76c855ab81998eb30db85e4d7fbdd8731ab0c5f394ab9a6a67a5b1e4\": rpc error: code = NotFound desc = an error occurred when try to find container \"6d81be8a76c855ab81998eb30db85e4d7fbdd8731ab0c5f394ab9a6a67a5b1e4\": not found" Nov 12 17:40:56.119559 kubelet[2708]: I1112 17:40:56.119549 2708 scope.go:117] "RemoveContainer" containerID="e334d7244ffa80ed3a01d86f1c7fb60a98412276a454a1c8a196dc55e4c9e423" Nov 12 17:40:56.119764 containerd[1535]: time="2024-11-12T17:40:56.119701660Z" level=error msg="ContainerStatus for \"e334d7244ffa80ed3a01d86f1c7fb60a98412276a454a1c8a196dc55e4c9e423\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e334d7244ffa80ed3a01d86f1c7fb60a98412276a454a1c8a196dc55e4c9e423\": not found" Nov 12 17:40:56.119854 kubelet[2708]: E1112 17:40:56.119840 2708 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e334d7244ffa80ed3a01d86f1c7fb60a98412276a454a1c8a196dc55e4c9e423\": not found" containerID="e334d7244ffa80ed3a01d86f1c7fb60a98412276a454a1c8a196dc55e4c9e423" Nov 12 17:40:56.119892 kubelet[2708]: I1112 17:40:56.119864 2708 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e334d7244ffa80ed3a01d86f1c7fb60a98412276a454a1c8a196dc55e4c9e423"} err="failed to get container status \"e334d7244ffa80ed3a01d86f1c7fb60a98412276a454a1c8a196dc55e4c9e423\": rpc error: code = NotFound desc = an error occurred when try to find container \"e334d7244ffa80ed3a01d86f1c7fb60a98412276a454a1c8a196dc55e4c9e423\": not found" Nov 12 17:40:56.119892 kubelet[2708]: I1112 17:40:56.119874 2708 scope.go:117] "RemoveContainer" containerID="5a517554a77e4f868726c2c5bdf06adb2906be2c913bde50456f1322013157b9" Nov 12 17:40:56.120027 containerd[1535]: time="2024-11-12T17:40:56.120000980Z" level=error msg="ContainerStatus for \"5a517554a77e4f868726c2c5bdf06adb2906be2c913bde50456f1322013157b9\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"5a517554a77e4f868726c2c5bdf06adb2906be2c913bde50456f1322013157b9\": not found" Nov 12 17:40:56.120105 kubelet[2708]: E1112 17:40:56.120090 2708 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error 
occurred when try to find container \"5a517554a77e4f868726c2c5bdf06adb2906be2c913bde50456f1322013157b9\": not found" containerID="5a517554a77e4f868726c2c5bdf06adb2906be2c913bde50456f1322013157b9" Nov 12 17:40:56.120148 kubelet[2708]: I1112 17:40:56.120120 2708 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"5a517554a77e4f868726c2c5bdf06adb2906be2c913bde50456f1322013157b9"} err="failed to get container status \"5a517554a77e4f868726c2c5bdf06adb2906be2c913bde50456f1322013157b9\": rpc error: code = NotFound desc = an error occurred when try to find container \"5a517554a77e4f868726c2c5bdf06adb2906be2c913bde50456f1322013157b9\": not found" Nov 12 17:40:56.120148 kubelet[2708]: I1112 17:40:56.120130 2708 scope.go:117] "RemoveContainer" containerID="1459c6c93d79c90e78be666c0e9426e9272fed5d1a5fbc15a6663c668bdc9902" Nov 12 17:40:56.121096 containerd[1535]: time="2024-11-12T17:40:56.121051542Z" level=info msg="RemoveContainer for \"1459c6c93d79c90e78be666c0e9426e9272fed5d1a5fbc15a6663c668bdc9902\"" Nov 12 17:40:56.129531 containerd[1535]: time="2024-11-12T17:40:56.129491434Z" level=info msg="RemoveContainer for \"1459c6c93d79c90e78be666c0e9426e9272fed5d1a5fbc15a6663c668bdc9902\" returns successfully" Nov 12 17:40:56.129750 kubelet[2708]: I1112 17:40:56.129726 2708 scope.go:117] "RemoveContainer" containerID="1459c6c93d79c90e78be666c0e9426e9272fed5d1a5fbc15a6663c668bdc9902" Nov 12 17:40:56.130063 kubelet[2708]: E1112 17:40:56.130047 2708 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"1459c6c93d79c90e78be666c0e9426e9272fed5d1a5fbc15a6663c668bdc9902\": not found" containerID="1459c6c93d79c90e78be666c0e9426e9272fed5d1a5fbc15a6663c668bdc9902" Nov 12 17:40:56.130104 containerd[1535]: time="2024-11-12T17:40:56.129927955Z" level=error msg="ContainerStatus for \"1459c6c93d79c90e78be666c0e9426e9272fed5d1a5fbc15a6663c668bdc9902\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"1459c6c93d79c90e78be666c0e9426e9272fed5d1a5fbc15a6663c668bdc9902\": not found" Nov 12 17:40:56.130138 kubelet[2708]: I1112 17:40:56.130076 2708 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"1459c6c93d79c90e78be666c0e9426e9272fed5d1a5fbc15a6663c668bdc9902"} err="failed to get container status \"1459c6c93d79c90e78be666c0e9426e9272fed5d1a5fbc15a6663c668bdc9902\": rpc error: code = NotFound desc = an error occurred when try to find container \"1459c6c93d79c90e78be666c0e9426e9272fed5d1a5fbc15a6663c668bdc9902\": not found" Nov 12 17:40:56.144314 kubelet[2708]: I1112 17:40:56.144288 2708 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2fa23bb9-c87a-4caf-83d4-5b77757f356e-xtables-lock\") pod \"2fa23bb9-c87a-4caf-83d4-5b77757f356e\" (UID: \"2fa23bb9-c87a-4caf-83d4-5b77757f356e\") " Nov 12 17:40:56.144440 kubelet[2708]: I1112 17:40:56.144424 2708 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/2fa23bb9-c87a-4caf-83d4-5b77757f356e-bpf-maps\") pod \"2fa23bb9-c87a-4caf-83d4-5b77757f356e\" (UID: \"2fa23bb9-c87a-4caf-83d4-5b77757f356e\") " Nov 12 17:40:56.144497 kubelet[2708]: I1112 17:40:56.144452 2708 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: 
\"kubernetes.io/host-path/2fa23bb9-c87a-4caf-83d4-5b77757f356e-hostproc\") pod \"2fa23bb9-c87a-4caf-83d4-5b77757f356e\" (UID: \"2fa23bb9-c87a-4caf-83d4-5b77757f356e\") " Nov 12 17:40:56.150505 kubelet[2708]: I1112 17:40:56.150463 2708 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2fa23bb9-c87a-4caf-83d4-5b77757f356e-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "2fa23bb9-c87a-4caf-83d4-5b77757f356e" (UID: "2fa23bb9-c87a-4caf-83d4-5b77757f356e"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 12 17:40:56.150563 kubelet[2708]: I1112 17:40:56.150460 2708 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2fa23bb9-c87a-4caf-83d4-5b77757f356e-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "2fa23bb9-c87a-4caf-83d4-5b77757f356e" (UID: "2fa23bb9-c87a-4caf-83d4-5b77757f356e"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 12 17:40:56.151751 kubelet[2708]: I1112 17:40:56.151697 2708 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2fa23bb9-c87a-4caf-83d4-5b77757f356e-hostproc" (OuterVolumeSpecName: "hostproc") pod "2fa23bb9-c87a-4caf-83d4-5b77757f356e" (UID: "2fa23bb9-c87a-4caf-83d4-5b77757f356e"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 12 17:40:56.153685 kubelet[2708]: I1112 17:40:56.153654 2708 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8v6xs\" (UniqueName: \"kubernetes.io/projected/2fa23bb9-c87a-4caf-83d4-5b77757f356e-kube-api-access-8v6xs\") pod \"2fa23bb9-c87a-4caf-83d4-5b77757f356e\" (UID: \"2fa23bb9-c87a-4caf-83d4-5b77757f356e\") " Nov 12 17:40:56.153731 kubelet[2708]: I1112 17:40:56.153688 2708 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/2fa23bb9-c87a-4caf-83d4-5b77757f356e-cilium-run\") pod \"2fa23bb9-c87a-4caf-83d4-5b77757f356e\" (UID: \"2fa23bb9-c87a-4caf-83d4-5b77757f356e\") " Nov 12 17:40:56.153731 kubelet[2708]: I1112 17:40:56.153716 2708 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2fa23bb9-c87a-4caf-83d4-5b77757f356e-lib-modules\") pod \"2fa23bb9-c87a-4caf-83d4-5b77757f356e\" (UID: \"2fa23bb9-c87a-4caf-83d4-5b77757f356e\") " Nov 12 17:40:56.153782 kubelet[2708]: I1112 17:40:56.153739 2708 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/2fa23bb9-c87a-4caf-83d4-5b77757f356e-hubble-tls\") pod \"2fa23bb9-c87a-4caf-83d4-5b77757f356e\" (UID: \"2fa23bb9-c87a-4caf-83d4-5b77757f356e\") " Nov 12 17:40:56.153782 kubelet[2708]: I1112 17:40:56.153762 2708 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2fa23bb9-c87a-4caf-83d4-5b77757f356e-etc-cni-netd\") pod \"2fa23bb9-c87a-4caf-83d4-5b77757f356e\" (UID: \"2fa23bb9-c87a-4caf-83d4-5b77757f356e\") " Nov 12 17:40:56.153782 kubelet[2708]: I1112 17:40:56.153780 2708 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/2fa23bb9-c87a-4caf-83d4-5b77757f356e-host-proc-sys-kernel\") pod \"2fa23bb9-c87a-4caf-83d4-5b77757f356e\" (UID: \"2fa23bb9-c87a-4caf-83d4-5b77757f356e\") " Nov 
12 17:40:56.153851 kubelet[2708]: I1112 17:40:56.153799 2708 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbgl7\" (UniqueName: \"kubernetes.io/projected/c0ba9503-0caa-4204-ba49-df00b3a0d32f-kube-api-access-dbgl7\") pod \"c0ba9503-0caa-4204-ba49-df00b3a0d32f\" (UID: \"c0ba9503-0caa-4204-ba49-df00b3a0d32f\") " Nov 12 17:40:56.153851 kubelet[2708]: I1112 17:40:56.153821 2708 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/2fa23bb9-c87a-4caf-83d4-5b77757f356e-host-proc-sys-net\") pod \"2fa23bb9-c87a-4caf-83d4-5b77757f356e\" (UID: \"2fa23bb9-c87a-4caf-83d4-5b77757f356e\") " Nov 12 17:40:56.153851 kubelet[2708]: I1112 17:40:56.153839 2708 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/2fa23bb9-c87a-4caf-83d4-5b77757f356e-cilium-cgroup\") pod \"2fa23bb9-c87a-4caf-83d4-5b77757f356e\" (UID: \"2fa23bb9-c87a-4caf-83d4-5b77757f356e\") " Nov 12 17:40:56.153913 kubelet[2708]: I1112 17:40:56.153860 2708 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/2fa23bb9-c87a-4caf-83d4-5b77757f356e-clustermesh-secrets\") pod \"2fa23bb9-c87a-4caf-83d4-5b77757f356e\" (UID: \"2fa23bb9-c87a-4caf-83d4-5b77757f356e\") " Nov 12 17:40:56.153913 kubelet[2708]: I1112 17:40:56.153882 2708 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2fa23bb9-c87a-4caf-83d4-5b77757f356e-cilium-config-path\") pod \"2fa23bb9-c87a-4caf-83d4-5b77757f356e\" (UID: \"2fa23bb9-c87a-4caf-83d4-5b77757f356e\") " Nov 12 17:40:56.153913 kubelet[2708]: I1112 17:40:56.153900 2708 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/2fa23bb9-c87a-4caf-83d4-5b77757f356e-cni-path\") pod \"2fa23bb9-c87a-4caf-83d4-5b77757f356e\" (UID: \"2fa23bb9-c87a-4caf-83d4-5b77757f356e\") " Nov 12 17:40:56.153978 kubelet[2708]: I1112 17:40:56.153919 2708 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c0ba9503-0caa-4204-ba49-df00b3a0d32f-cilium-config-path\") pod \"c0ba9503-0caa-4204-ba49-df00b3a0d32f\" (UID: \"c0ba9503-0caa-4204-ba49-df00b3a0d32f\") " Nov 12 17:40:56.153978 kubelet[2708]: I1112 17:40:56.153958 2708 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2fa23bb9-c87a-4caf-83d4-5b77757f356e-xtables-lock\") on node \"localhost\" DevicePath \"\"" Nov 12 17:40:56.153978 kubelet[2708]: I1112 17:40:56.153968 2708 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/2fa23bb9-c87a-4caf-83d4-5b77757f356e-hostproc\") on node \"localhost\" DevicePath \"\"" Nov 12 17:40:56.153978 kubelet[2708]: I1112 17:40:56.153977 2708 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/2fa23bb9-c87a-4caf-83d4-5b77757f356e-bpf-maps\") on node \"localhost\" DevicePath \"\"" Nov 12 17:40:56.157899 kubelet[2708]: I1112 17:40:56.157581 2708 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c0ba9503-0caa-4204-ba49-df00b3a0d32f-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod 
"c0ba9503-0caa-4204-ba49-df00b3a0d32f" (UID: "c0ba9503-0caa-4204-ba49-df00b3a0d32f"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 12 17:40:56.158388 kubelet[2708]: I1112 17:40:56.158353 2708 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2fa23bb9-c87a-4caf-83d4-5b77757f356e-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "2fa23bb9-c87a-4caf-83d4-5b77757f356e" (UID: "2fa23bb9-c87a-4caf-83d4-5b77757f356e"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 12 17:40:56.158434 kubelet[2708]: I1112 17:40:56.158413 2708 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2fa23bb9-c87a-4caf-83d4-5b77757f356e-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "2fa23bb9-c87a-4caf-83d4-5b77757f356e" (UID: "2fa23bb9-c87a-4caf-83d4-5b77757f356e"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 12 17:40:56.158457 kubelet[2708]: I1112 17:40:56.158436 2708 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2fa23bb9-c87a-4caf-83d4-5b77757f356e-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "2fa23bb9-c87a-4caf-83d4-5b77757f356e" (UID: "2fa23bb9-c87a-4caf-83d4-5b77757f356e"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 12 17:40:56.159497 kubelet[2708]: I1112 17:40:56.159463 2708 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c0ba9503-0caa-4204-ba49-df00b3a0d32f-kube-api-access-dbgl7" (OuterVolumeSpecName: "kube-api-access-dbgl7") pod "c0ba9503-0caa-4204-ba49-df00b3a0d32f" (UID: "c0ba9503-0caa-4204-ba49-df00b3a0d32f"). InnerVolumeSpecName "kube-api-access-dbgl7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 12 17:40:56.159543 kubelet[2708]: I1112 17:40:56.159522 2708 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2fa23bb9-c87a-4caf-83d4-5b77757f356e-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "2fa23bb9-c87a-4caf-83d4-5b77757f356e" (UID: "2fa23bb9-c87a-4caf-83d4-5b77757f356e"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 12 17:40:56.159579 kubelet[2708]: I1112 17:40:56.159543 2708 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2fa23bb9-c87a-4caf-83d4-5b77757f356e-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "2fa23bb9-c87a-4caf-83d4-5b77757f356e" (UID: "2fa23bb9-c87a-4caf-83d4-5b77757f356e"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 12 17:40:56.159579 kubelet[2708]: I1112 17:40:56.159564 2708 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2fa23bb9-c87a-4caf-83d4-5b77757f356e-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "2fa23bb9-c87a-4caf-83d4-5b77757f356e" (UID: "2fa23bb9-c87a-4caf-83d4-5b77757f356e"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 12 17:40:56.159623 kubelet[2708]: I1112 17:40:56.159581 2708 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2fa23bb9-c87a-4caf-83d4-5b77757f356e-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "2fa23bb9-c87a-4caf-83d4-5b77757f356e" (UID: "2fa23bb9-c87a-4caf-83d4-5b77757f356e"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 12 17:40:56.159623 kubelet[2708]: I1112 17:40:56.159599 2708 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2fa23bb9-c87a-4caf-83d4-5b77757f356e-cni-path" (OuterVolumeSpecName: "cni-path") pod "2fa23bb9-c87a-4caf-83d4-5b77757f356e" (UID: "2fa23bb9-c87a-4caf-83d4-5b77757f356e"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 12 17:40:56.159823 kubelet[2708]: I1112 17:40:56.159796 2708 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2fa23bb9-c87a-4caf-83d4-5b77757f356e-kube-api-access-8v6xs" (OuterVolumeSpecName: "kube-api-access-8v6xs") pod "2fa23bb9-c87a-4caf-83d4-5b77757f356e" (UID: "2fa23bb9-c87a-4caf-83d4-5b77757f356e"). InnerVolumeSpecName "kube-api-access-8v6xs". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 12 17:40:56.160014 kubelet[2708]: I1112 17:40:56.159991 2708 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2fa23bb9-c87a-4caf-83d4-5b77757f356e-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "2fa23bb9-c87a-4caf-83d4-5b77757f356e" (UID: "2fa23bb9-c87a-4caf-83d4-5b77757f356e"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 12 17:40:56.160532 kubelet[2708]: I1112 17:40:56.160486 2708 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2fa23bb9-c87a-4caf-83d4-5b77757f356e-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "2fa23bb9-c87a-4caf-83d4-5b77757f356e" (UID: "2fa23bb9-c87a-4caf-83d4-5b77757f356e"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 12 17:40:56.254937 kubelet[2708]: I1112 17:40:56.254886 2708 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-8v6xs\" (UniqueName: \"kubernetes.io/projected/2fa23bb9-c87a-4caf-83d4-5b77757f356e-kube-api-access-8v6xs\") on node \"localhost\" DevicePath \"\"" Nov 12 17:40:56.254937 kubelet[2708]: I1112 17:40:56.254921 2708 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/2fa23bb9-c87a-4caf-83d4-5b77757f356e-cilium-run\") on node \"localhost\" DevicePath \"\"" Nov 12 17:40:56.254937 kubelet[2708]: I1112 17:40:56.254938 2708 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/2fa23bb9-c87a-4caf-83d4-5b77757f356e-hubble-tls\") on node \"localhost\" DevicePath \"\"" Nov 12 17:40:56.254937 kubelet[2708]: I1112 17:40:56.254949 2708 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2fa23bb9-c87a-4caf-83d4-5b77757f356e-lib-modules\") on node \"localhost\" DevicePath \"\"" Nov 12 17:40:56.255140 kubelet[2708]: I1112 17:40:56.254961 2708 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/2fa23bb9-c87a-4caf-83d4-5b77757f356e-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Nov 12 17:40:56.255140 kubelet[2708]: I1112 17:40:56.254971 2708 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-dbgl7\" (UniqueName: \"kubernetes.io/projected/c0ba9503-0caa-4204-ba49-df00b3a0d32f-kube-api-access-dbgl7\") on node \"localhost\" DevicePath \"\"" Nov 12 17:40:56.255140 kubelet[2708]: I1112 17:40:56.254980 2708 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/2fa23bb9-c87a-4caf-83d4-5b77757f356e-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Nov 12 17:40:56.255140 kubelet[2708]: I1112 17:40:56.254989 2708 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2fa23bb9-c87a-4caf-83d4-5b77757f356e-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Nov 12 17:40:56.255140 kubelet[2708]: I1112 17:40:56.254998 2708 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/2fa23bb9-c87a-4caf-83d4-5b77757f356e-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Nov 12 17:40:56.255140 kubelet[2708]: I1112 17:40:56.255008 2708 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/2fa23bb9-c87a-4caf-83d4-5b77757f356e-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Nov 12 17:40:56.255140 kubelet[2708]: I1112 17:40:56.255017 2708 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2fa23bb9-c87a-4caf-83d4-5b77757f356e-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Nov 12 17:40:56.255140 kubelet[2708]: I1112 17:40:56.255026 2708 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/2fa23bb9-c87a-4caf-83d4-5b77757f356e-cni-path\") on node \"localhost\" DevicePath \"\"" Nov 12 17:40:56.255299 kubelet[2708]: I1112 17:40:56.255035 2708 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: 
\"kubernetes.io/configmap/c0ba9503-0caa-4204-ba49-df00b3a0d32f-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Nov 12 17:40:56.840611 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7d9fa8ecc881617269ccb6e037e863944cc6dd241ac470ee62e3f6acf5bfbc9f-rootfs.mount: Deactivated successfully. Nov 12 17:40:56.840785 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-7d9fa8ecc881617269ccb6e037e863944cc6dd241ac470ee62e3f6acf5bfbc9f-shm.mount: Deactivated successfully. Nov 12 17:40:56.840886 systemd[1]: var-lib-kubelet-pods-2fa23bb9\x2dc87a\x2d4caf\x2d83d4\x2d5b77757f356e-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d8v6xs.mount: Deactivated successfully. Nov 12 17:40:56.840964 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ad1ea382f75ae9ae33f597493fd57ec2588140654da1354a0f27a8c0c77a90a5-rootfs.mount: Deactivated successfully. Nov 12 17:40:56.841036 systemd[1]: var-lib-kubelet-pods-c0ba9503\x2d0caa\x2d4204\x2dba49\x2ddf00b3a0d32f-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2ddbgl7.mount: Deactivated successfully. Nov 12 17:40:56.841116 systemd[1]: var-lib-kubelet-pods-2fa23bb9\x2dc87a\x2d4caf\x2d83d4\x2d5b77757f356e-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Nov 12 17:40:56.841195 systemd[1]: var-lib-kubelet-pods-2fa23bb9\x2dc87a\x2d4caf\x2d83d4\x2d5b77757f356e-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Nov 12 17:40:57.681264 sshd[4347]: pam_unix(sshd:session): session closed for user core Nov 12 17:40:57.687940 systemd[1]: Started sshd@23-10.0.0.22:22-10.0.0.1:41744.service - OpenSSH per-connection server daemon (10.0.0.1:41744). Nov 12 17:40:57.688340 systemd[1]: sshd@22-10.0.0.22:22-10.0.0.1:41734.service: Deactivated successfully. Nov 12 17:40:57.691193 systemd[1]: session-23.scope: Deactivated successfully. Nov 12 17:40:57.691777 systemd-logind[1515]: Session 23 logged out. Waiting for processes to exit. Nov 12 17:40:57.693235 systemd-logind[1515]: Removed session 23. Nov 12 17:40:57.725539 sshd[4514]: Accepted publickey for core from 10.0.0.1 port 41744 ssh2: RSA SHA256:0/Njp3Vk1MHv0WcCO9/UA+beq4MlL3BRl9mBP4xwGAg Nov 12 17:40:57.726868 sshd[4514]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 17:40:57.730701 systemd-logind[1515]: New session 24 of user core. Nov 12 17:40:57.736946 systemd[1]: Started session-24.scope - Session 24 of User core. Nov 12 17:40:57.886036 kubelet[2708]: I1112 17:40:57.886004 2708 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="2fa23bb9-c87a-4caf-83d4-5b77757f356e" path="/var/lib/kubelet/pods/2fa23bb9-c87a-4caf-83d4-5b77757f356e/volumes" Nov 12 17:40:57.887710 kubelet[2708]: I1112 17:40:57.886555 2708 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="c0ba9503-0caa-4204-ba49-df00b3a0d32f" path="/var/lib/kubelet/pods/c0ba9503-0caa-4204-ba49-df00b3a0d32f/volumes" Nov 12 17:40:57.947318 kubelet[2708]: E1112 17:40:57.947293 2708 kubelet.go:2892] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Nov 12 17:40:58.885796 sshd[4514]: pam_unix(sshd:session): session closed for user core Nov 12 17:40:58.899122 systemd[1]: Started sshd@24-10.0.0.22:22-10.0.0.1:41760.service - OpenSSH per-connection server daemon (10.0.0.1:41760). 
Nov 12 17:40:58.900427 systemd[1]: sshd@23-10.0.0.22:22-10.0.0.1:41744.service: Deactivated successfully. Nov 12 17:40:58.908103 systemd[1]: session-24.scope: Deactivated successfully. Nov 12 17:40:58.914810 systemd-logind[1515]: Session 24 logged out. Waiting for processes to exit. Nov 12 17:40:58.919833 systemd-logind[1515]: Removed session 24. Nov 12 17:40:58.926939 kubelet[2708]: I1112 17:40:58.926657 2708 topology_manager.go:215] "Topology Admit Handler" podUID="0807b290-1c26-4431-b608-2ec070240c03" podNamespace="kube-system" podName="cilium-q8sr9" Nov 12 17:40:58.926939 kubelet[2708]: E1112 17:40:58.926730 2708 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="2fa23bb9-c87a-4caf-83d4-5b77757f356e" containerName="mount-cgroup" Nov 12 17:40:58.926939 kubelet[2708]: E1112 17:40:58.926742 2708 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="2fa23bb9-c87a-4caf-83d4-5b77757f356e" containerName="apply-sysctl-overwrites" Nov 12 17:40:58.926939 kubelet[2708]: E1112 17:40:58.926749 2708 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="2fa23bb9-c87a-4caf-83d4-5b77757f356e" containerName="mount-bpf-fs" Nov 12 17:40:58.926939 kubelet[2708]: E1112 17:40:58.926755 2708 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="2fa23bb9-c87a-4caf-83d4-5b77757f356e" containerName="clean-cilium-state" Nov 12 17:40:58.926939 kubelet[2708]: E1112 17:40:58.926771 2708 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="c0ba9503-0caa-4204-ba49-df00b3a0d32f" containerName="cilium-operator" Nov 12 17:40:58.926939 kubelet[2708]: E1112 17:40:58.926778 2708 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="2fa23bb9-c87a-4caf-83d4-5b77757f356e" containerName="cilium-agent" Nov 12 17:40:58.926939 kubelet[2708]: I1112 17:40:58.926804 2708 memory_manager.go:354] "RemoveStaleState removing state" podUID="c0ba9503-0caa-4204-ba49-df00b3a0d32f" containerName="cilium-operator" Nov 12 17:40:58.926939 kubelet[2708]: I1112 17:40:58.926811 2708 memory_manager.go:354] "RemoveStaleState removing state" podUID="2fa23bb9-c87a-4caf-83d4-5b77757f356e" containerName="cilium-agent" Nov 12 17:40:58.949754 sshd[4528]: Accepted publickey for core from 10.0.0.1 port 41760 ssh2: RSA SHA256:0/Njp3Vk1MHv0WcCO9/UA+beq4MlL3BRl9mBP4xwGAg Nov 12 17:40:58.950609 sshd[4528]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 17:40:58.954405 systemd-logind[1515]: New session 25 of user core. Nov 12 17:40:58.966961 systemd[1]: Started session-25.scope - Session 25 of User core. 
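The var-lib-kubelet-pods-*.mount units deactivated at 17:40:56.84 earlier in the log are systemd's path-escaped form of the kubelet volume directories: slashes become dashes and other unsafe characters become \xNN escapes. A minimal sketch of that escaping, assuming the basic rules only (leading-dot and empty-path edge cases ignored; this is not a call into systemd), reproducing one of the unit names from the log:

```python
# Minimal sketch of systemd-style path escaping ('/' -> '-', bytes outside [A-Za-z0-9_.]
# -> \xNN), enough to reproduce the kube-api-access mount-unit name seen in the log above.
# Edge cases (empty path, leading '.') are deliberately ignored in this sketch.
def systemd_escape_path(path: str) -> str:
    out = []
    for ch in path.strip("/"):
        if ch == "/":
            out.append("-")
        elif ch.isalnum() or ch in "_.":
            out.append(ch)
        else:
            out.append("\\x%02x" % ord(ch))
    return "".join(out)

pod_uid = "2fa23bb9-c87a-4caf-83d4-5b77757f356e"
path = f"/var/lib/kubelet/pods/{pod_uid}/volumes/kubernetes.io~projected/kube-api-access-8v6xs"
print(systemd_escape_path(path) + ".mount")
# -> var-lib-kubelet-pods-2fa23bb9\x2dc87a\x2d4caf\x2d83d4\x2d5b77757f356e-volumes-
#    kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d8v6xs.mount
```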
Nov 12 17:40:58.971666 kubelet[2708]: I1112 17:40:58.971614 2708 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/0807b290-1c26-4431-b608-2ec070240c03-hostproc\") pod \"cilium-q8sr9\" (UID: \"0807b290-1c26-4431-b608-2ec070240c03\") " pod="kube-system/cilium-q8sr9" Nov 12 17:40:58.971666 kubelet[2708]: I1112 17:40:58.971658 2708 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/0807b290-1c26-4431-b608-2ec070240c03-cni-path\") pod \"cilium-q8sr9\" (UID: \"0807b290-1c26-4431-b608-2ec070240c03\") " pod="kube-system/cilium-q8sr9" Nov 12 17:40:58.971858 kubelet[2708]: I1112 17:40:58.971721 2708 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0807b290-1c26-4431-b608-2ec070240c03-cilium-config-path\") pod \"cilium-q8sr9\" (UID: \"0807b290-1c26-4431-b608-2ec070240c03\") " pod="kube-system/cilium-q8sr9" Nov 12 17:40:58.971858 kubelet[2708]: I1112 17:40:58.971779 2708 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/0807b290-1c26-4431-b608-2ec070240c03-bpf-maps\") pod \"cilium-q8sr9\" (UID: \"0807b290-1c26-4431-b608-2ec070240c03\") " pod="kube-system/cilium-q8sr9" Nov 12 17:40:58.971858 kubelet[2708]: I1112 17:40:58.971808 2708 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/0807b290-1c26-4431-b608-2ec070240c03-cilium-cgroup\") pod \"cilium-q8sr9\" (UID: \"0807b290-1c26-4431-b608-2ec070240c03\") " pod="kube-system/cilium-q8sr9" Nov 12 17:40:58.971858 kubelet[2708]: I1112 17:40:58.971839 2708 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4hldn\" (UniqueName: \"kubernetes.io/projected/0807b290-1c26-4431-b608-2ec070240c03-kube-api-access-4hldn\") pod \"cilium-q8sr9\" (UID: \"0807b290-1c26-4431-b608-2ec070240c03\") " pod="kube-system/cilium-q8sr9" Nov 12 17:40:58.972022 kubelet[2708]: I1112 17:40:58.971873 2708 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0807b290-1c26-4431-b608-2ec070240c03-xtables-lock\") pod \"cilium-q8sr9\" (UID: \"0807b290-1c26-4431-b608-2ec070240c03\") " pod="kube-system/cilium-q8sr9" Nov 12 17:40:58.972022 kubelet[2708]: I1112 17:40:58.971924 2708 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0807b290-1c26-4431-b608-2ec070240c03-etc-cni-netd\") pod \"cilium-q8sr9\" (UID: \"0807b290-1c26-4431-b608-2ec070240c03\") " pod="kube-system/cilium-q8sr9" Nov 12 17:40:58.972022 kubelet[2708]: I1112 17:40:58.971953 2708 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/0807b290-1c26-4431-b608-2ec070240c03-host-proc-sys-kernel\") pod \"cilium-q8sr9\" (UID: \"0807b290-1c26-4431-b608-2ec070240c03\") " pod="kube-system/cilium-q8sr9" Nov 12 17:40:58.972022 kubelet[2708]: I1112 17:40:58.971984 2708 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/0807b290-1c26-4431-b608-2ec070240c03-lib-modules\") pod \"cilium-q8sr9\" (UID: \"0807b290-1c26-4431-b608-2ec070240c03\") " pod="kube-system/cilium-q8sr9" Nov 12 17:40:58.972022 kubelet[2708]: I1112 17:40:58.972002 2708 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/0807b290-1c26-4431-b608-2ec070240c03-hubble-tls\") pod \"cilium-q8sr9\" (UID: \"0807b290-1c26-4431-b608-2ec070240c03\") " pod="kube-system/cilium-q8sr9" Nov 12 17:40:58.972022 kubelet[2708]: I1112 17:40:58.972022 2708 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/0807b290-1c26-4431-b608-2ec070240c03-host-proc-sys-net\") pod \"cilium-q8sr9\" (UID: \"0807b290-1c26-4431-b608-2ec070240c03\") " pod="kube-system/cilium-q8sr9" Nov 12 17:40:58.972142 kubelet[2708]: I1112 17:40:58.972041 2708 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/0807b290-1c26-4431-b608-2ec070240c03-cilium-run\") pod \"cilium-q8sr9\" (UID: \"0807b290-1c26-4431-b608-2ec070240c03\") " pod="kube-system/cilium-q8sr9" Nov 12 17:40:58.972142 kubelet[2708]: I1112 17:40:58.972060 2708 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/0807b290-1c26-4431-b608-2ec070240c03-clustermesh-secrets\") pod \"cilium-q8sr9\" (UID: \"0807b290-1c26-4431-b608-2ec070240c03\") " pod="kube-system/cilium-q8sr9" Nov 12 17:40:58.972142 kubelet[2708]: I1112 17:40:58.972080 2708 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/0807b290-1c26-4431-b608-2ec070240c03-cilium-ipsec-secrets\") pod \"cilium-q8sr9\" (UID: \"0807b290-1c26-4431-b608-2ec070240c03\") " pod="kube-system/cilium-q8sr9" Nov 12 17:40:59.019029 sshd[4528]: pam_unix(sshd:session): session closed for user core Nov 12 17:40:59.030950 systemd[1]: Started sshd@25-10.0.0.22:22-10.0.0.1:41770.service - OpenSSH per-connection server daemon (10.0.0.1:41770). Nov 12 17:40:59.031309 systemd[1]: sshd@24-10.0.0.22:22-10.0.0.1:41760.service: Deactivated successfully. Nov 12 17:40:59.033658 systemd-logind[1515]: Session 25 logged out. Waiting for processes to exit. Nov 12 17:40:59.033765 systemd[1]: session-25.scope: Deactivated successfully. Nov 12 17:40:59.035345 systemd-logind[1515]: Removed session 25. Nov 12 17:40:59.066928 sshd[4537]: Accepted publickey for core from 10.0.0.1 port 41770 ssh2: RSA SHA256:0/Njp3Vk1MHv0WcCO9/UA+beq4MlL3BRl9mBP4xwGAg Nov 12 17:40:59.068156 sshd[4537]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 17:40:59.072212 systemd-logind[1515]: New session 26 of user core. Nov 12 17:40:59.078988 systemd[1]: Started session-26.scope - Session 26 of User core. 
Nov 12 17:40:59.230274 kubelet[2708]: E1112 17:40:59.230206 2708 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 17:40:59.240248 containerd[1535]: time="2024-11-12T17:40:59.240182125Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-q8sr9,Uid:0807b290-1c26-4431-b608-2ec070240c03,Namespace:kube-system,Attempt:0,}" Nov 12 17:40:59.259619 containerd[1535]: time="2024-11-12T17:40:59.259516511Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 17:40:59.259619 containerd[1535]: time="2024-11-12T17:40:59.259575471Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 17:40:59.259619 containerd[1535]: time="2024-11-12T17:40:59.259591911Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 17:40:59.260217 containerd[1535]: time="2024-11-12T17:40:59.260160832Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 17:40:59.291760 containerd[1535]: time="2024-11-12T17:40:59.291721794Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-q8sr9,Uid:0807b290-1c26-4431-b608-2ec070240c03,Namespace:kube-system,Attempt:0,} returns sandbox id \"dc959da09be6efc02d5d3dc6319ba43c27a6e3af511ee2cee1533fa3f17b6f3b\"" Nov 12 17:40:59.293017 kubelet[2708]: E1112 17:40:59.292994 2708 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 17:40:59.303819 containerd[1535]: time="2024-11-12T17:40:59.303771411Z" level=info msg="CreateContainer within sandbox \"dc959da09be6efc02d5d3dc6319ba43c27a6e3af511ee2cee1533fa3f17b6f3b\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Nov 12 17:40:59.318522 containerd[1535]: time="2024-11-12T17:40:59.318466591Z" level=info msg="CreateContainer within sandbox \"dc959da09be6efc02d5d3dc6319ba43c27a6e3af511ee2cee1533fa3f17b6f3b\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"3e1133d8a585867309d5747ccda8cc5a312a1d5c5e6a7d5b88c38885cdb2b06f\"" Nov 12 17:40:59.319087 containerd[1535]: time="2024-11-12T17:40:59.319025351Z" level=info msg="StartContainer for \"3e1133d8a585867309d5747ccda8cc5a312a1d5c5e6a7d5b88c38885cdb2b06f\"" Nov 12 17:40:59.368593 containerd[1535]: time="2024-11-12T17:40:59.368554698Z" level=info msg="StartContainer for \"3e1133d8a585867309d5747ccda8cc5a312a1d5c5e6a7d5b88c38885cdb2b06f\" returns successfully" Nov 12 17:40:59.401625 containerd[1535]: time="2024-11-12T17:40:59.401414343Z" level=info msg="shim disconnected" id=3e1133d8a585867309d5747ccda8cc5a312a1d5c5e6a7d5b88c38885cdb2b06f namespace=k8s.io Nov 12 17:40:59.401625 containerd[1535]: time="2024-11-12T17:40:59.401468583Z" level=warning msg="cleaning up after shim disconnected" id=3e1133d8a585867309d5747ccda8cc5a312a1d5c5e6a7d5b88c38885cdb2b06f namespace=k8s.io Nov 12 17:40:59.401625 containerd[1535]: time="2024-11-12T17:40:59.401479143Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 12 17:41:00.092386 kubelet[2708]: E1112 17:41:00.092342 2708 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have 
been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 17:41:00.098743 containerd[1535]: time="2024-11-12T17:41:00.098169403Z" level=info msg="CreateContainer within sandbox \"dc959da09be6efc02d5d3dc6319ba43c27a6e3af511ee2cee1533fa3f17b6f3b\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Nov 12 17:41:00.107012 containerd[1535]: time="2024-11-12T17:41:00.106962975Z" level=info msg="CreateContainer within sandbox \"dc959da09be6efc02d5d3dc6319ba43c27a6e3af511ee2cee1533fa3f17b6f3b\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"f5d891dda956acebcd30458bda716bffabd390d10d7404163edac2d76d08a7c1\"" Nov 12 17:41:00.108739 containerd[1535]: time="2024-11-12T17:41:00.108592097Z" level=info msg="StartContainer for \"f5d891dda956acebcd30458bda716bffabd390d10d7404163edac2d76d08a7c1\"" Nov 12 17:41:00.148821 containerd[1535]: time="2024-11-12T17:41:00.148778431Z" level=info msg="StartContainer for \"f5d891dda956acebcd30458bda716bffabd390d10d7404163edac2d76d08a7c1\" returns successfully" Nov 12 17:41:00.166674 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f5d891dda956acebcd30458bda716bffabd390d10d7404163edac2d76d08a7c1-rootfs.mount: Deactivated successfully. Nov 12 17:41:00.169798 containerd[1535]: time="2024-11-12T17:41:00.169745418Z" level=info msg="shim disconnected" id=f5d891dda956acebcd30458bda716bffabd390d10d7404163edac2d76d08a7c1 namespace=k8s.io Nov 12 17:41:00.169798 containerd[1535]: time="2024-11-12T17:41:00.169798099Z" level=warning msg="cleaning up after shim disconnected" id=f5d891dda956acebcd30458bda716bffabd390d10d7404163edac2d76d08a7c1 namespace=k8s.io Nov 12 17:41:00.169914 containerd[1535]: time="2024-11-12T17:41:00.169819739Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 12 17:41:00.249067 kubelet[2708]: I1112 17:41:00.249040 2708 setters.go:568] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-11-12T17:41:00Z","lastTransitionTime":"2024-11-12T17:41:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Nov 12 17:41:01.094977 kubelet[2708]: E1112 17:41:01.094801 2708 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 17:41:01.098371 containerd[1535]: time="2024-11-12T17:41:01.098208728Z" level=info msg="CreateContainer within sandbox \"dc959da09be6efc02d5d3dc6319ba43c27a6e3af511ee2cee1533fa3f17b6f3b\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Nov 12 17:41:01.110141 containerd[1535]: time="2024-11-12T17:41:01.110094343Z" level=info msg="CreateContainer within sandbox \"dc959da09be6efc02d5d3dc6319ba43c27a6e3af511ee2cee1533fa3f17b6f3b\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"24f263fca698a27beed07819b602f8968e17b4eff05ee18819a5809a2d370f0a\"" Nov 12 17:41:01.111195 containerd[1535]: time="2024-11-12T17:41:01.110629344Z" level=info msg="StartContainer for \"24f263fca698a27beed07819b602f8968e17b4eff05ee18819a5809a2d370f0a\"" Nov 12 17:41:01.164959 containerd[1535]: time="2024-11-12T17:41:01.164913175Z" level=info msg="StartContainer for \"24f263fca698a27beed07819b602f8968e17b4eff05ee18819a5809a2d370f0a\" returns successfully" Nov 12 17:41:01.180936 systemd[1]: 
run-containerd-io.containerd.runtime.v2.task-k8s.io-24f263fca698a27beed07819b602f8968e17b4eff05ee18819a5809a2d370f0a-rootfs.mount: Deactivated successfully. Nov 12 17:41:01.184891 containerd[1535]: time="2024-11-12T17:41:01.184842561Z" level=info msg="shim disconnected" id=24f263fca698a27beed07819b602f8968e17b4eff05ee18819a5809a2d370f0a namespace=k8s.io Nov 12 17:41:01.185108 containerd[1535]: time="2024-11-12T17:41:01.185033681Z" level=warning msg="cleaning up after shim disconnected" id=24f263fca698a27beed07819b602f8968e17b4eff05ee18819a5809a2d370f0a namespace=k8s.io Nov 12 17:41:01.185108 containerd[1535]: time="2024-11-12T17:41:01.185049761Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 12 17:41:02.098488 kubelet[2708]: E1112 17:41:02.098273 2708 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 17:41:02.100680 containerd[1535]: time="2024-11-12T17:41:02.100643390Z" level=info msg="CreateContainer within sandbox \"dc959da09be6efc02d5d3dc6319ba43c27a6e3af511ee2cee1533fa3f17b6f3b\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Nov 12 17:41:02.111306 containerd[1535]: time="2024-11-12T17:41:02.111263683Z" level=info msg="CreateContainer within sandbox \"dc959da09be6efc02d5d3dc6319ba43c27a6e3af511ee2cee1533fa3f17b6f3b\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"e2fde299c583a0dd51d0bedf8df3f8f8d119adf7a1e5afdd8cf1484fe769f8c8\"" Nov 12 17:41:02.112192 containerd[1535]: time="2024-11-12T17:41:02.112153764Z" level=info msg="StartContainer for \"e2fde299c583a0dd51d0bedf8df3f8f8d119adf7a1e5afdd8cf1484fe769f8c8\"" Nov 12 17:41:02.153876 containerd[1535]: time="2024-11-12T17:41:02.153797857Z" level=info msg="StartContainer for \"e2fde299c583a0dd51d0bedf8df3f8f8d119adf7a1e5afdd8cf1484fe769f8c8\" returns successfully" Nov 12 17:41:02.167363 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e2fde299c583a0dd51d0bedf8df3f8f8d119adf7a1e5afdd8cf1484fe769f8c8-rootfs.mount: Deactivated successfully. 
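The recurring dns.go "Nameserver limits exceeded" warnings indicate the node's resolv.conf lists more nameservers than kubelet will pass through to pods; only the leading entries are applied (here 1.1.1.1, 1.0.0.1, 8.8.8.8). A minimal sketch of that trimming, assuming a limit of three (the count shown in the applied line; this is not kubelet source):

```python
# Minimal sketch of trimming a resolv.conf nameserver list the way the warning describes.
# The limit of 3 is an assumption taken from the three applied nameservers in the log.
MAX_NAMESERVERS = 3

def trim_nameservers(resolv_conf_text: str, limit: int = MAX_NAMESERVERS) -> list[str]:
    servers = [
        parts[1]
        for line in resolv_conf_text.splitlines()
        if (parts := line.split()) and parts[0] == "nameserver" and len(parts) > 1
    ]
    return servers[:limit]

conf = "nameserver 1.1.1.1\nnameserver 1.0.0.1\nnameserver 8.8.8.8\nnameserver 9.9.9.9\n"
print(trim_nameservers(conf))  # ['1.1.1.1', '1.0.0.1', '8.8.8.8'] -- the fourth entry is dropped
```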
Nov 12 17:41:02.171365 containerd[1535]: time="2024-11-12T17:41:02.171290440Z" level=info msg="shim disconnected" id=e2fde299c583a0dd51d0bedf8df3f8f8d119adf7a1e5afdd8cf1484fe769f8c8 namespace=k8s.io Nov 12 17:41:02.171365 containerd[1535]: time="2024-11-12T17:41:02.171363760Z" level=warning msg="cleaning up after shim disconnected" id=e2fde299c583a0dd51d0bedf8df3f8f8d119adf7a1e5afdd8cf1484fe769f8c8 namespace=k8s.io Nov 12 17:41:02.171365 containerd[1535]: time="2024-11-12T17:41:02.171372360Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 12 17:41:02.948948 kubelet[2708]: E1112 17:41:02.948907 2708 kubelet.go:2892] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Nov 12 17:41:03.103160 kubelet[2708]: E1112 17:41:03.103116 2708 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 17:41:03.105787 containerd[1535]: time="2024-11-12T17:41:03.105641309Z" level=info msg="CreateContainer within sandbox \"dc959da09be6efc02d5d3dc6319ba43c27a6e3af511ee2cee1533fa3f17b6f3b\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Nov 12 17:41:03.118720 containerd[1535]: time="2024-11-12T17:41:03.118658885Z" level=info msg="CreateContainer within sandbox \"dc959da09be6efc02d5d3dc6319ba43c27a6e3af511ee2cee1533fa3f17b6f3b\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"0c8b062781612ee57c3a3d519899f7508d2eabe3fde2f761041c80a3bda53628\"" Nov 12 17:41:03.119566 containerd[1535]: time="2024-11-12T17:41:03.119432846Z" level=info msg="StartContainer for \"0c8b062781612ee57c3a3d519899f7508d2eabe3fde2f761041c80a3bda53628\"" Nov 12 17:41:03.164961 containerd[1535]: time="2024-11-12T17:41:03.164909983Z" level=info msg="StartContainer for \"0c8b062781612ee57c3a3d519899f7508d2eabe3fde2f761041c80a3bda53628\" returns successfully" Nov 12 17:41:03.447734 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce)) Nov 12 17:41:03.885235 kubelet[2708]: E1112 17:41:03.884804 2708 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 17:41:04.108698 kubelet[2708]: E1112 17:41:04.108663 2708 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 17:41:04.123151 kubelet[2708]: I1112 17:41:04.122974 2708 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-q8sr9" podStartSLOduration=6.122938459 podStartE2EDuration="6.122938459s" podCreationTimestamp="2024-11-12 17:40:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-11-12 17:41:04.121830778 +0000 UTC m=+86.332794635" watchObservedRunningTime="2024-11-12 17:41:04.122938459 +0000 UTC m=+86.333902316" Nov 12 17:41:05.233075 kubelet[2708]: E1112 17:41:05.233033 2708 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 17:41:06.272294 systemd-networkd[1234]: lxc_health: Link UP Nov 12 17:41:06.284342 systemd-networkd[1234]: lxc_health: Gained carrier Nov 12 17:41:07.232202 
kubelet[2708]: E1112 17:41:07.232157 2708 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 17:41:07.402758 systemd-networkd[1234]: lxc_health: Gained IPv6LL Nov 12 17:41:08.119544 kubelet[2708]: E1112 17:41:08.119502 2708 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 17:41:09.121181 kubelet[2708]: E1112 17:41:09.121141 2708 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 17:41:10.884934 kubelet[2708]: E1112 17:41:10.884850 2708 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 17:41:11.844228 sshd[4537]: pam_unix(sshd:session): session closed for user core Nov 12 17:41:11.847597 systemd[1]: sshd@25-10.0.0.22:22-10.0.0.1:41770.service: Deactivated successfully. Nov 12 17:41:11.849753 systemd[1]: session-26.scope: Deactivated successfully. Nov 12 17:41:11.850309 systemd-logind[1515]: Session 26 logged out. Waiting for processes to exit. Nov 12 17:41:11.851202 systemd-logind[1515]: Removed session 26.
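The podStartSLOduration reported at 17:41:04 matches the gap between the pod's creation timestamp and the watchObservedRunningTime printed on the same line. A quick check of that arithmetic, using the values from the log (timestamps truncated to microseconds for Python's datetime):

```python
# Verifying podStartSLOduration=6.122938459s for cilium-q8sr9 from the timestamps
# reported on the same log line (creation 17:40:58, watch-observed running 17:41:04.122938459).
from datetime import datetime, timezone

created = datetime(2024, 11, 12, 17, 40, 58, tzinfo=timezone.utc)
watch_observed = datetime(2024, 11, 12, 17, 41, 4, 122938, tzinfo=timezone.utc)
print((watch_observed - created).total_seconds())  # ~6.122938, matching the reported SLO duration
```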