Dec 13 13:58:58.716117 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Dec 13 13:58:58.716136 kernel: Linux version 5.15.173-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP PREEMPT Fri Dec 13 12:58:58 -00 2024
Dec 13 13:58:58.716143 kernel: efi: EFI v2.70 by EDK II
Dec 13 13:58:58.716149 kernel: efi: SMBIOS 3.0=0xd9260000 ACPI 2.0=0xd9240000 MEMATTR=0xda32b018 RNG=0xd9220018 MEMRESERVE=0xd9521c18
Dec 13 13:58:58.716155 kernel: random: crng init done
Dec 13 13:58:58.716160 kernel: ACPI: Early table checksum verification disabled
Dec 13 13:58:58.716166 kernel: ACPI: RSDP 0x00000000D9240000 000024 (v02 BOCHS )
Dec 13 13:58:58.716173 kernel: ACPI: XSDT 0x00000000D9230000 000064 (v01 BOCHS BXPC 00000001 01000013)
Dec 13 13:58:58.716178 kernel: ACPI: FACP 0x00000000D91E0000 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 13:58:58.716183 kernel: ACPI: DSDT 0x00000000D91F0000 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 13:58:58.716189 kernel: ACPI: APIC 0x00000000D91D0000 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 13:58:58.716194 kernel: ACPI: PPTT 0x00000000D91C0000 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 13:58:58.716200 kernel: ACPI: GTDT 0x00000000D91B0000 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 13:58:58.716205 kernel: ACPI: MCFG 0x00000000D91A0000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 13:58:58.716213 kernel: ACPI: SPCR 0x00000000D9190000 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 13:58:58.716219 kernel: ACPI: DBG2 0x00000000D9180000 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 13:58:58.716225 kernel: ACPI: IORT 0x00000000D9170000 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 13:58:58.716230 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Dec 13 13:58:58.716236 kernel: NUMA: Failed to initialise from firmware
Dec 13 13:58:58.716242 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Dec 13 13:58:58.716248 kernel: NUMA: NODE_DATA [mem 0xdcb0a900-0xdcb0ffff]
Dec 13 13:58:58.716253 kernel: Zone ranges:
Dec 13 13:58:58.716259 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Dec 13 13:58:58.716266 kernel: DMA32 empty
Dec 13 13:58:58.716271 kernel: Normal empty
Dec 13 13:58:58.716277 kernel: Movable zone start for each node
Dec 13 13:58:58.716283 kernel: Early memory node ranges
Dec 13 13:58:58.716288 kernel: node 0: [mem 0x0000000040000000-0x00000000d924ffff]
Dec 13 13:58:58.716294 kernel: node 0: [mem 0x00000000d9250000-0x00000000d951ffff]
Dec 13 13:58:58.716300 kernel: node 0: [mem 0x00000000d9520000-0x00000000dc7fffff]
Dec 13 13:58:58.716305 kernel: node 0: [mem 0x00000000dc800000-0x00000000dc88ffff]
Dec 13 13:58:58.716311 kernel: node 0: [mem 0x00000000dc890000-0x00000000dc89ffff]
Dec 13 13:58:58.716317 kernel: node 0: [mem 0x00000000dc8a0000-0x00000000dc9bffff]
Dec 13 13:58:58.716323 kernel: node 0: [mem 0x00000000dc9c0000-0x00000000dcffffff]
Dec 13 13:58:58.716328 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Dec 13 13:58:58.716335 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Dec 13 13:58:58.716341 kernel: psci: probing for conduit method from ACPI.
Dec 13 13:58:58.716346 kernel: psci: PSCIv1.1 detected in firmware.
Dec 13 13:58:58.716352 kernel: psci: Using standard PSCI v0.2 function IDs
Dec 13 13:58:58.716358 kernel: psci: Trusted OS migration not required
Dec 13 13:58:58.716366 kernel: psci: SMC Calling Convention v1.1
Dec 13 13:58:58.716372 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Dec 13 13:58:58.716410 kernel: ACPI: SRAT not present
Dec 13 13:58:58.716416 kernel: percpu: Embedded 30 pages/cpu s83032 r8192 d31656 u122880
Dec 13 13:58:58.716422 kernel: pcpu-alloc: s83032 r8192 d31656 u122880 alloc=30*4096
Dec 13 13:58:58.716429 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Dec 13 13:58:58.716435 kernel: Detected PIPT I-cache on CPU0
Dec 13 13:58:58.716441 kernel: CPU features: detected: GIC system register CPU interface
Dec 13 13:58:58.716447 kernel: CPU features: detected: Hardware dirty bit management
Dec 13 13:58:58.716453 kernel: CPU features: detected: Spectre-v4
Dec 13 13:58:58.716459 kernel: CPU features: detected: Spectre-BHB
Dec 13 13:58:58.716466 kernel: CPU features: kernel page table isolation forced ON by KASLR
Dec 13 13:58:58.716472 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Dec 13 13:58:58.716478 kernel: CPU features: detected: ARM erratum 1418040
Dec 13 13:58:58.716484 kernel: CPU features: detected: SSBS not fully self-synchronizing
Dec 13 13:58:58.716490 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024
Dec 13 13:58:58.716496 kernel: Policy zone: DMA
Dec 13 13:58:58.716503 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=5997a8cf94b1df1856dc785f0a7074604bbf4c21fdcca24a1996021471a77601
Dec 13 13:58:58.716509 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Dec 13 13:58:58.716516 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Dec 13 13:58:58.716522 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Dec 13 13:58:58.716528 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Dec 13 13:58:58.716535 kernel: Memory: 2457400K/2572288K available (9792K kernel code, 2092K rwdata, 7576K rodata, 36416K init, 777K bss, 114888K reserved, 0K cma-reserved)
Dec 13 13:58:58.716542 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Dec 13 13:58:58.716548 kernel: trace event string verifier disabled
Dec 13 13:58:58.716554 kernel: rcu: Preemptible hierarchical RCU implementation.
Dec 13 13:58:58.716560 kernel: rcu: RCU event tracing is enabled.
Dec 13 13:58:58.716566 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Dec 13 13:58:58.716573 kernel: Trampoline variant of Tasks RCU enabled.
Dec 13 13:58:58.716579 kernel: Tracing variant of Tasks RCU enabled.
Dec 13 13:58:58.716589 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
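The kernel command line logged above is a flat list of space-separated tokens, each either a bare flag or a key=value pair, where values such as root=LABEL=ROOT may themselves contain '='. Below is a minimal Python sketch of that parsing rule, splitting only on the first '='; parse_cmdline is a hypothetical helper, and on a live system the same text is readable from /proc/cmdline.

    # Minimal sketch: parse a kernel command line like the one logged above
    # into a dict. Bare flags (no '=') map to True; only the first '=' splits,
    # so root=LABEL=ROOT keeps its full value.
    def parse_cmdline(cmdline: str) -> dict:
        params = {}
        for token in cmdline.split():
            key, sep, value = token.partition("=")
            params[key] = value if sep else True
        return params

    cmdline = ("BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr "
               "root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected")
    params = parse_cmdline(cmdline)
    print(params["root"])  # -> LABEL=ROOT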
Dec 13 13:58:58.716596 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Dec 13 13:58:58.716602 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Dec 13 13:58:58.716609 kernel: GICv3: 256 SPIs implemented
Dec 13 13:58:58.716618 kernel: GICv3: 0 Extended SPIs implemented
Dec 13 13:58:58.716624 kernel: GICv3: Distributor has no Range Selector support
Dec 13 13:58:58.716630 kernel: Root IRQ handler: gic_handle_irq
Dec 13 13:58:58.716637 kernel: GICv3: 16 PPIs implemented
Dec 13 13:58:58.716643 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Dec 13 13:58:58.716649 kernel: ACPI: SRAT not present
Dec 13 13:58:58.716655 kernel: ITS [mem 0x08080000-0x0809ffff]
Dec 13 13:58:58.716661 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400b0000 (indirect, esz 8, psz 64K, shr 1)
Dec 13 13:58:58.716667 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400c0000 (flat, esz 8, psz 64K, shr 1)
Dec 13 13:58:58.716674 kernel: GICv3: using LPI property table @0x00000000400d0000
Dec 13 13:58:58.716680 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000000400e0000
Dec 13 13:58:58.716687 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Dec 13 13:58:58.716694 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Dec 13 13:58:58.716700 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Dec 13 13:58:58.716706 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Dec 13 13:58:58.716712 kernel: arm-pv: using stolen time PV
Dec 13 13:58:58.716719 kernel: Console: colour dummy device 80x25
Dec 13 13:58:58.716725 kernel: ACPI: Core revision 20210730
Dec 13 13:58:58.716731 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Dec 13 13:58:58.716738 kernel: pid_max: default: 32768 minimum: 301
Dec 13 13:58:58.716744 kernel: LSM: Security Framework initializing
Dec 13 13:58:58.716751 kernel: SELinux: Initializing.
Dec 13 13:58:58.716758 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Dec 13 13:58:58.716764 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Dec 13 13:58:58.716770 kernel: rcu: Hierarchical SRCU implementation.
Dec 13 13:58:58.716776 kernel: Platform MSI: ITS@0x8080000 domain created
Dec 13 13:58:58.716783 kernel: PCI/MSI: ITS@0x8080000 domain created
Dec 13 13:58:58.716789 kernel: Remapping and enabling EFI services.
Dec 13 13:58:58.716795 kernel: smp: Bringing up secondary CPUs ...
Dec 13 13:58:58.716801 kernel: Detected PIPT I-cache on CPU1
Dec 13 13:58:58.716809 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Dec 13 13:58:58.716816 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000000400f0000
Dec 13 13:58:58.716823 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Dec 13 13:58:58.716829 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Dec 13 13:58:58.716836 kernel: Detected PIPT I-cache on CPU2
Dec 13 13:58:58.716842 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Dec 13 13:58:58.716849 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040100000
Dec 13 13:58:58.716855 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Dec 13 13:58:58.716862 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Dec 13 13:58:58.716868 kernel: Detected PIPT I-cache on CPU3
Dec 13 13:58:58.716875 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Dec 13 13:58:58.716882 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040110000
Dec 13 13:58:58.716888 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Dec 13 13:58:58.716895 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Dec 13 13:58:58.716905 kernel: smp: Brought up 1 node, 4 CPUs
Dec 13 13:58:58.716913 kernel: SMP: Total of 4 processors activated.
Dec 13 13:58:58.716920 kernel: CPU features: detected: 32-bit EL0 Support
Dec 13 13:58:58.716927 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Dec 13 13:58:58.716933 kernel: CPU features: detected: Common not Private translations
Dec 13 13:58:58.716940 kernel: CPU features: detected: CRC32 instructions
Dec 13 13:58:58.716947 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Dec 13 13:58:58.716954 kernel: CPU features: detected: LSE atomic instructions
Dec 13 13:58:58.716962 kernel: CPU features: detected: Privileged Access Never
Dec 13 13:58:58.716974 kernel: CPU features: detected: RAS Extension Support
Dec 13 13:58:58.716981 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Dec 13 13:58:58.716988 kernel: CPU: All CPU(s) started at EL1
Dec 13 13:58:58.716994 kernel: alternatives: patching kernel code
Dec 13 13:58:58.717002 kernel: devtmpfs: initialized
Dec 13 13:58:58.717009 kernel: KASLR enabled
Dec 13 13:58:58.717016 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Dec 13 13:58:58.717023 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Dec 13 13:58:58.717029 kernel: pinctrl core: initialized pinctrl subsystem
Dec 13 13:58:58.717036 kernel: SMBIOS 3.0.0 present.
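Each "Booted secondary processor" line above ends with the CPU's MIDR_EL1 value, 0x413fd0c1. The sketch below decodes its fields using the architectural MIDR layout (implementer in bits [31:24], variant [23:20], architecture [19:16], part number [15:4], revision [3:0]); decode_midr is a hypothetical helper, not anything the kernel exposes by that name.

    # Sketch: decode the MIDR_EL1 value the kernel prints for each CPU.
    def decode_midr(midr: int) -> dict:
        return {
            "implementer":  (midr >> 24) & 0xFF,
            "variant":      (midr >> 20) & 0xF,
            "architecture": (midr >> 16) & 0xF,
            "partnum":      (midr >> 4)  & 0xFFF,
            "revision":     midr & 0xF,
        }

    fields = decode_midr(0x413FD0C1)
    print(hex(fields["implementer"]), hex(fields["partnum"]),
          f"r{fields['variant']}p{fields['revision']}")
    # -> 0x41 0xd0c r3p1, i.e. an Arm Ltd (0x41) Neoverse N1 (0xd0c) at r3p1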
Dec 13 13:58:58.717043 kernel: DMI: QEMU KVM Virtual Machine, BIOS 0.0.0 02/06/2015
Dec 13 13:58:58.717049 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Dec 13 13:58:58.717056 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Dec 13 13:58:58.717064 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Dec 13 13:58:58.717071 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Dec 13 13:58:58.717078 kernel: audit: initializing netlink subsys (disabled)
Dec 13 13:58:58.717084 kernel: audit: type=2000 audit(0.033:1): state=initialized audit_enabled=0 res=1
Dec 13 13:58:58.717091 kernel: thermal_sys: Registered thermal governor 'step_wise'
Dec 13 13:58:58.717097 kernel: cpuidle: using governor menu
Dec 13 13:58:58.717104 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Dec 13 13:58:58.717111 kernel: ASID allocator initialised with 32768 entries
Dec 13 13:58:58.717117 kernel: ACPI: bus type PCI registered
Dec 13 13:58:58.717125 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Dec 13 13:58:58.717132 kernel: Serial: AMBA PL011 UART driver
Dec 13 13:58:58.717139 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Dec 13 13:58:58.717145 kernel: HugeTLB registered 32.0 MiB page size, pre-allocated 0 pages
Dec 13 13:58:58.717152 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Dec 13 13:58:58.717159 kernel: HugeTLB registered 64.0 KiB page size, pre-allocated 0 pages
Dec 13 13:58:58.717166 kernel: cryptd: max_cpu_qlen set to 1000
Dec 13 13:58:58.717172 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Dec 13 13:58:58.717179 kernel: ACPI: Added _OSI(Module Device)
Dec 13 13:58:58.717187 kernel: ACPI: Added _OSI(Processor Device)
Dec 13 13:58:58.717194 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Dec 13 13:58:58.717200 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Dec 13 13:58:58.717207 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Dec 13 13:58:58.717214 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Dec 13 13:58:58.717220 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Dec 13 13:58:58.717227 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Dec 13 13:58:58.717234 kernel: ACPI: Interpreter enabled
Dec 13 13:58:58.717241 kernel: ACPI: Using GIC for interrupt routing
Dec 13 13:58:58.717249 kernel: ACPI: MCFG table detected, 1 entries
Dec 13 13:58:58.717255 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Dec 13 13:58:58.717262 kernel: printk: console [ttyAMA0] enabled
Dec 13 13:58:58.717269 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Dec 13 13:58:58.717400 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Dec 13 13:58:58.717468 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Dec 13 13:58:58.717528 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Dec 13 13:58:58.717589 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Dec 13 13:58:58.717648 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Dec 13 13:58:58.717657 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Dec 13 13:58:58.717664 kernel: PCI host bridge to bus 0000:00
Dec 13 13:58:58.717735 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Dec 13 13:58:58.717790 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Dec 13 13:58:58.717844 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Dec 13 13:58:58.717897 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Dec 13 13:58:58.717980 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Dec 13 13:58:58.718055 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
Dec 13 13:58:58.718118 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
Dec 13 13:58:58.718179 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
Dec 13 13:58:58.718240 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Dec 13 13:58:58.718302 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Dec 13 13:58:58.718401 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
Dec 13 13:58:58.718469 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
Dec 13 13:58:58.718524 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Dec 13 13:58:58.718599 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Dec 13 13:58:58.718657 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Dec 13 13:58:58.718666 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Dec 13 13:58:58.718673 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Dec 13 13:58:58.718679 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Dec 13 13:58:58.718689 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Dec 13 13:58:58.718695 kernel: iommu: Default domain type: Translated
Dec 13 13:58:58.718702 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Dec 13 13:58:58.718709 kernel: vgaarb: loaded
Dec 13 13:58:58.718715 kernel: pps_core: LinuxPPS API ver. 1 registered
Dec 13 13:58:58.718722 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>
Dec 13 13:58:58.718729 kernel: PTP clock support registered
Dec 13 13:58:58.718735 kernel: Registered efivars operations
Dec 13 13:58:58.718742 kernel: clocksource: Switched to clocksource arch_sys_counter
Dec 13 13:58:58.718750 kernel: VFS: Disk quotas dquot_6.6.0
Dec 13 13:58:58.718757 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Dec 13 13:58:58.718763 kernel: pnp: PnP ACPI init
Dec 13 13:58:58.718826 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Dec 13 13:58:58.718836 kernel: pnp: PnP ACPI: found 1 devices
Dec 13 13:58:58.718842 kernel: NET: Registered PF_INET protocol family
Dec 13 13:58:58.718849 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Dec 13 13:58:58.718856 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Dec 13 13:58:58.718864 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Dec 13 13:58:58.718871 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Dec 13 13:58:58.718878 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear)
Dec 13 13:58:58.718885 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Dec 13 13:58:58.718891 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Dec 13 13:58:58.718898 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Dec 13 13:58:58.718904 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Dec 13 13:58:58.718911 kernel: PCI: CLS 0 bytes, default 64
Dec 13 13:58:58.718918 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
Dec 13 13:58:58.718926 kernel: kvm [1]: HYP mode not available
Dec 13 13:58:58.718932 kernel: Initialise system trusted keyrings
Dec 13 13:58:58.718939 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Dec 13 13:58:58.718945 kernel: Key type asymmetric registered
Dec 13 13:58:58.718952 kernel: Asymmetric key parser 'x509' registered
Dec 13 13:58:58.718958 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Dec 13 13:58:58.718965 kernel: io scheduler mq-deadline registered
Dec 13 13:58:58.718978 kernel: io scheduler kyber registered
Dec 13 13:58:58.718985 kernel: io scheduler bfq registered
Dec 13 13:58:58.718993 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Dec 13 13:58:58.719000 kernel: ACPI: button: Power Button [PWRB]
Dec 13 13:58:58.719007 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Dec 13 13:58:58.719072 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Dec 13 13:58:58.719081 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Dec 13 13:58:58.719088 kernel: thunder_xcv, ver 1.0
Dec 13 13:58:58.719094 kernel: thunder_bgx, ver 1.0
Dec 13 13:58:58.719101 kernel: nicpf, ver 1.0
Dec 13 13:58:58.719108 kernel: nicvf, ver 1.0
Dec 13 13:58:58.719179 kernel: rtc-efi rtc-efi.0: registered as rtc0
Dec 13 13:58:58.719234 kernel: rtc-efi rtc-efi.0: setting system clock to 2024-12-13T13:58:58 UTC (1734098338)
Dec 13 13:58:58.719243 kernel: hid: raw HID events driver (C) Jiri Kosina
Dec 13 13:58:58.719250 kernel: NET: Registered PF_INET6 protocol family
Dec 13 13:58:58.719256 kernel: Segment Routing with IPv6
Dec 13 13:58:58.719263 kernel: In-situ OAM (IOAM) with IPv6
Dec 13 13:58:58.719270 kernel: NET: Registered PF_PACKET protocol family
Dec 13 13:58:58.719276 kernel: Key type dns_resolver registered
Dec 13 13:58:58.719284 kernel: registered taskstats version 1
Dec 13 13:58:58.719291 kernel: Loading compiled-in X.509 certificates
Dec 13 13:58:58.719298 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.173-flatcar: e011ba9949ade5a6d03f7a5e28171f7f59e70f8a'
Dec 13 13:58:58.719304 kernel: Key type .fscrypt registered
Dec 13 13:58:58.719310 kernel: Key type fscrypt-provisioning registered
Dec 13 13:58:58.719317 kernel: ima: No TPM chip found, activating TPM-bypass!
Dec 13 13:58:58.719324 kernel: ima: Allocated hash algorithm: sha1
Dec 13 13:58:58.719330 kernel: ima: No architecture policies found
Dec 13 13:58:58.719337 kernel: clk: Disabling unused clocks
Dec 13 13:58:58.719345 kernel: Freeing unused kernel memory: 36416K
Dec 13 13:58:58.719351 kernel: Run /init as init process
Dec 13 13:58:58.719358 kernel: with arguments:
Dec 13 13:58:58.719364 kernel: /init
Dec 13 13:58:58.719370 kernel: with environment:
Dec 13 13:58:58.719389 kernel: HOME=/
Dec 13 13:58:58.719396 kernel: TERM=linux
Dec 13 13:58:58.719403 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Dec 13 13:58:58.719411 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Dec 13 13:58:58.719421 systemd[1]: Detected virtualization kvm.
Dec 13 13:58:58.719429 systemd[1]: Detected architecture arm64.
Dec 13 13:58:58.719436 systemd[1]: Running in initrd.
Dec 13 13:58:58.719443 systemd[1]: No hostname configured, using default hostname.
Dec 13 13:58:58.719449 systemd[1]: Hostname set to <localhost>.
Dec 13 13:58:58.719457 systemd[1]: Initializing machine ID from VM UUID.
Dec 13 13:58:58.719464 systemd[1]: Queued start job for default target initrd.target.
Dec 13 13:58:58.719472 systemd[1]: Started systemd-ask-password-console.path.
Dec 13 13:58:58.719478 systemd[1]: Reached target cryptsetup.target.
Dec 13 13:58:58.719485 systemd[1]: Reached target paths.target.
Dec 13 13:58:58.719492 systemd[1]: Reached target slices.target.
Dec 13 13:58:58.719499 systemd[1]: Reached target swap.target.
Dec 13 13:58:58.719506 systemd[1]: Reached target timers.target.
Dec 13 13:58:58.719513 systemd[1]: Listening on iscsid.socket.
Dec 13 13:58:58.719521 systemd[1]: Listening on iscsiuio.socket.
Dec 13 13:58:58.719528 systemd[1]: Listening on systemd-journald-audit.socket.
Dec 13 13:58:58.719535 systemd[1]: Listening on systemd-journald-dev-log.socket.
Dec 13 13:58:58.719542 systemd[1]: Listening on systemd-journald.socket.
Dec 13 13:58:58.719549 systemd[1]: Listening on systemd-networkd.socket.
Dec 13 13:58:58.719556 systemd[1]: Listening on systemd-udevd-control.socket.
Dec 13 13:58:58.719563 systemd[1]: Listening on systemd-udevd-kernel.socket.
Dec 13 13:58:58.719570 systemd[1]: Reached target sockets.target.
Dec 13 13:58:58.719577 systemd[1]: Starting kmod-static-nodes.service...
Dec 13 13:58:58.719585 systemd[1]: Finished network-cleanup.service.
Dec 13 13:58:58.719592 systemd[1]: Starting systemd-fsck-usr.service...
Dec 13 13:58:58.719599 systemd[1]: Starting systemd-journald.service...
Dec 13 13:58:58.719606 systemd[1]: Starting systemd-modules-load.service...
Dec 13 13:58:58.719613 systemd[1]: Starting systemd-resolved.service...
Dec 13 13:58:58.719620 systemd[1]: Starting systemd-vconsole-setup.service...
Dec 13 13:58:58.719627 systemd[1]: Finished kmod-static-nodes.service.
Dec 13 13:58:58.719634 systemd[1]: Finished systemd-fsck-usr.service.
Dec 13 13:58:58.719641 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Dec 13 13:58:58.719651 systemd-journald[289]: Journal started
Dec 13 13:58:58.719692 systemd-journald[289]: Runtime Journal (/run/log/journal/3b41f92285ae4f1ba91925fb30cb3694) is 6.0M, max 48.7M, 42.6M free.
Dec 13 13:58:58.714739 systemd-modules-load[290]: Inserted module 'overlay'
Dec 13 13:58:58.721890 systemd[1]: Started systemd-journald.service.
Dec 13 13:58:58.721000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 13:58:58.722365 systemd[1]: Finished systemd-vconsole-setup.service.
Dec 13 13:58:58.730142 kernel: audit: type=1130 audit(1734098338.721:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 13:58:58.730161 kernel: audit: type=1130 audit(1734098338.725:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 13:58:58.725000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 13:58:58.727916 systemd[1]: Starting dracut-cmdline-ask.service...
Dec 13 13:58:58.737912 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Dec 13 13:58:58.738766 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Dec 13 13:58:58.739000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 13:58:58.742404 kernel: audit: type=1130 audit(1734098338.739:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 13:58:58.742433 kernel: Bridge firewalling registered
Dec 13 13:58:58.742860 systemd-modules-load[290]: Inserted module 'br_netfilter'
Dec 13 13:58:58.743911 systemd-resolved[291]: Positive Trust Anchors:
Dec 13 13:58:58.743923 systemd-resolved[291]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Dec 13 13:58:58.743949 systemd-resolved[291]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Dec 13 13:58:58.748097 systemd-resolved[291]: Defaulting to hostname 'linux'.
Dec 13 13:58:58.752000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 13:58:58.755413 kernel: audit: type=1130 audit(1734098338.752:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 13:58:58.748862 systemd[1]: Started systemd-resolved.service.
Dec 13 13:58:58.752517 systemd[1]: Reached target nss-lookup.target.
Dec 13 13:58:58.757850 kernel: SCSI subsystem initialized
Dec 13 13:58:58.757310 systemd[1]: Finished dracut-cmdline-ask.service.
Dec 13 13:58:58.761443 kernel: audit: type=1130 audit(1734098338.758:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 13:58:58.758000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 13:58:58.761428 systemd[1]: Starting dracut-cmdline.service...
Dec 13 13:58:58.764894 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Dec 13 13:58:58.764923 kernel: device-mapper: uevent: version 1.0.3
Dec 13 13:58:58.766196 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com
Dec 13 13:58:58.770017 systemd-modules-load[290]: Inserted module 'dm_multipath'
Dec 13 13:58:58.770800 systemd[1]: Finished systemd-modules-load.service.
Dec 13 13:58:58.771000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 13:58:58.774653 dracut-cmdline[306]: dracut-dracut-053
Dec 13 13:58:58.774653 dracut-cmdline[306]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=5997a8cf94b1df1856dc785f0a7074604bbf4c21fdcca24a1996021471a77601
Dec 13 13:58:58.779872 kernel: audit: type=1130 audit(1734098338.771:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 13:58:58.772363 systemd[1]: Starting systemd-sysctl.service...
Dec 13 13:58:58.781403 systemd[1]: Finished systemd-sysctl.service.
Dec 13 13:58:58.781000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 13:58:58.785398 kernel: audit: type=1130 audit(1734098338.781:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 13:58:58.833401 kernel: Loading iSCSI transport class v2.0-870.
Dec 13 13:58:58.846397 kernel: iscsi: registered transport (tcp)
Dec 13 13:58:58.861451 kernel: iscsi: registered transport (qla4xxx)
Dec 13 13:58:58.861487 kernel: QLogic iSCSI HBA Driver
Dec 13 13:58:58.895182 systemd[1]: Finished dracut-cmdline.service.
Dec 13 13:58:58.895000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 13:58:58.896824 systemd[1]: Starting dracut-pre-udev.service...
Dec 13 13:58:58.899978 kernel: audit: type=1130 audit(1734098338.895:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 13:58:58.940405 kernel: raid6: neonx8 gen() 13719 MB/s
Dec 13 13:58:58.957389 kernel: raid6: neonx8 xor() 10637 MB/s
Dec 13 13:58:58.974402 kernel: raid6: neonx4 gen() 13494 MB/s
Dec 13 13:58:58.991401 kernel: raid6: neonx4 xor() 11159 MB/s
Dec 13 13:58:59.008402 kernel: raid6: neonx2 gen() 12999 MB/s
Dec 13 13:58:59.025395 kernel: raid6: neonx2 xor() 10363 MB/s
Dec 13 13:58:59.042397 kernel: raid6: neonx1 gen() 10442 MB/s
Dec 13 13:58:59.059402 kernel: raid6: neonx1 xor() 8697 MB/s
Dec 13 13:58:59.076401 kernel: raid6: int64x8 gen() 6196 MB/s
Dec 13 13:58:59.093400 kernel: raid6: int64x8 xor() 3480 MB/s
Dec 13 13:58:59.110399 kernel: raid6: int64x4 gen() 6997 MB/s
Dec 13 13:58:59.127462 kernel: raid6: int64x4 xor() 3840 MB/s
Dec 13 13:58:59.144406 kernel: raid6: int64x2 gen() 6070 MB/s
Dec 13 13:58:59.161393 kernel: raid6: int64x2 xor() 3305 MB/s
Dec 13 13:58:59.178396 kernel: raid6: int64x1 gen() 5020 MB/s
Dec 13 13:58:59.195439 kernel: raid6: int64x1 xor() 2635 MB/s
Dec 13 13:58:59.195451 kernel: raid6: using algorithm neonx8 gen() 13719 MB/s
Dec 13 13:58:59.195460 kernel: raid6: .... xor() 10637 MB/s, rmw enabled
Dec 13 13:58:59.196480 kernel: raid6: using neon recovery algorithm
Dec 13 13:58:59.206396 kernel: xor: measuring software checksum speed
Dec 13 13:58:59.207597 kernel: 8regs : 15044 MB/sec
Dec 13 13:58:59.207614 kernel: 32regs : 19860 MB/sec
Dec 13 13:58:59.208806 kernel: arm64_neon : 27738 MB/sec
Dec 13 13:58:59.208827 kernel: xor: using function: arm64_neon (27738 MB/sec)
Dec 13 13:58:59.262408 kernel: Btrfs loaded, crc32c=crc32c-generic, zoned=no, fsverity=no
Dec 13 13:58:59.272605 systemd[1]: Finished dracut-pre-udev.service.
Dec 13 13:58:59.272000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 13:58:59.275000 audit: BPF prog-id=7 op=LOAD
Dec 13 13:58:59.275000 audit: BPF prog-id=8 op=LOAD
Dec 13 13:58:59.276412 kernel: audit: type=1130 audit(1734098339.272:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 13:58:59.276672 systemd[1]: Starting systemd-udevd.service...
Dec 13 13:58:59.295491 systemd-udevd[489]: Using default interface naming scheme 'v252'.
Dec 13 13:58:59.298757 systemd[1]: Started systemd-udevd.service.
Dec 13 13:58:59.299000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 13:58:59.300316 systemd[1]: Starting dracut-pre-trigger.service...
Dec 13 13:58:59.311916 dracut-pre-trigger[496]: rd.md=0: removing MD RAID activation
Dec 13 13:58:59.338020 systemd[1]: Finished dracut-pre-trigger.service.
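The raid6 lines above are the kernel benchmarking every available gen()/xor() implementation and keeping the fastest one, which is why "using algorithm neonx8" repeats the best gen() figure. A few lines of Python sketch the same selection rule, with the throughput numbers copied from the log:

    # Sketch mirroring the kernel's raid6 algorithm selection logged above:
    # each candidate's gen() throughput (MB/s) is compared and the fastest wins.
    gen_results = {
        "neonx8": 13719, "neonx4": 13494, "neonx2": 12999, "neonx1": 10442,
        "int64x8": 6196, "int64x4": 6997, "int64x2": 6070, "int64x1": 5020,
    }
    best = max(gen_results, key=gen_results.get)
    print(f"raid6: using algorithm {best} gen() {gen_results[best]} MB/s")
    # -> raid6: using algorithm neonx8 gen() 13719 MB/s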
Dec 13 13:58:59.338000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 13:58:59.339618 systemd[1]: Starting systemd-udev-trigger.service...
Dec 13 13:58:59.372228 systemd[1]: Finished systemd-udev-trigger.service.
Dec 13 13:58:59.372000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 13:58:59.404034 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Dec 13 13:58:59.409201 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Dec 13 13:58:59.409216 kernel: GPT:9289727 != 19775487
Dec 13 13:58:59.409230 kernel: GPT:Alternate GPT header not at the end of the disk.
Dec 13 13:58:59.409238 kernel: GPT:9289727 != 19775487
Dec 13 13:58:59.409246 kernel: GPT: Use GNU Parted to correct GPT errors.
Dec 13 13:58:59.409255 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Dec 13 13:58:59.421922 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device.
Dec 13 13:58:59.425129 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device.
Dec 13 13:58:59.426220 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device.
Dec 13 13:58:59.430399 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (555)
Dec 13 13:58:59.435317 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device.
Dec 13 13:58:59.441934 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Dec 13 13:58:59.443573 systemd[1]: Starting disk-uuid.service...
Dec 13 13:58:59.449602 disk-uuid[562]: Primary Header is updated.
Dec 13 13:58:59.449602 disk-uuid[562]: Secondary Entries is updated.
Dec 13 13:58:59.449602 disk-uuid[562]: Secondary Header is updated.
Dec 13 13:58:59.453401 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Dec 13 13:59:00.462397 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Dec 13 13:59:00.462597 disk-uuid[563]: The operation has completed successfully.
Dec 13 13:59:00.487792 systemd[1]: disk-uuid.service: Deactivated successfully.
Dec 13 13:59:00.488000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 13:59:00.488000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 13:59:00.487890 systemd[1]: Finished disk-uuid.service.
Dec 13 13:59:00.494550 systemd[1]: Starting verity-setup.service...
Dec 13 13:59:00.508395 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Dec 13 13:59:00.531627 systemd[1]: Found device dev-mapper-usr.device.
Dec 13 13:59:00.533198 systemd[1]: Mounting sysusr-usr.mount...
Dec 13 13:59:00.534053 systemd[1]: Finished verity-setup.service.
Dec 13 13:59:00.534000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 13:59:00.585166 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none.
Dec 13 13:59:00.583617 systemd[1]: Mounted sysusr-usr.mount.
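The GPT warnings above are a size-consistency check: the primary header records where the backup (alternate) header lives, and that must be the disk's last LBA. Here the backup sits at LBA 9289727, left over from the original smaller image, while the 19775488-sector virtual disk ends at LBA 19775487, hence "9289727 != 19775487". A sketch of the arithmetic behind the message, with both figures taken from the log; tools such as GNU Parted (which the kernel itself suggests) or sgdisk can relocate the backup header to the true end of the disk.

    # Sketch of the consistency check behind the GPT warnings above.
    sector_size = 512
    disk_sectors = 19775488        # from "virtio1: [vda] 19775488 512-byte logical blocks"
    recorded_backup_lba = 9289727  # backup-header LBA stored in the primary header

    expected_backup_lba = disk_sectors - 1
    if recorded_backup_lba != expected_backup_lba:
        print(f"GPT:{recorded_backup_lba} != {expected_backup_lba}: "
              "backup header is not at the end of the disk "
              "(image written to a larger disk?)")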
Dec 13 13:59:00.584488 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met.
Dec 13 13:59:00.587441 systemd[1]: Starting ignition-setup.service...
Dec 13 13:59:00.590100 systemd[1]: Starting parse-ip-for-networkd.service...
Dec 13 13:59:00.597889 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Dec 13 13:59:00.597935 kernel: BTRFS info (device vda6): using free space tree
Dec 13 13:59:00.597945 kernel: BTRFS info (device vda6): has skinny extents
Dec 13 13:59:00.609288 systemd[1]: mnt-oem.mount: Deactivated successfully.
Dec 13 13:59:00.640922 systemd[1]: Finished ignition-setup.service.
Dec 13 13:59:00.641000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 13:59:00.642596 systemd[1]: Starting ignition-fetch-offline.service...
Dec 13 13:59:00.677039 systemd[1]: Finished parse-ip-for-networkd.service.
Dec 13 13:59:00.677000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 13:59:00.678000 audit: BPF prog-id=9 op=LOAD
Dec 13 13:59:00.679440 systemd[1]: Starting systemd-networkd.service...
Dec 13 13:59:00.711123 systemd-networkd[734]: lo: Link UP
Dec 13 13:59:00.711136 systemd-networkd[734]: lo: Gained carrier
Dec 13 13:59:00.712000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 13:59:00.711533 systemd-networkd[734]: Enumeration completed
Dec 13 13:59:00.711675 systemd[1]: Started systemd-networkd.service.
Dec 13 13:59:00.711725 systemd-networkd[734]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Dec 13 13:59:00.713017 systemd[1]: Reached target network.target.
Dec 13 13:59:00.713673 systemd-networkd[734]: eth0: Link UP
Dec 13 13:59:00.713677 systemd-networkd[734]: eth0: Gained carrier
Dec 13 13:59:00.715592 systemd[1]: Starting iscsiuio.service...
Dec 13 13:59:00.728369 systemd[1]: Started iscsiuio.service.
Dec 13 13:59:00.728000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 13:59:00.730025 systemd[1]: Starting iscsid.service...
Dec 13 13:59:00.733780 iscsid[744]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi
Dec 13 13:59:00.733780 iscsid[744]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier].
Dec 13 13:59:00.733780 iscsid[744]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6.
Dec 13 13:59:00.733780 iscsid[744]: If using hardware iscsi like qla4xxx this message can be ignored.
Dec 13 13:59:00.733780 iscsid[744]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi
Dec 13 13:59:00.733780 iscsid[744]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf
Dec 13 13:59:00.741000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 13:59:00.736215 systemd[1]: Started iscsid.service.
Dec 13 13:59:00.737470 systemd-networkd[734]: eth0: DHCPv4 address 10.0.0.38/16, gateway 10.0.0.1 acquired from 10.0.0.1
Dec 13 13:59:00.742721 systemd[1]: Starting dracut-initqueue.service...
Dec 13 13:59:00.754982 systemd[1]: Finished dracut-initqueue.service.
Dec 13 13:59:00.755000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 13:59:00.756122 systemd[1]: Reached target remote-fs-pre.target.
Dec 13 13:59:00.757565 systemd[1]: Reached target remote-cryptsetup.target.
Dec 13 13:59:00.759209 systemd[1]: Reached target remote-fs.target.
Dec 13 13:59:00.761667 systemd[1]: Starting dracut-pre-mount.service...
Dec 13 13:59:00.766018 ignition[702]: Ignition 2.14.0
Dec 13 13:59:00.766027 ignition[702]: Stage: fetch-offline
Dec 13 13:59:00.766065 ignition[702]: no configs at "/usr/lib/ignition/base.d"
Dec 13 13:59:00.766076 ignition[702]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Dec 13 13:59:00.766204 ignition[702]: parsed url from cmdline: ""
Dec 13 13:59:00.769428 systemd[1]: Finished dracut-pre-mount.service.
Dec 13 13:59:00.770000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 13:59:00.766207 ignition[702]: no config URL provided
Dec 13 13:59:00.766212 ignition[702]: reading system config file "/usr/lib/ignition/user.ign"
Dec 13 13:59:00.766219 ignition[702]: no config at "/usr/lib/ignition/user.ign"
Dec 13 13:59:00.766236 ignition[702]: op(1): [started] loading QEMU firmware config module
Dec 13 13:59:00.766240 ignition[702]: op(1): executing: "modprobe" "qemu_fw_cfg"
Dec 13 13:59:00.773823 ignition[702]: op(1): [finished] loading QEMU firmware config module
Dec 13 13:59:00.812266 ignition[702]: parsing config with SHA512: 6e1d4a4d2c7d0d83e95cae401f45a0959b2350273dc583652a7b8b89a2f71b18aab4682950456418b8d6ca5ab2b7b051ed268054c0bb32a815757a0ca6c86154
Dec 13 13:59:00.823431 unknown[702]: fetched base config from "system"
Dec 13 13:59:00.824002 ignition[702]: fetch-offline: fetch-offline passed
Dec 13 13:59:00.825000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 13:59:00.823440 unknown[702]: fetched user config from "qemu"
Dec 13 13:59:00.824062 ignition[702]: Ignition finished successfully
Dec 13 13:59:00.824956 systemd[1]: Finished ignition-fetch-offline.service.
Dec 13 13:59:00.825903 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Dec 13 13:59:00.826649 systemd[1]: Starting ignition-kargs.service...
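The iscsid warning earlier spells out the expected initiator-name format, InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier]. A sketch that assembles such a name follows; make_initiator_name, the domain, and the identifier are illustrative placeholders, not values from this system.

    # Sketch: build an initiator name in the iqn format iscsid asks for above.
    from datetime import date

    def make_initiator_name(domain: str, identifier: str) -> str:
        reversed_domain = ".".join(reversed(domain.split(".")))
        today = date.today()
        return f"iqn.{today.year:04d}-{today.month:02d}.{reversed_domain}:{identifier}"

    line = "InitiatorName=" + make_initiator_name("example.com", "node1")
    print(line)  # e.g. InitiatorName=iqn.2024-12.com.example:node1
    # On a real host this line would be written to /etc/iscsi/initiatorname.iscsi.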
Dec 13 13:59:00.834477 ignition[760]: Ignition 2.14.0
Dec 13 13:59:00.834487 ignition[760]: Stage: kargs
Dec 13 13:59:00.834572 ignition[760]: no configs at "/usr/lib/ignition/base.d"
Dec 13 13:59:00.834583 ignition[760]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Dec 13 13:59:00.836725 systemd[1]: Finished ignition-kargs.service.
Dec 13 13:59:00.837000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 13:59:00.835506 ignition[760]: kargs: kargs passed
Dec 13 13:59:00.835546 ignition[760]: Ignition finished successfully
Dec 13 13:59:00.839094 systemd[1]: Starting ignition-disks.service...
Dec 13 13:59:00.845494 ignition[766]: Ignition 2.14.0
Dec 13 13:59:00.845504 ignition[766]: Stage: disks
Dec 13 13:59:00.845587 ignition[766]: no configs at "/usr/lib/ignition/base.d"
Dec 13 13:59:00.847511 systemd[1]: Finished ignition-disks.service.
Dec 13 13:59:00.848000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 13:59:00.845596 ignition[766]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Dec 13 13:59:00.849079 systemd[1]: Reached target initrd-root-device.target.
Dec 13 13:59:00.846446 ignition[766]: disks: disks passed
Dec 13 13:59:00.850422 systemd[1]: Reached target local-fs-pre.target.
Dec 13 13:59:00.846487 ignition[766]: Ignition finished successfully
Dec 13 13:59:00.852087 systemd[1]: Reached target local-fs.target.
Dec 13 13:59:00.853483 systemd[1]: Reached target sysinit.target.
Dec 13 13:59:00.854649 systemd[1]: Reached target basic.target.
Dec 13 13:59:00.857009 systemd[1]: Starting systemd-fsck-root.service...
Dec 13 13:59:00.868956 systemd-fsck[774]: ROOT: clean, 621/553520 files, 56020/553472 blocks
Dec 13 13:59:00.873565 systemd[1]: Finished systemd-fsck-root.service.
Dec 13 13:59:00.874000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 13:59:00.875130 systemd[1]: Mounting sysroot.mount...
Dec 13 13:59:00.881394 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none.
Dec 13 13:59:00.881639 systemd[1]: Mounted sysroot.mount.
Dec 13 13:59:00.882357 systemd[1]: Reached target initrd-root-fs.target.
Dec 13 13:59:00.884847 systemd[1]: Mounting sysroot-usr.mount...
Dec 13 13:59:00.885750 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met.
Dec 13 13:59:00.885783 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Dec 13 13:59:00.885805 systemd[1]: Reached target ignition-diskful.target.
Dec 13 13:59:00.887569 systemd[1]: Mounted sysroot-usr.mount.
Dec 13 13:59:00.889393 systemd[1]: Starting initrd-setup-root.service...
Dec 13 13:59:00.893848 initrd-setup-root[784]: cut: /sysroot/etc/passwd: No such file or directory
Dec 13 13:59:00.899151 initrd-setup-root[792]: cut: /sysroot/etc/group: No such file or directory
Dec 13 13:59:00.903720 initrd-setup-root[800]: cut: /sysroot/etc/shadow: No such file or directory
Dec 13 13:59:00.909109 initrd-setup-root[808]: cut: /sysroot/etc/gshadow: No such file or directory
Dec 13 13:59:00.943587 systemd[1]: Finished initrd-setup-root.service.
Dec 13 13:59:00.944000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 13:59:00.945230 systemd[1]: Starting ignition-mount.service...
Dec 13 13:59:00.946540 systemd[1]: Starting sysroot-boot.service...
Dec 13 13:59:00.950940 bash[825]: umount: /sysroot/usr/share/oem: not mounted.
Dec 13 13:59:00.960548 ignition[826]: INFO : Ignition 2.14.0
Dec 13 13:59:00.960548 ignition[826]: INFO : Stage: mount
Dec 13 13:59:00.962357 ignition[826]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 13 13:59:00.962357 ignition[826]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Dec 13 13:59:00.962357 ignition[826]: INFO : mount: mount passed
Dec 13 13:59:00.962357 ignition[826]: INFO : Ignition finished successfully
Dec 13 13:59:00.963000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 13:59:00.962680 systemd[1]: Finished ignition-mount.service.
Dec 13 13:59:00.973204 systemd[1]: Finished sysroot-boot.service.
Dec 13 13:59:00.973000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 13:59:01.542325 systemd[1]: Mounting sysroot-usr-share-oem.mount...
Dec 13 13:59:01.548392 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (835)
Dec 13 13:59:01.550973 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Dec 13 13:59:01.551028 kernel: BTRFS info (device vda6): using free space tree
Dec 13 13:59:01.551038 kernel: BTRFS info (device vda6): has skinny extents
Dec 13 13:59:01.553818 systemd[1]: Mounted sysroot-usr-share-oem.mount.
Dec 13 13:59:01.555434 systemd[1]: Starting ignition-files.service...
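The ignition-files stage that starts here applies a declarative JSON config covering users, files, links, and systemd units, which is exactly the sequence of op(...) records that follows. Below is a minimal Ignition-style config of that shape, sketched in Python; the spec version, key contents, and file payload are illustrative assumptions, not the config this VM actually received over QEMU's fw_cfg.

    # Sketch: compose a minimal Ignition-style config resembling the ops logged
    # below (ssh key for "core", one written file, one enabled unit).
    import json

    config = {
        "ignition": {"version": "3.3.0"},  # assumed spec version
        "passwd": {"users": [{"name": "core",
                              "sshAuthorizedKeys": ["ssh-ed25519 AAAA... example"]}]},
        "storage": {"files": [{"path": "/home/core/install.sh",
                               "mode": 0o755,  # JSON renders this as decimal 493
                               "contents": {"source": "data:,echo%20hello"}}]},
        "systemd": {"units": [{"name": "prepare-helm.service", "enabled": True}]},
    }
    print(json.dumps(config, indent=2))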
Dec 13 13:59:01.569296 ignition[855]: INFO : Ignition 2.14.0
Dec 13 13:59:01.569296 ignition[855]: INFO : Stage: files
Dec 13 13:59:01.571113 ignition[855]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 13 13:59:01.571113 ignition[855]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Dec 13 13:59:01.571113 ignition[855]: DEBUG : files: compiled without relabeling support, skipping
Dec 13 13:59:01.574503 ignition[855]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Dec 13 13:59:01.574503 ignition[855]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Dec 13 13:59:01.577730 ignition[855]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Dec 13 13:59:01.579103 ignition[855]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Dec 13 13:59:01.579103 ignition[855]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Dec 13 13:59:01.578583 unknown[855]: wrote ssh authorized keys file for user: core
Dec 13 13:59:01.583001 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1"
Dec 13 13:59:01.583001 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
Dec 13 13:59:01.583001 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Dec 13 13:59:01.583001 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Dec 13 13:59:01.644129 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Dec 13 13:59:01.845411 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Dec 13 13:59:01.847468 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Dec 13 13:59:01.847468 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1
Dec 13 13:59:02.155903 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK
Dec 13 13:59:02.189632 systemd-networkd[734]: eth0: Gained IPv6LL
Dec 13 13:59:02.227647 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Dec 13 13:59:02.227647 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/install.sh"
Dec 13 13:59:02.231124 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/install.sh"
Dec 13 13:59:02.231124 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nginx.yaml"
Dec 13 13:59:02.231124 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nginx.yaml"
Dec 13 13:59:02.231124 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Dec 13 13:59:02.231124 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Dec 13 13:59:02.231124 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Dec 13 13:59:02.231124 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Dec 13 13:59:02.231124 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/flatcar/update.conf"
Dec 13 13:59:02.231124 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Dec 13 13:59:02.231124 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw"
Dec 13 13:59:02.231124 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw"
Dec 13 13:59:02.231124 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw"
Dec 13 13:59:02.231124 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-arm64.raw: attempt #1
Dec 13 13:59:02.446991 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET result: OK
Dec 13 13:59:02.842694 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw"
Dec 13 13:59:02.842694 ignition[855]: INFO : files: op(d): [started] processing unit "containerd.service"
Dec 13 13:59:02.846622 ignition[855]: INFO : files: op(d): op(e): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Dec 13 13:59:02.846622 ignition[855]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Dec 13 13:59:02.846622 ignition[855]: INFO : files: op(d): [finished] processing unit "containerd.service"
Dec 13 13:59:02.846622 ignition[855]: INFO : files: op(f): [started] processing unit "prepare-helm.service"
Dec 13 13:59:02.846622 ignition[855]: INFO : files: op(f): op(10): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Dec 13 13:59:02.846622 ignition[855]: INFO : files: op(f): op(10): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Dec 13 13:59:02.846622 ignition[855]: INFO : files: op(f): [finished] processing unit "prepare-helm.service"
Dec 13 13:59:02.846622 ignition[855]: INFO : files: op(11): [started] processing unit "coreos-metadata.service"
Dec 13 13:59:02.846622 ignition[855]: INFO : files: op(11): op(12): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Dec 13 13:59:02.846622 ignition[855]: INFO : files: op(11): op(12): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Dec 13 13:59:02.846622 ignition[855]: INFO : files: op(11): [finished] processing unit "coreos-metadata.service"
Dec 13 13:59:02.846622 ignition[855]: INFO : files: op(13): [started] setting preset to enabled for "prepare-helm.service"
Dec 13 13:59:02.846622 ignition[855]: INFO : files: op(13): [finished] setting preset to enabled for "prepare-helm.service"
Dec 13 13:59:02.846622 ignition[855]: INFO : files: op(14): [started] setting preset to disabled for "coreos-metadata.service"
Dec 13 13:59:02.846622 ignition[855]: INFO : files: op(14): op(15): [started] removing enablement symlink(s) for "coreos-metadata.service"
Dec 13 13:59:02.892664 ignition[855]: INFO : files: op(14): op(15): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Dec 13 13:59:02.895068 ignition[855]: INFO : files: op(14): [finished] setting preset to disabled for "coreos-metadata.service"
Dec 13 13:59:02.895068 ignition[855]: INFO : files: createResultFile: createFiles: op(16): [started] writing file "/sysroot/etc/.ignition-result.json"
Dec 13 13:59:02.895068 ignition[855]: INFO : files: createResultFile: createFiles: op(16): [finished] writing file "/sysroot/etc/.ignition-result.json"
Dec 13 13:59:02.895068 ignition[855]: INFO : files: files passed
Dec 13 13:59:02.895068 ignition[855]: INFO : Ignition finished successfully
Dec 13 13:59:02.897000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 13:59:02.896830 systemd[1]: Finished ignition-files.service.
Dec 13 13:59:02.898394 systemd[1]: Starting initrd-setup-root-after-ignition.service...
Dec 13 13:59:02.899879 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile).
Dec 13 13:59:02.905000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 13:59:02.905000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 13:59:02.908183 initrd-setup-root-after-ignition[880]: grep: /sysroot/usr/share/oem/oem-release: No such file or directory
Dec 13 13:59:02.900533 systemd[1]: Starting ignition-quench.service...
Dec 13 13:59:02.909000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 13:59:02.911069 initrd-setup-root-after-ignition[883]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Dec 13 13:59:02.905301 systemd[1]: ignition-quench.service: Deactivated successfully.
Dec 13 13:59:02.905553 systemd[1]: Finished ignition-quench.service.
Dec 13 13:59:02.908257 systemd[1]: Finished initrd-setup-root-after-ignition.service.
Dec 13 13:59:02.910480 systemd[1]: Reached target ignition-complete.target.
Dec 13 13:59:02.912423 systemd[1]: Starting initrd-parse-etc.service...
Dec 13 13:59:02.924561 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Dec 13 13:59:02.924652 systemd[1]: Finished initrd-parse-etc.service.
Dec 13 13:59:02.925000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 13:59:02.925000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 13:59:02.926354 systemd[1]: Reached target initrd-fs.target.
Dec 13 13:59:02.927551 systemd[1]: Reached target initrd.target.
Dec 13 13:59:02.928983 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met.
Dec 13 13:59:02.929690 systemd[1]: Starting dracut-pre-pivot.service...
Dec 13 13:59:02.940811 systemd[1]: Finished dracut-pre-pivot.service.
Dec 13 13:59:02.941000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 13:59:02.942350 systemd[1]: Starting initrd-cleanup.service...
Dec 13 13:59:02.950481 systemd[1]: Stopped target nss-lookup.target.
Dec 13 13:59:02.951364 systemd[1]: Stopped target remote-cryptsetup.target.
Dec 13 13:59:02.952877 systemd[1]: Stopped target timers.target.
Dec 13 13:59:02.954230 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Dec 13 13:59:02.955000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 13:59:02.954350 systemd[1]: Stopped dracut-pre-pivot.service.
Dec 13 13:59:02.955636 systemd[1]: Stopped target initrd.target.
Dec 13 13:59:02.957048 systemd[1]: Stopped target basic.target.
Dec 13 13:59:02.958349 systemd[1]: Stopped target ignition-complete.target.
Dec 13 13:59:02.959717 systemd[1]: Stopped target ignition-diskful.target.
Dec 13 13:59:02.961053 systemd[1]: Stopped target initrd-root-device.target.
Dec 13 13:59:02.962535 systemd[1]: Stopped target remote-fs.target.
Dec 13 13:59:02.963900 systemd[1]: Stopped target remote-fs-pre.target.
Dec 13 13:59:02.965355 systemd[1]: Stopped target sysinit.target.
Dec 13 13:59:02.966685 systemd[1]: Stopped target local-fs.target.
Dec 13 13:59:02.968022 systemd[1]: Stopped target local-fs-pre.target.
Dec 13 13:59:02.969326 systemd[1]: Stopped target swap.target.
Dec 13 13:59:02.971000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 13:59:02.970528 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Dec 13 13:59:02.970649 systemd[1]: Stopped dracut-pre-mount.service.
Dec 13 13:59:02.974000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 13:59:02.971947 systemd[1]: Stopped target cryptsetup.target.
Dec 13 13:59:02.975000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 13:59:02.973085 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Dec 13 13:59:02.973193 systemd[1]: Stopped dracut-initqueue.service.
Dec 13 13:59:02.974644 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Dec 13 13:59:02.974745 systemd[1]: Stopped ignition-fetch-offline.service.
Dec 13 13:59:02.976075 systemd[1]: Stopped target paths.target.
Dec 13 13:59:02.977234 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Dec 13 13:59:02.980429 systemd[1]: Stopped systemd-ask-password-console.path.
Dec 13 13:59:02.981913 systemd[1]: Stopped target slices.target.
Dec 13 13:59:02.983414 systemd[1]: Stopped target sockets.target.
Dec 13 13:59:02.986000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 13:59:02.984771 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Dec 13 13:59:02.987000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 13:59:02.984886 systemd[1]: Stopped initrd-setup-root-after-ignition.service.
Dec 13 13:59:02.990007 iscsid[744]: iscsid shutting down.
Dec 13 13:59:02.986319 systemd[1]: ignition-files.service: Deactivated successfully.
Dec 13 13:59:02.991000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 13:59:02.986430 systemd[1]: Stopped ignition-files.service.
Dec 13 13:59:02.988447 systemd[1]: Stopping ignition-mount.service...
Dec 13 13:59:02.995000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 13:59:02.997012 ignition[896]: INFO : Ignition 2.14.0
Dec 13 13:59:02.997012 ignition[896]: INFO : Stage: umount
Dec 13 13:59:02.997012 ignition[896]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 13 13:59:02.997012 ignition[896]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Dec 13 13:59:02.997012 ignition[896]: INFO : umount: umount passed
Dec 13 13:59:02.997012 ignition[896]: INFO : Ignition finished successfully
Dec 13 13:59:02.997000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 13:59:03.000000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 13:59:02.989292 systemd[1]: Stopping iscsid.service...
Dec 13 13:59:03.006000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 13:59:02.990529 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Dec 13 13:59:03.007000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 13:59:03.007000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 13:59:02.990651 systemd[1]: Stopped kmod-static-nodes.service.
Dec 13 13:59:03.009000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 13:59:02.992676 systemd[1]: Stopping sysroot-boot.service...
Dec 13 13:59:02.994646 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Dec 13 13:59:02.994796 systemd[1]: Stopped systemd-udev-trigger.service.
Dec 13 13:59:03.014000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 13:59:02.996607 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Dec 13 13:59:03.015000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 13:59:02.996730 systemd[1]: Stopped dracut-pre-trigger.service.
Dec 13 13:59:03.017000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 13:59:02.999228 systemd[1]: iscsid.service: Deactivated successfully.
Dec 13 13:59:02.999322 systemd[1]: Stopped iscsid.service.
Dec 13 13:59:03.000910 systemd[1]: iscsid.socket: Deactivated successfully.
Dec 13 13:59:03.000996 systemd[1]: Closed iscsid.socket.
Dec 13 13:59:03.002123 systemd[1]: Stopping iscsiuio.service...
Dec 13 13:59:03.005106 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Dec 13 13:59:03.005566 systemd[1]: iscsiuio.service: Deactivated successfully.
Dec 13 13:59:03.005653 systemd[1]: Stopped iscsiuio.service.
Dec 13 13:59:03.006929 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Dec 13 13:59:03.007021 systemd[1]: Finished initrd-cleanup.service.
Dec 13 13:59:03.008218 systemd[1]: ignition-mount.service: Deactivated successfully.
Dec 13 13:59:03.008296 systemd[1]: Stopped ignition-mount.service.
Dec 13 13:59:03.010463 systemd[1]: Stopped target network.target.
Dec 13 13:59:03.011923 systemd[1]: iscsiuio.socket: Deactivated successfully.
Dec 13 13:59:03.011972 systemd[1]: Closed iscsiuio.socket.
Dec 13 13:59:03.013241 systemd[1]: ignition-disks.service: Deactivated successfully.
Dec 13 13:59:03.013285 systemd[1]: Stopped ignition-disks.service.
Dec 13 13:59:03.014887 systemd[1]: ignition-kargs.service: Deactivated successfully.
Dec 13 13:59:03.014929 systemd[1]: Stopped ignition-kargs.service.
Dec 13 13:59:03.040139 kernel: kauditd_printk_skb: 47 callbacks suppressed
Dec 13 13:59:03.040162 kernel: audit: type=1131 audit(1734098343.036:58): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 13:59:03.036000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 13:59:03.016342 systemd[1]: ignition-setup.service: Deactivated successfully.
Dec 13 13:59:03.016505 systemd[1]: Stopped ignition-setup.service.
Dec 13 13:59:03.018063 systemd[1]: Stopping systemd-networkd.service...
Dec 13 13:59:03.043000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 13:59:03.047428 kernel: audit: type=1131 audit(1734098343.043:59): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 13:59:03.019357 systemd[1]: Stopping systemd-resolved.service...
Dec 13 13:59:03.047000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 13:59:03.033779 systemd-networkd[734]: eth0: DHCPv6 lease lost
Dec 13 13:59:03.056390 kernel: audit: type=1131 audit(1734098343.047:60): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 13:59:03.056412 kernel: audit: type=1334 audit(1734098343.050:61): prog-id=9 op=UNLOAD
Dec 13 13:59:03.056422 kernel: audit: type=1131 audit(1734098343.052:62): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 13:59:03.050000 audit: BPF prog-id=9 op=UNLOAD
Dec 13 13:59:03.052000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 13:59:03.034906 systemd[1]: systemd-networkd.service: Deactivated successfully.
Dec 13 13:59:03.035024 systemd[1]: Stopped systemd-networkd.service.
Dec 13 13:59:03.036829 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Dec 13 13:59:03.059000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 13:59:03.036858 systemd[1]: Closed systemd-networkd.socket.
Dec 13 13:59:03.041744 systemd[1]: Stopping network-cleanup.service...
Dec 13 13:59:03.067227 kernel: audit: type=1131 audit(1734098343.059:63): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 13:59:03.067249 kernel: audit: type=1131 audit(1734098343.063:64): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 13:59:03.063000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 13:59:03.042512 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Dec 13 13:59:03.069136 kernel: audit: type=1334 audit(1734098343.067:65): prog-id=6 op=UNLOAD
Dec 13 13:59:03.067000 audit: BPF prog-id=6 op=UNLOAD
Dec 13 13:59:03.042581 systemd[1]: Stopped parse-ip-for-networkd.service.
Dec 13 13:59:03.072883 kernel: audit: type=1131 audit(1734098343.069:66): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 13:59:03.069000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 13:59:03.044193 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Dec 13 13:59:03.044235 systemd[1]: Stopped systemd-sysctl.service.
Dec 13 13:59:03.051223 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Dec 13 13:59:03.076000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 13:59:03.051269 systemd[1]: Stopped systemd-modules-load.service.
Dec 13 13:59:03.081257 kernel: audit: type=1131 audit(1734098343.076:67): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 13:59:03.080000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 13:59:03.053111 systemd[1]: Stopping systemd-udevd.service...
Dec 13 13:59:03.081000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 13:59:03.057942 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Dec 13 13:59:03.058503 systemd[1]: systemd-resolved.service: Deactivated successfully.
Dec 13 13:59:03.084000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 13:59:03.058603 systemd[1]: Stopped systemd-resolved.service.
Dec 13 13:59:03.086000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 13:59:03.063011 systemd[1]: network-cleanup.service: Deactivated successfully.
Dec 13 13:59:03.088000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 13:59:03.063111 systemd[1]: Stopped network-cleanup.service.
Dec 13 13:59:03.089000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 13:59:03.089000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 13:59:03.067889 systemd[1]: systemd-udevd.service: Deactivated successfully.
Dec 13 13:59:03.068015 systemd[1]: Stopped systemd-udevd.service.
Dec 13 13:59:03.070410 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Dec 13 13:59:03.070454 systemd[1]: Closed systemd-udevd-control.socket.
Dec 13 13:59:03.073683 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Dec 13 13:59:03.073717 systemd[1]: Closed systemd-udevd-kernel.socket.
Dec 13 13:59:03.075169 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Dec 13 13:59:03.075217 systemd[1]: Stopped dracut-pre-udev.service.
Dec 13 13:59:03.076528 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Dec 13 13:59:03.076570 systemd[1]: Stopped dracut-cmdline.service.
Dec 13 13:59:03.100000 audit: BPF prog-id=5 op=UNLOAD
Dec 13 13:59:03.100000 audit: BPF prog-id=4 op=UNLOAD
Dec 13 13:59:03.100000 audit: BPF prog-id=3 op=UNLOAD
Dec 13 13:59:03.080598 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Dec 13 13:59:03.100000 audit: BPF prog-id=8 op=UNLOAD
Dec 13 13:59:03.100000 audit: BPF prog-id=7 op=UNLOAD
Dec 13 13:59:03.080639 systemd[1]: Stopped dracut-cmdline-ask.service.
Dec 13 13:59:03.082744 systemd[1]: Starting initrd-udevadm-cleanup-db.service...
Dec 13 13:59:03.083582 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 13 13:59:03.083643 systemd[1]: Stopped systemd-vconsole-setup.service.
Dec 13 13:59:03.085670 systemd[1]: sysroot-boot.service: Deactivated successfully.
Dec 13 13:59:03.085759 systemd[1]: Stopped sysroot-boot.service.
Dec 13 13:59:03.087244 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Dec 13 13:59:03.087286 systemd[1]: Stopped initrd-setup-root.service.
Dec 13 13:59:03.088667 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Dec 13 13:59:03.088746 systemd[1]: Finished initrd-udevadm-cleanup-db.service.
Dec 13 13:59:03.090234 systemd[1]: Reached target initrd-switch-root.target.
Dec 13 13:59:03.092539 systemd[1]: Starting initrd-switch-root.service...
Dec 13 13:59:03.098393 systemd[1]: Switching root.
Dec 13 13:59:03.119917 systemd-journald[289]: Journal stopped
Dec 13 13:59:05.143807 systemd-journald[289]: Received SIGTERM from PID 1 (systemd).
Dec 13 13:59:05.143859 kernel: SELinux: Class mctp_socket not defined in policy.
Dec 13 13:59:05.143877 kernel: SELinux: Class anon_inode not defined in policy.
Dec 13 13:59:05.143887 kernel: SELinux: the above unknown classes and permissions will be allowed
Dec 13 13:59:05.143897 kernel: SELinux: policy capability network_peer_controls=1
Dec 13 13:59:05.143909 kernel: SELinux: policy capability open_perms=1
Dec 13 13:59:05.143919 kernel: SELinux: policy capability extended_socket_class=1
Dec 13 13:59:05.143929 kernel: SELinux: policy capability always_check_network=0
Dec 13 13:59:05.143939 kernel: SELinux: policy capability cgroup_seclabel=1
Dec 13 13:59:05.143959 kernel: SELinux: policy capability nnp_nosuid_transition=1
Dec 13 13:59:05.143970 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Dec 13 13:59:05.143982 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Dec 13 13:59:05.143993 systemd[1]: Successfully loaded SELinux policy in 34.649ms.
Dec 13 13:59:05.144012 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 6.745ms.
Dec 13 13:59:05.144026 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Dec 13 13:59:05.144037 systemd[1]: Detected virtualization kvm.
Dec 13 13:59:05.144048 systemd[1]: Detected architecture arm64.
Dec 13 13:59:05.144061 systemd[1]: Detected first boot.
Dec 13 13:59:05.144073 systemd[1]: Initializing machine ID from VM UUID.
Dec 13 13:59:05.144084 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped).
Dec 13 13:59:05.144095 systemd[1]: Populated /etc with preset unit settings.
Dec 13 13:59:05.144107 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Dec 13 13:59:05.144119 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 13 13:59:05.144131 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Dec 13 13:59:05.144143 systemd[1]: Queued start job for default target multi-user.target.
Dec 13 13:59:05.144159 systemd[1]: Unnecessary job was removed for dev-vda6.device.
Dec 13 13:59:05.144170 systemd[1]: Created slice system-addon\x2dconfig.slice.
Dec 13 13:59:05.144181 systemd[1]: Created slice system-addon\x2drun.slice.
Dec 13 13:59:05.144192 systemd[1]: Created slice system-getty.slice.
Dec 13 13:59:05.144203 systemd[1]: Created slice system-modprobe.slice.
Dec 13 13:59:05.144213 systemd[1]: Created slice system-serial\x2dgetty.slice.
Dec 13 13:59:05.144225 systemd[1]: Created slice system-system\x2dcloudinit.slice.
Dec 13 13:59:05.144236 systemd[1]: Created slice system-systemd\x2dfsck.slice.
Dec 13 13:59:05.144248 systemd[1]: Created slice user.slice.
Dec 13 13:59:05.144277 systemd[1]: Started systemd-ask-password-console.path.
Dec 13 13:59:05.144289 systemd[1]: Started systemd-ask-password-wall.path.
Dec 13 13:59:05.144300 systemd[1]: Set up automount boot.automount.
Dec 13 13:59:05.144310 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount.
Dec 13 13:59:05.144322 systemd[1]: Reached target integritysetup.target.
Dec 13 13:59:05.144337 systemd[1]: Reached target remote-cryptsetup.target.
Dec 13 13:59:05.144348 systemd[1]: Reached target remote-fs.target.
Dec 13 13:59:05.144360 systemd[1]: Reached target slices.target.
Dec 13 13:59:05.144371 systemd[1]: Reached target swap.target.
Dec 13 13:59:05.144396 systemd[1]: Reached target torcx.target.
Dec 13 13:59:05.144407 systemd[1]: Reached target veritysetup.target.
Dec 13 13:59:05.144419 systemd[1]: Listening on systemd-coredump.socket.
Dec 13 13:59:05.144430 systemd[1]: Listening on systemd-initctl.socket.
Dec 13 13:59:05.144442 systemd[1]: Listening on systemd-journald-audit.socket.
Dec 13 13:59:05.144453 systemd[1]: Listening on systemd-journald-dev-log.socket.
Dec 13 13:59:05.144464 systemd[1]: Listening on systemd-journald.socket.
Dec 13 13:59:05.144474 systemd[1]: Listening on systemd-networkd.socket.
Dec 13 13:59:05.144487 systemd[1]: Listening on systemd-udevd-control.socket.
Dec 13 13:59:05.144498 systemd[1]: Listening on systemd-udevd-kernel.socket.
Dec 13 13:59:05.144509 systemd[1]: Listening on systemd-userdbd.socket.
Dec 13 13:59:05.144520 systemd[1]: Mounting dev-hugepages.mount...
Dec 13 13:59:05.144531 systemd[1]: Mounting dev-mqueue.mount...
Dec 13 13:59:05.144541 systemd[1]: Mounting media.mount...
Dec 13 13:59:05.144552 systemd[1]: Mounting sys-kernel-debug.mount...
Dec 13 13:59:05.144563 systemd[1]: Mounting sys-kernel-tracing.mount...
Dec 13 13:59:05.144573 systemd[1]: Mounting tmp.mount...
Dec 13 13:59:05.144585 systemd[1]: Starting flatcar-tmpfiles.service...
Dec 13 13:59:05.144597 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Dec 13 13:59:05.144607 systemd[1]: Starting kmod-static-nodes.service...
Dec 13 13:59:05.144619 systemd[1]: Starting modprobe@configfs.service...
Dec 13 13:59:05.144630 systemd[1]: Starting modprobe@dm_mod.service...
Dec 13 13:59:05.144640 systemd[1]: Starting modprobe@drm.service...
Dec 13 13:59:05.144651 systemd[1]: Starting modprobe@efi_pstore.service...
Dec 13 13:59:05.144662 systemd[1]: Starting modprobe@fuse.service...
Dec 13 13:59:05.144673 systemd[1]: Starting modprobe@loop.service...
Dec 13 13:59:05.144685 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Dec 13 13:59:05.144696 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
Dec 13 13:59:05.144707 systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
Dec 13 13:59:05.144717 systemd[1]: Starting systemd-journald.service...
Dec 13 13:59:05.144728 systemd[1]: Starting systemd-modules-load.service...
Dec 13 13:59:05.144738 kernel: fuse: init (API version 7.34)
Dec 13 13:59:05.144748 systemd[1]: Starting systemd-network-generator.service...
Dec 13 13:59:05.144758 systemd[1]: Starting systemd-remount-fs.service...
Dec 13 13:59:05.144769 systemd[1]: Starting systemd-udev-trigger.service...
Dec 13 13:59:05.144781 systemd[1]: Mounted dev-hugepages.mount.
Dec 13 13:59:05.144792 systemd[1]: Mounted dev-mqueue.mount.
Dec 13 13:59:05.144802 systemd[1]: Mounted media.mount.
Dec 13 13:59:05.144813 systemd[1]: Mounted sys-kernel-debug.mount.
Dec 13 13:59:05.144824 systemd[1]: Mounted sys-kernel-tracing.mount.
Dec 13 13:59:05.144834 systemd[1]: Mounted tmp.mount.
Dec 13 13:59:05.144845 systemd[1]: Finished kmod-static-nodes.service.
Dec 13 13:59:05.144856 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Dec 13 13:59:05.144868 systemd-journald[1026]: Journal started
Dec 13 13:59:05.144914 systemd-journald[1026]: Runtime Journal (/run/log/journal/3b41f92285ae4f1ba91925fb30cb3694) is 6.0M, max 48.7M, 42.6M free.
Dec 13 13:59:05.058000 audit[1]: AVC avc: denied { audit_read } for pid=1 comm="systemd" capability=37 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Dec 13 13:59:05.058000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1
Dec 13 13:59:05.141000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1
Dec 13 13:59:05.141000 audit[1026]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=60 a0=3 a1=ffffd4a53540 a2=4000 a3=1 items=0 ppid=1 pid=1026 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 13:59:05.141000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald"
Dec 13 13:59:05.143000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 13:59:05.148482 systemd[1]: Finished modprobe@configfs.service.
Dec 13 13:59:05.148528 systemd[1]: Started systemd-journald.service.
Dec 13 13:59:05.147000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 13:59:05.147000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 13:59:05.148000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 13:59:05.149824 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 13 13:59:05.150341 systemd[1]: Finished modprobe@dm_mod.service.
Dec 13 13:59:05.151000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 13:59:05.151000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 13:59:05.151540 systemd[1]: modprobe@drm.service: Deactivated successfully.
Dec 13 13:59:05.154705 systemd[1]: Finished modprobe@drm.service.
Dec 13 13:59:05.155000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 13:59:05.155000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 13:59:05.155984 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 13 13:59:05.156163 systemd[1]: Finished modprobe@efi_pstore.service.
Dec 13 13:59:05.157397 kernel: loop: module loaded
Dec 13 13:59:05.156000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 13:59:05.156000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 13:59:05.158000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 13:59:05.158000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 13:59:05.157638 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Dec 13 13:59:05.157848 systemd[1]: Finished modprobe@fuse.service.
Dec 13 13:59:05.160000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 13:59:05.159017 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 13 13:59:05.159291 systemd[1]: Finished modprobe@loop.service.
Dec 13 13:59:05.160000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 13:59:05.160989 systemd[1]: Finished systemd-modules-load.service.
Dec 13 13:59:05.161000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 13:59:05.165006 systemd[1]: Finished systemd-network-generator.service.
Dec 13 13:59:05.165000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 13:59:05.166488 systemd[1]: Finished systemd-remount-fs.service.
Dec 13 13:59:05.166000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 13:59:05.167695 systemd[1]: Reached target network-pre.target.
Dec 13 13:59:05.169724 systemd[1]: Mounting sys-fs-fuse-connections.mount...
Dec 13 13:59:05.171578 systemd[1]: Mounting sys-kernel-config.mount...
Dec 13 13:59:05.172293 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Dec 13 13:59:05.174345 systemd[1]: Starting systemd-hwdb-update.service...
Dec 13 13:59:05.176717 systemd[1]: Starting systemd-journal-flush.service...
Dec 13 13:59:05.177716 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 13 13:59:05.179058 systemd[1]: Starting systemd-random-seed.service...
Dec 13 13:59:05.180081 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Dec 13 13:59:05.181274 systemd[1]: Starting systemd-sysctl.service...
Dec 13 13:59:05.184040 systemd[1]: Finished flatcar-tmpfiles.service.
Dec 13 13:59:05.189000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 13:59:05.191254 systemd[1]: Mounted sys-fs-fuse-connections.mount.
Dec 13 13:59:05.192459 systemd[1]: Mounted sys-kernel-config.mount.
Dec 13 13:59:05.193620 systemd[1]: Finished systemd-random-seed.service.
Dec 13 13:59:05.194000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 13:59:05.194769 systemd[1]: Reached target first-boot-complete.target.
Dec 13 13:59:05.196962 systemd[1]: Starting systemd-sysusers.service...
Dec 13 13:59:05.207593 systemd-journald[1026]: Time spent on flushing to /var/log/journal/3b41f92285ae4f1ba91925fb30cb3694 is 16.680ms for 942 entries.
Dec 13 13:59:05.207593 systemd-journald[1026]: System Journal (/var/log/journal/3b41f92285ae4f1ba91925fb30cb3694) is 8.0M, max 195.6M, 187.6M free.
Dec 13 13:59:05.234324 systemd-journald[1026]: Received client request to flush runtime journal.
Dec 13 13:59:05.210000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 13:59:05.216000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 13:59:05.210136 systemd[1]: Finished systemd-sysctl.service.
Dec 13 13:59:05.215578 systemd[1]: Finished systemd-udev-trigger.service.
Dec 13 13:59:05.234717 udevadm[1084]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Dec 13 13:59:05.217811 systemd[1]: Starting systemd-udev-settle.service...
Dec 13 13:59:05.234444 systemd[1]: Finished systemd-sysusers.service.
Dec 13 13:59:05.234000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 13:59:05.236700 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Dec 13 13:59:05.238502 systemd[1]: Finished systemd-journal-flush.service.
Dec 13 13:59:05.239000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 13:59:05.253455 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Dec 13 13:59:05.254000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 13:59:05.539148 systemd[1]: Finished systemd-hwdb-update.service.
Dec 13 13:59:05.539000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 13:59:05.541393 systemd[1]: Starting systemd-udevd.service...
Dec 13 13:59:05.560206 systemd-udevd[1092]: Using default interface naming scheme 'v252'.
Dec 13 13:59:05.571624 systemd[1]: Started systemd-udevd.service.
Dec 13 13:59:05.572000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 13:59:05.574321 systemd[1]: Starting systemd-networkd.service...
Dec 13 13:59:05.584296 systemd[1]: Starting systemd-userdbd.service...
Dec 13 13:59:05.593656 systemd[1]: Found device dev-ttyAMA0.device.
Dec 13 13:59:05.624000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 13:59:05.623605 systemd[1]: Started systemd-userdbd.service.
Dec 13 13:59:05.636743 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Dec 13 13:59:05.679000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 13:59:05.678788 systemd[1]: Finished systemd-udev-settle.service.
Dec 13 13:59:05.680977 systemd[1]: Starting lvm2-activation-early.service...
Dec 13 13:59:05.693157 lvm[1126]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Dec 13 13:59:05.696769 systemd-networkd[1101]: lo: Link UP
Dec 13 13:59:05.697037 systemd-networkd[1101]: lo: Gained carrier
Dec 13 13:59:05.697554 systemd-networkd[1101]: Enumeration completed
Dec 13 13:59:05.697725 systemd[1]: Started systemd-networkd.service.
Dec 13 13:59:05.698000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 13:59:05.698734 systemd-networkd[1101]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Dec 13 13:59:05.703495 systemd-networkd[1101]: eth0: Link UP
Dec 13 13:59:05.703507 systemd-networkd[1101]: eth0: Gained carrier
Dec 13 13:59:05.723487 systemd-networkd[1101]: eth0: DHCPv4 address 10.0.0.38/16, gateway 10.0.0.1 acquired from 10.0.0.1
Dec 13 13:59:05.732182 systemd[1]: Finished lvm2-activation-early.service.
Dec 13 13:59:05.732000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 13:59:05.733172 systemd[1]: Reached target cryptsetup.target.
Dec 13 13:59:05.735163 systemd[1]: Starting lvm2-activation.service...
Dec 13 13:59:05.738708 lvm[1128]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Dec 13 13:59:05.763232 systemd[1]: Finished lvm2-activation.service.
Dec 13 13:59:05.763000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 13:59:05.764211 systemd[1]: Reached target local-fs-pre.target.
Dec 13 13:59:05.765091 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Dec 13 13:59:05.765123 systemd[1]: Reached target local-fs.target.
Dec 13 13:59:05.765925 systemd[1]: Reached target machines.target.
Dec 13 13:59:05.767956 systemd[1]: Starting ldconfig.service...
Dec 13 13:59:05.769110 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Dec 13 13:59:05.769165 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec 13 13:59:05.770232 systemd[1]: Starting systemd-boot-update.service...
Dec 13 13:59:05.772203 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service...
Dec 13 13:59:05.774437 systemd[1]: Starting systemd-machine-id-commit.service...
Dec 13 13:59:05.776572 systemd[1]: Starting systemd-sysext.service...
Dec 13 13:59:05.777728 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1131 (bootctl)
Dec 13 13:59:05.778811 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service...
Dec 13 13:59:05.786953 systemd[1]: Unmounting usr-share-oem.mount...
Dec 13 13:59:05.788449 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service.
Dec 13 13:59:05.789000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 13:59:05.791430 systemd[1]: usr-share-oem.mount: Deactivated successfully.
Dec 13 13:59:05.791670 systemd[1]: Unmounted usr-share-oem.mount.
Dec 13 13:59:05.804404 kernel: loop0: detected capacity change from 0 to 194512
Dec 13 13:59:05.843807 systemd[1]: Finished systemd-machine-id-commit.service.
Dec 13 13:59:05.844000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 13:59:05.853850 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Dec 13 13:59:05.868145 systemd-fsck[1144]: fsck.fat 4.2 (2021-01-31)
Dec 13 13:59:05.868145 systemd-fsck[1144]: /dev/vda1: 236 files, 117175/258078 clusters
Dec 13 13:59:05.872394 kernel: loop1: detected capacity change from 0 to 194512
Dec 13 13:59:05.873636 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service.
Dec 13 13:59:05.874000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 13:59:05.879216 (sd-sysext)[1148]: Using extensions 'kubernetes'.
Dec 13 13:59:05.879556 (sd-sysext)[1148]: Merged extensions into '/usr'.
Dec 13 13:59:05.898367 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Dec 13 13:59:05.899618 systemd[1]: Starting modprobe@dm_mod.service...
Dec 13 13:59:05.901557 systemd[1]: Starting modprobe@efi_pstore.service...
Dec 13 13:59:05.903439 systemd[1]: Starting modprobe@loop.service...
Dec 13 13:59:05.904305 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Dec 13 13:59:05.904444 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec 13 13:59:05.905184 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 13 13:59:05.905325 systemd[1]: Finished modprobe@dm_mod.service.
Dec 13 13:59:05.906000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 13:59:05.906000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 13:59:05.906659 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 13 13:59:05.906798 systemd[1]: Finished modprobe@efi_pstore.service.
Dec 13 13:59:05.907000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 13:59:05.907000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 13:59:05.908178 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 13 13:59:05.908333 systemd[1]: Finished modprobe@loop.service.
Dec 13 13:59:05.909000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 13:59:05.909000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 13:59:05.909698 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 13 13:59:05.909797 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Dec 13 13:59:05.948598 ldconfig[1130]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Dec 13 13:59:05.951933 systemd[1]: Finished ldconfig.service.
Dec 13 13:59:05.952000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 13:59:06.131474 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Dec 13 13:59:06.133273 systemd[1]: Mounting boot.mount...
Dec 13 13:59:06.135176 systemd[1]: Mounting usr-share-oem.mount...
Dec 13 13:59:06.142066 systemd[1]: Mounted boot.mount.
Dec 13 13:59:06.143028 systemd[1]: Mounted usr-share-oem.mount.
Dec 13 13:59:06.145009 systemd[1]: Finished systemd-sysext.service.
Dec 13 13:59:06.145000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 13:59:06.147192 systemd[1]: Starting ensure-sysext.service...
Dec 13 13:59:06.149336 systemd[1]: Starting systemd-tmpfiles-setup.service...
Dec 13 13:59:06.152629 systemd[1]: Finished systemd-boot-update.service.
Dec 13 13:59:06.153000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 13:59:06.155127 systemd[1]: Reloading.
Dec 13 13:59:06.158307 systemd-tmpfiles[1167]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring.
Dec 13 13:59:06.159119 systemd-tmpfiles[1167]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Dec 13 13:59:06.160454 systemd-tmpfiles[1167]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Dec 13 13:59:06.193152 /usr/lib/systemd/system-generators/torcx-generator[1188]: time="2024-12-13T13:59:06Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]"
Dec 13 13:59:06.193613 /usr/lib/systemd/system-generators/torcx-generator[1188]: time="2024-12-13T13:59:06Z" level=info msg="torcx already run"
Dec 13 13:59:06.253513 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Dec 13 13:59:06.253532 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 13 13:59:06.268643 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Dec 13 13:59:06.313957 systemd[1]: Finished systemd-tmpfiles-setup.service.
Dec 13 13:59:06.314000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 13:59:06.318087 systemd[1]: Starting audit-rules.service...
Dec 13 13:59:06.319979 systemd[1]: Starting clean-ca-certificates.service...
Dec 13 13:59:06.322009 systemd[1]: Starting systemd-journal-catalog-update.service...
Dec 13 13:59:06.324649 systemd[1]: Starting systemd-resolved.service...
Dec 13 13:59:06.327010 systemd[1]: Starting systemd-timesyncd.service...
Dec 13 13:59:06.329061 systemd[1]: Starting systemd-update-utmp.service...
Dec 13 13:59:06.331000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 13:59:06.330950 systemd[1]: Finished clean-ca-certificates.service.
Dec 13 13:59:06.335752 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Dec 13 13:59:06.337126 systemd[1]: Starting modprobe@dm_mod.service...
Dec 13 13:59:06.339310 systemd[1]: Starting modprobe@efi_pstore.service...
Dec 13 13:59:06.344000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 13:59:06.344000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 13:59:06.341475 systemd[1]: Starting modprobe@loop.service...
Dec 13 13:59:06.342275 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Dec 13 13:59:06.342457 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec 13 13:59:06.342593 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Dec 13 13:59:06.343445 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 13 13:59:06.343614 systemd[1]: Finished modprobe@dm_mod.service.
Dec 13 13:59:06.344838 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 13 13:59:06.344988 systemd[1]: Finished modprobe@efi_pstore.service.
Dec 13 13:59:06.344000 audit[1242]: SYSTEM_BOOT pid=1242 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success'
Dec 13 13:59:06.345000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 13:59:06.345000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 13:59:06.348121 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 13 13:59:06.348277 systemd[1]: Finished modprobe@loop.service.
Dec 13 13:59:06.349000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 13:59:06.349000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 13:59:06.350682 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Dec 13 13:59:06.351879 systemd[1]: Starting modprobe@dm_mod.service...
Dec 13 13:59:06.354034 systemd[1]: Starting modprobe@efi_pstore.service...
Dec 13 13:59:06.356187 systemd[1]: Starting modprobe@loop.service...
Dec 13 13:59:06.357169 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Dec 13 13:59:06.357365 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec 13 13:59:06.357526 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Dec 13 13:59:06.358719 systemd[1]: Finished systemd-journal-catalog-update.service.
Dec 13 13:59:06.359000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 13:59:06.360436 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 13 13:59:06.360586 systemd[1]: Finished modprobe@efi_pstore.service.
Dec 13 13:59:06.362000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 13:59:06.362000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 13:59:06.363727 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 13 13:59:06.363868 systemd[1]: Finished modprobe@loop.service.
Dec 13 13:59:06.364000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 13:59:06.364000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 13:59:06.365291 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 13 13:59:06.366862 systemd[1]: Starting systemd-update-done.service...
Dec 13 13:59:06.368721 systemd[1]: Finished systemd-update-utmp.service.
Dec 13 13:59:06.369000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 13:59:06.371000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 13:59:06.371000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 13:59:06.370167 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 13 13:59:06.370762 systemd[1]: Finished modprobe@dm_mod.service.
Dec 13 13:59:06.375737 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Dec 13 13:59:06.377174 systemd[1]: Starting modprobe@dm_mod.service...
Dec 13 13:59:06.380321 systemd[1]: Starting modprobe@drm.service...
Dec 13 13:59:06.382983 systemd[1]: Starting modprobe@efi_pstore.service...
Dec 13 13:59:06.385163 systemd[1]: Starting modprobe@loop.service...
Dec 13 13:59:06.387185 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Dec 13 13:59:06.387356 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec 13 13:59:06.388790 systemd[1]: Starting systemd-networkd-wait-online.service...
Dec 13 13:59:06.389816 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Dec 13 13:59:06.391000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-done comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 13:59:06.393000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 13:59:06.393000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 13:59:06.396000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 13:59:06.396000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 13:59:06.398000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 13:59:06.398000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 13:59:06.391085 systemd[1]: Finished systemd-update-done.service.
Dec 13 13:59:06.392441 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 13 13:59:06.392589 systemd[1]: Finished modprobe@dm_mod.service.
Dec 13 13:59:06.394081 systemd[1]: modprobe@drm.service: Deactivated successfully.
Dec 13 13:59:06.394230 systemd[1]: Finished modprobe@drm.service.
Dec 13 13:59:06.397410 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 13 13:59:06.397556 systemd[1]: Finished modprobe@loop.service.
Dec 13 13:59:06.398700 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Dec 13 13:59:06.400545 systemd[1]: Finished ensure-sysext.service.
Dec 13 13:59:06.400000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 13:59:06.401763 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 13 13:59:06.402000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 13:59:06.402000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 13:59:06.401929 systemd[1]: Finished modprobe@efi_pstore.service.
Dec 13 13:59:06.402977 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 13 13:59:06.422997 systemd-timesyncd[1240]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Dec 13 13:59:06.423325 systemd-timesyncd[1240]: Initial clock synchronization to Fri 2024-12-13 13:59:06.817486 UTC.
Dec 13 13:59:06.425348 systemd[1]: Started systemd-timesyncd.service.
Dec 13 13:59:06.425000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-timesyncd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 13:59:06.426482 systemd[1]: Reached target time-set.target.
Dec 13 13:59:06.433000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Dec 13 13:59:06.433000 audit[1285]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffd84cd0e0 a2=420 a3=0 items=0 ppid=1234 pid=1285 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 13:59:06.433000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Dec 13 13:59:06.434502 augenrules[1285]: No rules Dec 13 13:59:06.434458 systemd-resolved[1239]: Positive Trust Anchors: Dec 13 13:59:06.434465 systemd-resolved[1239]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 13 13:59:06.434494 systemd-resolved[1239]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Dec 13 13:59:06.434843 systemd[1]: Finished audit-rules.service. Dec 13 13:59:06.455261 systemd-resolved[1239]: Defaulting to hostname 'linux'. Dec 13 13:59:06.456620 systemd[1]: Started systemd-resolved.service. Dec 13 13:59:06.457479 systemd[1]: Reached target network.target. Dec 13 13:59:06.458221 systemd[1]: Reached target nss-lookup.target. Dec 13 13:59:06.459021 systemd[1]: Reached target sysinit.target. Dec 13 13:59:06.459879 systemd[1]: Started motdgen.path. Dec 13 13:59:06.460591 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Dec 13 13:59:06.461784 systemd[1]: Started logrotate.timer. Dec 13 13:59:06.462582 systemd[1]: Started mdadm.timer. Dec 13 13:59:06.463247 systemd[1]: Started systemd-tmpfiles-clean.timer. Dec 13 13:59:06.464141 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Dec 13 13:59:06.464177 systemd[1]: Reached target paths.target. Dec 13 13:59:06.464927 systemd[1]: Reached target timers.target. Dec 13 13:59:06.466161 systemd[1]: Listening on dbus.socket. Dec 13 13:59:06.468102 systemd[1]: Starting docker.socket... Dec 13 13:59:06.469910 systemd[1]: Listening on sshd.socket. Dec 13 13:59:06.470789 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 13:59:06.471126 systemd[1]: Listening on docker.socket. Dec 13 13:59:06.471921 systemd[1]: Reached target sockets.target. Dec 13 13:59:06.472706 systemd[1]: Reached target basic.target. Dec 13 13:59:06.473574 systemd[1]: System is tainted: cgroupsv1 Dec 13 13:59:06.473625 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Dec 13 13:59:06.473644 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Dec 13 13:59:06.474710 systemd[1]: Starting containerd.service... Dec 13 13:59:06.476571 systemd[1]: Starting dbus.service... Dec 13 13:59:06.478317 systemd[1]: Starting enable-oem-cloudinit.service... 
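The audit PROCTITLE record above hex-encodes the command line of the process that loaded the rules, with NUL bytes separating the argv entries. A minimal Python sketch decoding the value from this log recovers the auditctl invocation that augenrules ran:

    proctitle = (
        "2F7362696E2F617564697463746C002D52"
        "002F6574632F61756469742F61756469742E72756C6573"
    )
    # Split on NUL to recover the argv vector.
    argv = [part.decode() for part in bytes.fromhex(proctitle).split(b"\x00")]
    print(argv)  # ['/sbin/auditctl', '-R', '/etc/audit/audit.rules']

That is consistent with the "No rules" message: /etc/audit/audit.rules was loaded but defined no rules.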
Dec 13 13:59:06.480478 systemd[1]: Starting extend-filesystems.service... Dec 13 13:59:06.481413 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Dec 13 13:59:06.482862 systemd[1]: Starting motdgen.service... Dec 13 13:59:06.484877 systemd[1]: Starting prepare-helm.service... Dec 13 13:59:06.487267 systemd[1]: Starting ssh-key-proc-cmdline.service... Dec 13 13:59:06.487527 jq[1296]: false Dec 13 13:59:06.491701 systemd[1]: Starting sshd-keygen.service... Dec 13 13:59:06.494271 systemd[1]: Starting systemd-logind.service... Dec 13 13:59:06.496042 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 13:59:06.496121 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Dec 13 13:59:06.496589 extend-filesystems[1297]: Found loop1 Dec 13 13:59:06.497319 systemd[1]: Starting update-engine.service... Dec 13 13:59:06.497787 extend-filesystems[1297]: Found vda Dec 13 13:59:06.499075 extend-filesystems[1297]: Found vda1 Dec 13 13:59:06.499424 systemd[1]: Starting update-ssh-keys-after-ignition.service... Dec 13 13:59:06.500335 extend-filesystems[1297]: Found vda2 Dec 13 13:59:06.502491 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Dec 13 13:59:06.502766 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Dec 13 13:59:06.503050 jq[1313]: true Dec 13 13:59:06.503901 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Dec 13 13:59:06.504158 systemd[1]: Finished ssh-key-proc-cmdline.service. Dec 13 13:59:06.509635 extend-filesystems[1297]: Found vda3 Dec 13 13:59:06.509635 extend-filesystems[1297]: Found usr Dec 13 13:59:06.513251 extend-filesystems[1297]: Found vda4 Dec 13 13:59:06.513251 extend-filesystems[1297]: Found vda6 Dec 13 13:59:06.513251 extend-filesystems[1297]: Found vda7 Dec 13 13:59:06.513251 extend-filesystems[1297]: Found vda9 Dec 13 13:59:06.513251 extend-filesystems[1297]: Checking size of /dev/vda9 Dec 13 13:59:06.519553 jq[1322]: true Dec 13 13:59:06.519788 tar[1319]: linux-arm64/helm Dec 13 13:59:06.518437 systemd[1]: motdgen.service: Deactivated successfully. Dec 13 13:59:06.518693 systemd[1]: Finished motdgen.service. Dec 13 13:59:06.528966 dbus-daemon[1295]: [system] SELinux support is enabled Dec 13 13:59:06.529179 systemd[1]: Started dbus.service. Dec 13 13:59:06.531878 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Dec 13 13:59:06.531910 systemd[1]: Reached target system-config.target. Dec 13 13:59:06.532843 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Dec 13 13:59:06.532871 systemd[1]: Reached target user-config.target. Dec 13 13:59:06.563949 extend-filesystems[1297]: Resized partition /dev/vda9 Dec 13 13:59:06.576903 extend-filesystems[1353]: resize2fs 1.46.5 (30-Dec-2021) Dec 13 13:59:06.588476 update_engine[1311]: I1213 13:59:06.588269 1311 main.cc:92] Flatcar Update Engine starting Dec 13 13:59:06.591025 systemd[1]: Started update-engine.service. Dec 13 13:59:06.593368 systemd[1]: Started locksmithd.service. 
Dec 13 13:59:06.596385 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Dec 13 13:59:06.598556 update_engine[1311]: I1213 13:59:06.598531 1311 update_check_scheduler.cc:74] Next update check in 2m49s Dec 13 13:59:06.614594 env[1323]: time="2024-12-13T13:59:06.614549320Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Dec 13 13:59:06.631569 env[1323]: time="2024-12-13T13:59:06.631529760Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Dec 13 13:59:06.635223 env[1323]: time="2024-12-13T13:59:06.635162120Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Dec 13 13:59:06.635456 systemd-logind[1308]: Watching system buttons on /dev/input/event0 (Power Button) Dec 13 13:59:06.636043 systemd-logind[1308]: New seat seat0. Dec 13 13:59:06.639518 env[1323]: time="2024-12-13T13:59:06.639474680Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.173-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Dec 13 13:59:06.639573 env[1323]: time="2024-12-13T13:59:06.639534760Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Dec 13 13:59:06.639859 env[1323]: time="2024-12-13T13:59:06.639832240Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 13:59:06.639904 env[1323]: time="2024-12-13T13:59:06.639857480Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Dec 13 13:59:06.639904 env[1323]: time="2024-12-13T13:59:06.639871360Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Dec 13 13:59:06.639904 env[1323]: time="2024-12-13T13:59:06.639881040Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Dec 13 13:59:06.639986 env[1323]: time="2024-12-13T13:59:06.639967680Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Dec 13 13:59:06.641201 env[1323]: time="2024-12-13T13:59:06.641177400Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Dec 13 13:59:06.641386 env[1323]: time="2024-12-13T13:59:06.641356680Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 13:59:06.641418 env[1323]: time="2024-12-13T13:59:06.641390400Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." 
type=io.containerd.metadata.v1 Dec 13 13:59:06.641470 env[1323]: time="2024-12-13T13:59:06.641452000Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Dec 13 13:59:06.641502 env[1323]: time="2024-12-13T13:59:06.641470520Z" level=info msg="metadata content store policy set" policy=shared Dec 13 13:59:06.642256 systemd[1]: Started systemd-logind.service. Dec 13 13:59:06.658885 bash[1349]: Updated "/home/core/.ssh/authorized_keys" Dec 13 13:59:06.659757 systemd[1]: Finished update-ssh-keys-after-ignition.service. Dec 13 13:59:06.662383 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Dec 13 13:59:06.676077 extend-filesystems[1353]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Dec 13 13:59:06.676077 extend-filesystems[1353]: old_desc_blocks = 1, new_desc_blocks = 1 Dec 13 13:59:06.676077 extend-filesystems[1353]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Dec 13 13:59:06.679756 env[1323]: time="2024-12-13T13:59:06.677494840Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Dec 13 13:59:06.679756 env[1323]: time="2024-12-13T13:59:06.677545160Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Dec 13 13:59:06.679756 env[1323]: time="2024-12-13T13:59:06.677558800Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Dec 13 13:59:06.679756 env[1323]: time="2024-12-13T13:59:06.677589960Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Dec 13 13:59:06.679756 env[1323]: time="2024-12-13T13:59:06.677680320Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Dec 13 13:59:06.679756 env[1323]: time="2024-12-13T13:59:06.677700040Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Dec 13 13:59:06.679756 env[1323]: time="2024-12-13T13:59:06.677713280Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Dec 13 13:59:06.679756 env[1323]: time="2024-12-13T13:59:06.678045880Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Dec 13 13:59:06.679756 env[1323]: time="2024-12-13T13:59:06.678066800Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Dec 13 13:59:06.679756 env[1323]: time="2024-12-13T13:59:06.678081880Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Dec 13 13:59:06.679756 env[1323]: time="2024-12-13T13:59:06.678095240Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Dec 13 13:59:06.679756 env[1323]: time="2024-12-13T13:59:06.678108640Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Dec 13 13:59:06.679756 env[1323]: time="2024-12-13T13:59:06.678244240Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Dec 13 13:59:06.679756 env[1323]: time="2024-12-13T13:59:06.678318000Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." 
type=io.containerd.monitor.v1 Dec 13 13:59:06.680035 env[1323]: time="2024-12-13T13:59:06.678639720Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Dec 13 13:59:06.680035 env[1323]: time="2024-12-13T13:59:06.678664920Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Dec 13 13:59:06.680035 env[1323]: time="2024-12-13T13:59:06.678679040Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Dec 13 13:59:06.680035 env[1323]: time="2024-12-13T13:59:06.678786800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Dec 13 13:59:06.680035 env[1323]: time="2024-12-13T13:59:06.678799800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Dec 13 13:59:06.680035 env[1323]: time="2024-12-13T13:59:06.678811960Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Dec 13 13:59:06.680035 env[1323]: time="2024-12-13T13:59:06.678822440Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Dec 13 13:59:06.680035 env[1323]: time="2024-12-13T13:59:06.678834400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Dec 13 13:59:06.680035 env[1323]: time="2024-12-13T13:59:06.678846600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Dec 13 13:59:06.680035 env[1323]: time="2024-12-13T13:59:06.678857200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Dec 13 13:59:06.680035 env[1323]: time="2024-12-13T13:59:06.678868880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Dec 13 13:59:06.680035 env[1323]: time="2024-12-13T13:59:06.678881400Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Dec 13 13:59:06.680035 env[1323]: time="2024-12-13T13:59:06.679017240Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Dec 13 13:59:06.680035 env[1323]: time="2024-12-13T13:59:06.679034000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Dec 13 13:59:06.680035 env[1323]: time="2024-12-13T13:59:06.679045760Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Dec 13 13:59:06.680334 extend-filesystems[1297]: Resized filesystem in /dev/vda9 Dec 13 13:59:06.681213 env[1323]: time="2024-12-13T13:59:06.679056800Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Dec 13 13:59:06.681213 env[1323]: time="2024-12-13T13:59:06.679070880Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Dec 13 13:59:06.681213 env[1323]: time="2024-12-13T13:59:06.679081520Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." 
type=io.containerd.internal.v1 Dec 13 13:59:06.681213 env[1323]: time="2024-12-13T13:59:06.679100160Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Dec 13 13:59:06.681213 env[1323]: time="2024-12-13T13:59:06.679131560Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Dec 13 13:59:06.680459 systemd[1]: Started containerd.service. Dec 13 13:59:06.681424 env[1323]: time="2024-12-13T13:59:06.679321360Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Dec 13 13:59:06.681424 env[1323]: time="2024-12-13T13:59:06.679389440Z" level=info msg="Connect containerd service" Dec 13 13:59:06.681424 env[1323]: time="2024-12-13T13:59:06.679428240Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Dec 13 13:59:06.681424 env[1323]: time="2024-12-13T13:59:06.679987320Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 13 13:59:06.681424 env[1323]: time="2024-12-13T13:59:06.680268360Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Dec 13 13:59:06.681424 env[1323]: time="2024-12-13T13:59:06.680302560Z" level=info msg=serving... 
address=/run/containerd/containerd.sock Dec 13 13:59:06.681424 env[1323]: time="2024-12-13T13:59:06.680349640Z" level=info msg="containerd successfully booted in 0.066455s" Dec 13 13:59:06.682580 systemd[1]: extend-filesystems.service: Deactivated successfully. Dec 13 13:59:06.682826 systemd[1]: Finished extend-filesystems.service. Dec 13 13:59:06.684748 env[1323]: time="2024-12-13T13:59:06.684698520Z" level=info msg="Start subscribing containerd event" Dec 13 13:59:06.684799 env[1323]: time="2024-12-13T13:59:06.684761720Z" level=info msg="Start recovering state" Dec 13 13:59:06.685000 env[1323]: time="2024-12-13T13:59:06.684835000Z" level=info msg="Start event monitor" Dec 13 13:59:06.685000 env[1323]: time="2024-12-13T13:59:06.684855640Z" level=info msg="Start snapshots syncer" Dec 13 13:59:06.685000 env[1323]: time="2024-12-13T13:59:06.684865560Z" level=info msg="Start cni network conf syncer for default" Dec 13 13:59:06.685000 env[1323]: time="2024-12-13T13:59:06.684881040Z" level=info msg="Start streaming server" Dec 13 13:59:06.698023 locksmithd[1355]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Dec 13 13:59:06.931227 tar[1319]: linux-arm64/LICENSE Dec 13 13:59:06.931309 tar[1319]: linux-arm64/README.md Dec 13 13:59:06.935671 systemd[1]: Finished prepare-helm.service. Dec 13 13:59:07.245602 systemd-networkd[1101]: eth0: Gained IPv6LL Dec 13 13:59:07.247351 systemd[1]: Finished systemd-networkd-wait-online.service. Dec 13 13:59:07.248752 systemd[1]: Reached target network-online.target. Dec 13 13:59:07.251250 systemd[1]: Starting kubelet.service... Dec 13 13:59:07.760023 systemd[1]: Started kubelet.service. Dec 13 13:59:08.275911 kubelet[1381]: E1213 13:59:08.275841 1381 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 13:59:08.277979 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 13:59:08.278135 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 13:59:10.486048 sshd_keygen[1325]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Dec 13 13:59:10.504139 systemd[1]: Finished sshd-keygen.service. Dec 13 13:59:10.506687 systemd[1]: Starting issuegen.service... Dec 13 13:59:10.511615 systemd[1]: issuegen.service: Deactivated successfully. Dec 13 13:59:10.511839 systemd[1]: Finished issuegen.service. Dec 13 13:59:10.514164 systemd[1]: Starting systemd-user-sessions.service... Dec 13 13:59:10.521756 systemd[1]: Finished systemd-user-sessions.service. Dec 13 13:59:10.524079 systemd[1]: Started getty@tty1.service. Dec 13 13:59:10.526166 systemd[1]: Started serial-getty@ttyAMA0.service. Dec 13 13:59:10.527283 systemd[1]: Reached target getty.target. Dec 13 13:59:10.528299 systemd[1]: Reached target multi-user.target. Dec 13 13:59:10.530461 systemd[1]: Starting systemd-update-utmp-runlevel.service... Dec 13 13:59:10.536796 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Dec 13 13:59:10.537023 systemd[1]: Finished systemd-update-utmp-runlevel.service. Dec 13 13:59:10.538182 systemd[1]: Startup finished in 5.171s (kernel) + 7.369s (userspace) = 12.540s. Dec 13 13:59:11.672850 systemd[1]: Created slice system-sshd.slice. 
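For reference, the EXT4 resize logged above grows the root filesystem from 553472 to 1864699 blocks, and resize2fs reports 4k blocks, so the on-line resize more than triples the usable space. The arithmetic as a Python one-off:

    BLOCK = 4096                      # 4 KiB blocks, per the resize2fs output
    before, after = 553472, 1864699   # block counts from the kernel messages
    print(f"{before * BLOCK / 2**30:.2f} GiB -> {after * BLOCK / 2**30:.2f} GiB")
    # 2.11 GiB -> 7.11 GiB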
Dec 13 13:59:11.674078 systemd[1]: Started sshd@0-10.0.0.38:22-10.0.0.1:44276.service. Dec 13 13:59:11.723987 sshd[1408]: Accepted publickey for core from 10.0.0.1 port 44276 ssh2: RSA SHA256:/HJyHm5Z3TKV0xVrRefgtheJNUHxRnoHBht1EzpqsE0 Dec 13 13:59:11.727617 sshd[1408]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 13:59:11.738727 systemd[1]: Created slice user-500.slice. Dec 13 13:59:11.739720 systemd[1]: Starting user-runtime-dir@500.service... Dec 13 13:59:11.741921 systemd-logind[1308]: New session 1 of user core. Dec 13 13:59:11.751146 systemd[1]: Finished user-runtime-dir@500.service. Dec 13 13:59:11.752457 systemd[1]: Starting user@500.service... Dec 13 13:59:11.756200 (systemd)[1413]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Dec 13 13:59:11.817291 systemd[1413]: Queued start job for default target default.target. Dec 13 13:59:11.817502 systemd[1413]: Reached target paths.target. Dec 13 13:59:11.817519 systemd[1413]: Reached target sockets.target. Dec 13 13:59:11.817530 systemd[1413]: Reached target timers.target. Dec 13 13:59:11.817553 systemd[1413]: Reached target basic.target. Dec 13 13:59:11.817590 systemd[1413]: Reached target default.target. Dec 13 13:59:11.817610 systemd[1413]: Startup finished in 55ms. Dec 13 13:59:11.818449 systemd[1]: Started user@500.service. Dec 13 13:59:11.819887 systemd[1]: Started session-1.scope. Dec 13 13:59:11.873721 systemd[1]: Started sshd@1-10.0.0.38:22-10.0.0.1:44278.service. Dec 13 13:59:11.911727 sshd[1422]: Accepted publickey for core from 10.0.0.1 port 44278 ssh2: RSA SHA256:/HJyHm5Z3TKV0xVrRefgtheJNUHxRnoHBht1EzpqsE0 Dec 13 13:59:11.912977 sshd[1422]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 13:59:11.918345 systemd-logind[1308]: New session 2 of user core. Dec 13 13:59:11.918960 systemd[1]: Started session-2.scope. Dec 13 13:59:11.977098 sshd[1422]: pam_unix(sshd:session): session closed for user core Dec 13 13:59:11.980067 systemd[1]: Started sshd@2-10.0.0.38:22-10.0.0.1:44288.service. Dec 13 13:59:11.980533 systemd[1]: sshd@1-10.0.0.38:22-10.0.0.1:44278.service: Deactivated successfully. Dec 13 13:59:11.981635 systemd-logind[1308]: Session 2 logged out. Waiting for processes to exit. Dec 13 13:59:11.981715 systemd[1]: session-2.scope: Deactivated successfully. Dec 13 13:59:11.985032 systemd-logind[1308]: Removed session 2. Dec 13 13:59:12.016831 sshd[1427]: Accepted publickey for core from 10.0.0.1 port 44288 ssh2: RSA SHA256:/HJyHm5Z3TKV0xVrRefgtheJNUHxRnoHBht1EzpqsE0 Dec 13 13:59:12.018223 sshd[1427]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 13:59:12.022620 systemd-logind[1308]: New session 3 of user core. Dec 13 13:59:12.023890 systemd[1]: Started session-3.scope. Dec 13 13:59:12.075367 sshd[1427]: pam_unix(sshd:session): session closed for user core Dec 13 13:59:12.078179 systemd[1]: Started sshd@3-10.0.0.38:22-10.0.0.1:44298.service. Dec 13 13:59:12.078928 systemd[1]: sshd@2-10.0.0.38:22-10.0.0.1:44288.service: Deactivated successfully. Dec 13 13:59:12.079962 systemd[1]: session-3.scope: Deactivated successfully. Dec 13 13:59:12.080440 systemd-logind[1308]: Session 3 logged out. Waiting for processes to exit. Dec 13 13:59:12.082667 systemd-logind[1308]: Removed session 3. 
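The "Accepted publickey ... SHA256:/HJyHm5Z..." entries in these sshd lines identify the client key by OpenSSH's standard fingerprint: the SHA-256 digest of the raw key blob, base64-encoded with the trailing "=" padding stripped. A small Python sketch of that derivation, which can be fed any authorized_keys-style line such as the one written to /home/core/.ssh/authorized_keys earlier in this log:

    import base64, hashlib

    def ssh_fingerprint(pubkey_line: str) -> str:
        # Field 2 of "ssh-rsa AAAA... comment" is the base64-encoded key blob.
        blob = base64.b64decode(pubkey_line.split()[1])
        digest = hashlib.sha256(blob).digest()
        return "SHA256:" + base64.b64encode(digest).decode().rstrip("=")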
Dec 13 13:59:12.117998 sshd[1434]: Accepted publickey for core from 10.0.0.1 port 44298 ssh2: RSA SHA256:/HJyHm5Z3TKV0xVrRefgtheJNUHxRnoHBht1EzpqsE0 Dec 13 13:59:12.119619 sshd[1434]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 13:59:12.126493 systemd[1]: Started session-4.scope. Dec 13 13:59:12.127092 systemd-logind[1308]: New session 4 of user core. Dec 13 13:59:12.183471 sshd[1434]: pam_unix(sshd:session): session closed for user core Dec 13 13:59:12.185862 systemd[1]: Started sshd@4-10.0.0.38:22-10.0.0.1:44302.service. Dec 13 13:59:12.186377 systemd[1]: sshd@3-10.0.0.38:22-10.0.0.1:44298.service: Deactivated successfully. Dec 13 13:59:12.187863 systemd[1]: session-4.scope: Deactivated successfully. Dec 13 13:59:12.188339 systemd-logind[1308]: Session 4 logged out. Waiting for processes to exit. Dec 13 13:59:12.189161 systemd-logind[1308]: Removed session 4. Dec 13 13:59:12.222879 sshd[1441]: Accepted publickey for core from 10.0.0.1 port 44302 ssh2: RSA SHA256:/HJyHm5Z3TKV0xVrRefgtheJNUHxRnoHBht1EzpqsE0 Dec 13 13:59:12.224931 sshd[1441]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 13:59:12.229443 systemd-logind[1308]: New session 5 of user core. Dec 13 13:59:12.230824 systemd[1]: Started session-5.scope. Dec 13 13:59:12.297621 sudo[1447]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Dec 13 13:59:12.297847 sudo[1447]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Dec 13 13:59:12.362741 systemd[1]: Starting docker.service... Dec 13 13:59:12.445355 env[1459]: time="2024-12-13T13:59:12.445144951Z" level=info msg="Starting up" Dec 13 13:59:12.446818 env[1459]: time="2024-12-13T13:59:12.446785797Z" level=info msg="parsed scheme: \"unix\"" module=grpc Dec 13 13:59:12.446818 env[1459]: time="2024-12-13T13:59:12.446811840Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Dec 13 13:59:12.446901 env[1459]: time="2024-12-13T13:59:12.446836447Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Dec 13 13:59:12.446901 env[1459]: time="2024-12-13T13:59:12.446857076Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Dec 13 13:59:12.449009 env[1459]: time="2024-12-13T13:59:12.448963654Z" level=info msg="parsed scheme: \"unix\"" module=grpc Dec 13 13:59:12.449009 env[1459]: time="2024-12-13T13:59:12.448992813Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Dec 13 13:59:12.449009 env[1459]: time="2024-12-13T13:59:12.449008193Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Dec 13 13:59:12.449123 env[1459]: time="2024-12-13T13:59:12.449017749Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Dec 13 13:59:12.679572 env[1459]: time="2024-12-13T13:59:12.679482840Z" level=warning msg="Your kernel does not support cgroup blkio weight" Dec 13 13:59:12.679757 env[1459]: time="2024-12-13T13:59:12.679739329Z" level=warning msg="Your kernel does not support cgroup blkio weight_device" Dec 13 13:59:12.680011 env[1459]: time="2024-12-13T13:59:12.679988722Z" level=info msg="Loading containers: start." 
Dec 13 13:59:12.809422 kernel: Initializing XFRM netlink socket Dec 13 13:59:12.833006 env[1459]: time="2024-12-13T13:59:12.832968120Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address" Dec 13 13:59:12.887181 systemd-networkd[1101]: docker0: Link UP Dec 13 13:59:12.908821 env[1459]: time="2024-12-13T13:59:12.908782126Z" level=info msg="Loading containers: done." Dec 13 13:59:12.931015 env[1459]: time="2024-12-13T13:59:12.930928981Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Dec 13 13:59:12.931317 env[1459]: time="2024-12-13T13:59:12.931296571Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 Dec 13 13:59:12.931499 env[1459]: time="2024-12-13T13:59:12.931480961Z" level=info msg="Daemon has completed initialization" Dec 13 13:59:12.945339 systemd[1]: Started docker.service. Dec 13 13:59:12.949709 env[1459]: time="2024-12-13T13:59:12.949666924Z" level=info msg="API listen on /run/docker.sock" Dec 13 13:59:13.566318 env[1323]: time="2024-12-13T13:59:13.566253659Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.12\"" Dec 13 13:59:18.528991 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Dec 13 13:59:18.529181 systemd[1]: Stopped kubelet.service. Dec 13 13:59:18.530676 systemd[1]: Starting kubelet.service... Dec 13 13:59:18.647022 systemd[1]: Started kubelet.service. Dec 13 13:59:18.696912 kubelet[1604]: E1213 13:59:18.696859 1604 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 13:59:18.699687 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 13:59:18.699839 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 13:59:19.452319 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2038591707.mount: Deactivated successfully. 
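Docker is now up, but pod networking is still unconfigured: during containerd startup the CRI plugin warned that no network config was found in /etc/cni/net.d, and with NetworkPluginMaxConfNum:1 only the first conflist placed there will be used. A hypothetical minimal bridge conflist of the kind that would satisfy that check; the name and subnet here are illustrative, not taken from this system:

    {
      "cniVersion": "0.4.0",
      "name": "examplenet",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "cni0",
          "isGateway": true,
          "ipMasq": true,
          "ipam": { "type": "host-local", "subnet": "10.85.0.0/16" }
        }
      ]
    }

In a kubeadm-style bring-up like the one that follows, a CNI plugin (flannel, Calico, and so on) normally installs this file itself.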
Dec 13 13:59:20.779034 env[1323]: time="2024-12-13T13:59:20.778982742Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 13:59:20.780617 env[1323]: time="2024-12-13T13:59:20.780586887Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:50c86b7f73fdd28bacd4abf45260c9d3abc3b57eb038fa61fc45b5d0f2763e6f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 13:59:20.782439 env[1323]: time="2024-12-13T13:59:20.782412782Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 13:59:20.784725 env[1323]: time="2024-12-13T13:59:20.784692629Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:2804b1e7b9e08f3a3468f8fd2f6487c55968b9293ee51b9efb865b3298acfa26,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 13:59:20.785680 env[1323]: time="2024-12-13T13:59:20.785656011Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.12\" returns image reference \"sha256:50c86b7f73fdd28bacd4abf45260c9d3abc3b57eb038fa61fc45b5d0f2763e6f\"" Dec 13 13:59:20.794741 env[1323]: time="2024-12-13T13:59:20.794708141Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.12\"" Dec 13 13:59:22.380519 env[1323]: time="2024-12-13T13:59:22.380466099Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 13:59:22.382795 env[1323]: time="2024-12-13T13:59:22.382756119Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:2d47abaa6ccc533f84ef74fff6d509de10bb040317351b45afe95a8021a1ddf7,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 13:59:22.384511 env[1323]: time="2024-12-13T13:59:22.384475608Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 13:59:22.386230 env[1323]: time="2024-12-13T13:59:22.386196062Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:e2f26a3f5ef3fd01f6330cab8b078cf303cfb6d36911a210d0915d535910e412,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 13:59:22.387799 env[1323]: time="2024-12-13T13:59:22.387762619Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.12\" returns image reference \"sha256:2d47abaa6ccc533f84ef74fff6d509de10bb040317351b45afe95a8021a1ddf7\"" Dec 13 13:59:22.398004 env[1323]: time="2024-12-13T13:59:22.397971836Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.12\"" Dec 13 13:59:23.615426 env[1323]: time="2024-12-13T13:59:23.615347054Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 13:59:23.618843 env[1323]: time="2024-12-13T13:59:23.618801253Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ae633c52a23907b58f7a7867d2cccf3d3f5ebd8977beb6788e20fbecd3f446db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 13:59:23.621266 env[1323]: 
time="2024-12-13T13:59:23.620373033Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 13:59:23.622274 env[1323]: time="2024-12-13T13:59:23.622246037Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:ed66e2102f4705d45de7513decf3ac61879704984409323779d19e98b970568c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 13:59:23.623214 env[1323]: time="2024-12-13T13:59:23.623177269Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.12\" returns image reference \"sha256:ae633c52a23907b58f7a7867d2cccf3d3f5ebd8977beb6788e20fbecd3f446db\"" Dec 13 13:59:23.632146 env[1323]: time="2024-12-13T13:59:23.632111838Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\"" Dec 13 13:59:24.637423 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2200645178.mount: Deactivated successfully. Dec 13 13:59:25.159509 env[1323]: time="2024-12-13T13:59:25.159466791Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 13:59:25.160862 env[1323]: time="2024-12-13T13:59:25.160838681Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:768ee8cfd9311233d038d18430c18136e1ae4dd2e6de40fcf1c670bba2da6d06,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 13:59:25.162129 env[1323]: time="2024-12-13T13:59:25.162083004Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 13:59:25.163323 env[1323]: time="2024-12-13T13:59:25.163277506Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:bc761494b78fa152a759457f42bc9b86ee9d18f5929bb127bd5f72f8e2112c39,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 13:59:25.163719 env[1323]: time="2024-12-13T13:59:25.163693593Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\" returns image reference \"sha256:768ee8cfd9311233d038d18430c18136e1ae4dd2e6de40fcf1c670bba2da6d06\"" Dec 13 13:59:25.172204 env[1323]: time="2024-12-13T13:59:25.172178054Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Dec 13 13:59:25.759408 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2215366463.mount: Deactivated successfully. 
Dec 13 13:59:26.698779 env[1323]: time="2024-12-13T13:59:26.698735080Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.11.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 13:59:26.700384 env[1323]: time="2024-12-13T13:59:26.700333732Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 13:59:26.702948 env[1323]: time="2024-12-13T13:59:26.702922914Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.11.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 13:59:26.704642 env[1323]: time="2024-12-13T13:59:26.704617459Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 13:59:26.706319 env[1323]: time="2024-12-13T13:59:26.706288232Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\"" Dec 13 13:59:26.716025 env[1323]: time="2024-12-13T13:59:26.716001258Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Dec 13 13:59:27.121339 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount908965252.mount: Deactivated successfully. Dec 13 13:59:27.125820 env[1323]: time="2024-12-13T13:59:27.125777522Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 13:59:27.127663 env[1323]: time="2024-12-13T13:59:27.127626733Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 13:59:27.129227 env[1323]: time="2024-12-13T13:59:27.129195068Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 13:59:27.130824 env[1323]: time="2024-12-13T13:59:27.130796958Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 13:59:27.131286 env[1323]: time="2024-12-13T13:59:27.131255398Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\"" Dec 13 13:59:27.142025 env[1323]: time="2024-12-13T13:59:27.141982010Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\"" Dec 13 13:59:27.661134 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2919157934.mount: Deactivated successfully. Dec 13 13:59:28.754904 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Dec 13 13:59:28.755081 systemd[1]: Stopped kubelet.service. Dec 13 13:59:28.756515 systemd[1]: Starting kubelet.service... Dec 13 13:59:28.833244 systemd[1]: Started kubelet.service. 
Dec 13 13:59:28.884626 kubelet[1659]: E1213 13:59:28.884561 1659 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 13:59:28.886874 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 13:59:28.887020 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 13:59:29.596547 env[1323]: time="2024-12-13T13:59:29.596486044Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.10-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 13:59:29.598427 env[1323]: time="2024-12-13T13:59:29.598394258Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 13:59:29.600722 env[1323]: time="2024-12-13T13:59:29.600685270Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.10-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 13:59:29.602713 env[1323]: time="2024-12-13T13:59:29.602677022Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 13:59:29.603539 env[1323]: time="2024-12-13T13:59:29.603502975Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\"" Dec 13 13:59:34.672962 systemd[1]: Stopped kubelet.service. Dec 13 13:59:34.675203 systemd[1]: Starting kubelet.service... Dec 13 13:59:34.694298 systemd[1]: Reloading. Dec 13 13:59:34.746242 /usr/lib/systemd/system-generators/torcx-generator[1773]: time="2024-12-13T13:59:34Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]" Dec 13 13:59:34.746274 /usr/lib/systemd/system-generators/torcx-generator[1773]: time="2024-12-13T13:59:34Z" level=info msg="torcx already run" Dec 13 13:59:34.811954 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Dec 13 13:59:34.811972 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 13 13:59:34.827297 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 13:59:34.890642 systemd[1]: Started kubelet.service. Dec 13 13:59:34.892072 systemd[1]: Stopping kubelet.service... Dec 13 13:59:34.892468 systemd[1]: kubelet.service: Deactivated successfully. Dec 13 13:59:34.892705 systemd[1]: Stopped kubelet.service. Dec 13 13:59:34.894410 systemd[1]: Starting kubelet.service... Dec 13 13:59:34.969975 systemd[1]: Started kubelet.service. 
Dec 13 13:59:35.014486 kubelet[1832]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 13:59:35.014819 kubelet[1832]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Dec 13 13:59:35.014819 kubelet[1832]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 13:59:35.014819 kubelet[1832]: I1213 13:59:35.014598 1832 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 13 13:59:35.688182 kubelet[1832]: I1213 13:59:35.688139 1832 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Dec 13 13:59:35.688182 kubelet[1832]: I1213 13:59:35.688175 1832 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 13 13:59:35.688412 kubelet[1832]: I1213 13:59:35.688399 1832 server.go:919] "Client rotation is on, will bootstrap in background" Dec 13 13:59:35.711248 kubelet[1832]: I1213 13:59:35.711214 1832 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 13:59:35.711526 kubelet[1832]: E1213 13:59:35.711505 1832 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.38:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.38:6443: connect: connection refused Dec 13 13:59:35.718441 kubelet[1832]: I1213 13:59:35.718417 1832 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Dec 13 13:59:35.719266 kubelet[1832]: I1213 13:59:35.719234 1832 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 13 13:59:35.719444 kubelet[1832]: I1213 13:59:35.719424 1832 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Dec 13 13:59:35.719522 kubelet[1832]: I1213 13:59:35.719448 1832 topology_manager.go:138] "Creating topology manager with none policy" Dec 13 13:59:35.719522 kubelet[1832]: I1213 13:59:35.719458 1832 container_manager_linux.go:301] "Creating device plugin manager" Dec 13 13:59:35.719576 kubelet[1832]: I1213 13:59:35.719564 1832 state_mem.go:36] "Initialized new in-memory state store" Dec 13 13:59:35.722028 kubelet[1832]: I1213 13:59:35.722004 1832 kubelet.go:396] "Attempting to sync node with API server" Dec 13 13:59:35.722079 kubelet[1832]: I1213 13:59:35.722037 1832 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 13 13:59:35.722079 kubelet[1832]: I1213 13:59:35.722060 1832 kubelet.go:312] "Adding apiserver pod source" Dec 13 13:59:35.722079 kubelet[1832]: I1213 13:59:35.722070 1832 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 13 13:59:35.722735 kubelet[1832]: I1213 13:59:35.722716 1832 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Dec 13 13:59:35.723129 kubelet[1832]: I1213 13:59:35.723106 1832 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 13 13:59:35.723226 kubelet[1832]: W1213 13:59:35.723211 1832 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
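These kubelet entries also explain the three earlier crash-loops: the deprecated flags are being folded into the config file at /var/lib/kubelet/config.yaml, which kubeadm normally writes during init or join, and until that file exists the kubelet exits immediately. A hypothetical minimal KubeletConfiguration consistent with what the log reports (cgroupfs cgroup driver, static pods from /etc/kubernetes/manifests, containerd's socket):

    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: cgroupfs
    staticPodPath: /etc/kubernetes/manifests
    # Config-file home of the deprecated --container-runtime-endpoint flag (v1.27+):
    containerRuntimeEndpoint: unix:///run/containerd/containerd.sock

This is a sketch under those assumptions, not the file the node eventually used; the log itself does not show the config contents.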
Dec 13 13:59:35.723897 kubelet[1832]: I1213 13:59:35.723868 1832 server.go:1256] "Started kubelet" Dec 13 13:59:35.724330 kubelet[1832]: W1213 13:59:35.724292 1832 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.0.0.38:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.38:6443: connect: connection refused Dec 13 13:59:35.724486 kubelet[1832]: E1213 13:59:35.724471 1832 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.38:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.38:6443: connect: connection refused Dec 13 13:59:35.724610 kubelet[1832]: W1213 13:59:35.724562 1832 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.0.0.38:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.38:6443: connect: connection refused Dec 13 13:59:35.724610 kubelet[1832]: E1213 13:59:35.724609 1832 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.38:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.38:6443: connect: connection refused Dec 13 13:59:35.724732 kubelet[1832]: I1213 13:59:35.724717 1832 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Dec 13 13:59:35.729639 kubelet[1832]: I1213 13:59:35.729428 1832 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 13 13:59:35.729770 kubelet[1832]: I1213 13:59:35.729745 1832 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 13 13:59:35.730451 kubelet[1832]: I1213 13:59:35.730431 1832 server.go:461] "Adding debug handlers to kubelet server" Dec 13 13:59:35.733572 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). 
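
Every reflector failure above reduces to the same condition: nothing is listening on 10.0.0.38:6443 yet, because the kube-apiserver static pod has not started. A minimal sketch of that connectivity probe, using only the address from the log (the 2-second timeout is an arbitrary choice):

```go
// Probe the API server endpoint the reflectors are failing against.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	conn, err := net.DialTimeout("tcp", "10.0.0.38:6443", 2*time.Second)
	if err != nil {
		// During first boot this prints the same failure the kubelet logs:
		// "dial tcp 10.0.0.38:6443: connect: connection refused".
		fmt.Println("apiserver not reachable:", err)
		return
	}
	defer conn.Close()
	fmt.Println("apiserver endpoint is accepting connections")
}
```

Once the kube-apiserver container further below starts, the same dial succeeds and the reflectors recover on their next relist.
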
Dec 13 13:59:35.735055 kubelet[1832]: E1213 13:59:35.735026 1832 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.38:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.38:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1810c148fa9a1eb9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2024-12-13 13:59:35.723851449 +0000 UTC m=+0.749972316,LastTimestamp:2024-12-13 13:59:35.723851449 +0000 UTC m=+0.749972316,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Dec 13 13:59:35.737047 kubelet[1832]: I1213 13:59:35.737014 1832 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 13 13:59:35.737682 kubelet[1832]: I1213 13:59:35.737518 1832 volume_manager.go:291] "Starting Kubelet Volume Manager" Dec 13 13:59:35.737682 kubelet[1832]: I1213 13:59:35.737594 1832 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Dec 13 13:59:35.737682 kubelet[1832]: I1213 13:59:35.737654 1832 reconciler_new.go:29] "Reconciler: start to sync state" Dec 13 13:59:35.737810 kubelet[1832]: E1213 13:59:35.737783 1832 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 13 13:59:35.737841 kubelet[1832]: W1213 13:59:35.737799 1832 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.0.0.38:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.38:6443: connect: connection refused Dec 13 13:59:35.737841 kubelet[1832]: E1213 13:59:35.737835 1832 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.38:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.38:6443: connect: connection refused Dec 13 13:59:35.737956 kubelet[1832]: E1213 13:59:35.737929 1832 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.38:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.38:6443: connect: connection refused" interval="200ms" Dec 13 13:59:35.738326 kubelet[1832]: I1213 13:59:35.738307 1832 factory.go:221] Registration of the systemd container factory successfully Dec 13 13:59:35.738430 kubelet[1832]: I1213 13:59:35.738413 1832 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 13 13:59:35.739339 kubelet[1832]: I1213 13:59:35.739320 1832 factory.go:221] Registration of the containerd container factory successfully Dec 13 13:59:35.752350 kubelet[1832]: I1213 13:59:35.752298 1832 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 13 13:59:35.753623 kubelet[1832]: I1213 13:59:35.753597 1832 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Dec 13 13:59:35.753623 kubelet[1832]: I1213 13:59:35.753619 1832 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 13 13:59:35.753705 kubelet[1832]: I1213 13:59:35.753635 1832 kubelet.go:2329] "Starting kubelet main sync loop" Dec 13 13:59:35.753705 kubelet[1832]: E1213 13:59:35.753692 1832 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 13 13:59:35.754590 kubelet[1832]: W1213 13:59:35.754424 1832 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.0.0.38:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.38:6443: connect: connection refused Dec 13 13:59:35.754590 kubelet[1832]: E1213 13:59:35.754473 1832 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.38:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.38:6443: connect: connection refused Dec 13 13:59:35.758102 kubelet[1832]: I1213 13:59:35.758068 1832 cpu_manager.go:214] "Starting CPU manager" policy="none" Dec 13 13:59:35.758102 kubelet[1832]: I1213 13:59:35.758098 1832 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Dec 13 13:59:35.758207 kubelet[1832]: I1213 13:59:35.758115 1832 state_mem.go:36] "Initialized new in-memory state store" Dec 13 13:59:35.832785 kubelet[1832]: I1213 13:59:35.832742 1832 policy_none.go:49] "None policy: Start" Dec 13 13:59:35.833690 kubelet[1832]: I1213 13:59:35.833668 1832 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 13 13:59:35.833784 kubelet[1832]: I1213 13:59:35.833721 1832 state_mem.go:35] "Initializing new in-memory state store" Dec 13 13:59:35.839017 kubelet[1832]: I1213 13:59:35.838990 1832 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Dec 13 13:59:35.839684 kubelet[1832]: E1213 13:59:35.839667 1832 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.38:6443/api/v1/nodes\": dial tcp 10.0.0.38:6443: connect: connection refused" node="localhost" Dec 13 13:59:35.839784 kubelet[1832]: I1213 13:59:35.839677 1832 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 13 13:59:35.840040 kubelet[1832]: I1213 13:59:35.840020 1832 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 13 13:59:35.841193 kubelet[1832]: E1213 13:59:35.841156 1832 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Dec 13 13:59:35.854395 kubelet[1832]: I1213 13:59:35.854367 1832 topology_manager.go:215] "Topology Admit Handler" podUID="d6435a06bb2fe4b9715faa238f50149e" podNamespace="kube-system" podName="kube-apiserver-localhost" Dec 13 13:59:35.855388 kubelet[1832]: I1213 13:59:35.855347 1832 topology_manager.go:215] "Topology Admit Handler" podUID="4f8e0d694c07e04969646aa3c152c34a" podNamespace="kube-system" podName="kube-controller-manager-localhost" Dec 13 13:59:35.856035 kubelet[1832]: I1213 13:59:35.856010 1832 topology_manager.go:215] "Topology Admit Handler" podUID="c4144e8f85b2123a6afada0c1705bbba" podNamespace="kube-system" podName="kube-scheduler-localhost" Dec 13 13:59:35.938685 kubelet[1832]: E1213 13:59:35.938603 1832 controller.go:145] "Failed to ensure lease 
exists, will retry" err="Get \"https://10.0.0.38:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.38:6443: connect: connection refused" interval="400ms" Dec 13 13:59:36.039020 kubelet[1832]: I1213 13:59:36.038995 1832 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 13:59:36.039418 kubelet[1832]: I1213 13:59:36.039359 1832 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 13:59:36.039547 kubelet[1832]: I1213 13:59:36.039532 1832 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 13:59:36.039645 kubelet[1832]: I1213 13:59:36.039635 1832 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 13:59:36.039736 kubelet[1832]: I1213 13:59:36.039718 1832 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c4144e8f85b2123a6afada0c1705bbba-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"c4144e8f85b2123a6afada0c1705bbba\") " pod="kube-system/kube-scheduler-localhost" Dec 13 13:59:36.039827 kubelet[1832]: I1213 13:59:36.039817 1832 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d6435a06bb2fe4b9715faa238f50149e-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"d6435a06bb2fe4b9715faa238f50149e\") " pod="kube-system/kube-apiserver-localhost" Dec 13 13:59:36.039920 kubelet[1832]: I1213 13:59:36.039910 1832 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d6435a06bb2fe4b9715faa238f50149e-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"d6435a06bb2fe4b9715faa238f50149e\") " pod="kube-system/kube-apiserver-localhost" Dec 13 13:59:36.040021 kubelet[1832]: I1213 13:59:36.040010 1832 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d6435a06bb2fe4b9715faa238f50149e-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"d6435a06bb2fe4b9715faa238f50149e\") " pod="kube-system/kube-apiserver-localhost" Dec 13 13:59:36.040099 kubelet[1832]: I1213 13:59:36.040090 1832 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 13:59:36.040638 kubelet[1832]: I1213 13:59:36.040601 1832 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Dec 13 13:59:36.040895 kubelet[1832]: E1213 13:59:36.040881 1832 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.38:6443/api/v1/nodes\": dial tcp 10.0.0.38:6443: connect: connection refused" node="localhost" Dec 13 13:59:36.160592 kubelet[1832]: E1213 13:59:36.160559 1832 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:59:36.160700 kubelet[1832]: E1213 13:59:36.160673 1832 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:59:36.161404 env[1323]: time="2024-12-13T13:59:36.161196098Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:d6435a06bb2fe4b9715faa238f50149e,Namespace:kube-system,Attempt:0,}" Dec 13 13:59:36.161404 env[1323]: time="2024-12-13T13:59:36.161201263Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:4f8e0d694c07e04969646aa3c152c34a,Namespace:kube-system,Attempt:0,}" Dec 13 13:59:36.162258 kubelet[1832]: E1213 13:59:36.162221 1832 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:59:36.162967 env[1323]: time="2024-12-13T13:59:36.162573470Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:c4144e8f85b2123a6afada0c1705bbba,Namespace:kube-system,Attempt:0,}" Dec 13 13:59:36.339837 kubelet[1832]: E1213 13:59:36.339808 1832 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.38:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.38:6443: connect: connection refused" interval="800ms" Dec 13 13:59:36.442932 kubelet[1832]: I1213 13:59:36.442889 1832 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Dec 13 13:59:36.443273 kubelet[1832]: E1213 13:59:36.443255 1832 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.38:6443/api/v1/nodes\": dial tcp 10.0.0.38:6443: connect: connection refused" node="localhost" Dec 13 13:59:36.625146 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2689863366.mount: Deactivated successfully. 
Dec 13 13:59:36.635741 env[1323]: time="2024-12-13T13:59:36.635614588Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 13:59:36.636643 env[1323]: time="2024-12-13T13:59:36.636588746Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 13:59:36.638982 env[1323]: time="2024-12-13T13:59:36.638939156Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 13:59:36.640521 env[1323]: time="2024-12-13T13:59:36.640489105Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 13:59:36.642162 env[1323]: time="2024-12-13T13:59:36.642133311Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 13:59:36.642923 env[1323]: time="2024-12-13T13:59:36.642898576Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 13:59:36.645658 env[1323]: time="2024-12-13T13:59:36.645628895Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 13:59:36.650700 env[1323]: time="2024-12-13T13:59:36.650665619Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 13:59:36.653616 env[1323]: time="2024-12-13T13:59:36.653583651Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 13:59:36.654571 env[1323]: time="2024-12-13T13:59:36.654541673Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 13:59:36.655190 env[1323]: time="2024-12-13T13:59:36.655163110Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 13:59:36.656190 env[1323]: time="2024-12-13T13:59:36.656156489Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 13:59:36.688927 env[1323]: time="2024-12-13T13:59:36.688755992Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 13:59:36.688927 env[1323]: time="2024-12-13T13:59:36.688799997Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 13:59:36.688927 env[1323]: time="2024-12-13T13:59:36.688822420Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:59:36.689116 env[1323]: time="2024-12-13T13:59:36.689015298Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 13:59:36.689116 env[1323]: time="2024-12-13T13:59:36.689044648Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 13:59:36.689223 env[1323]: time="2024-12-13T13:59:36.689062306Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:59:36.689389 env[1323]: time="2024-12-13T13:59:36.689342353Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/a84cb58cadd05fe5450730d22688615a433167df1cb47b99b6ec0fb019d3ede9 pid=1890 runtime=io.containerd.runc.v2 Dec 13 13:59:36.689534 env[1323]: time="2024-12-13T13:59:36.689498033Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/fd90c3a0bf6b1a61ff34e2177ea9cb86ed39897ff656bdf29011c85d7c1fb622 pid=1889 runtime=io.containerd.runc.v2 Dec 13 13:59:36.690654 env[1323]: time="2024-12-13T13:59:36.690595438Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 13:59:36.690654 env[1323]: time="2024-12-13T13:59:36.690626270Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 13:59:36.690777 env[1323]: time="2024-12-13T13:59:36.690636400Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:59:36.690983 env[1323]: time="2024-12-13T13:59:36.690950963Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/8f6794bf912f05e5f9928d858e928f191dfb59f236a8663f6c97b8020a0f4ad1 pid=1891 runtime=io.containerd.runc.v2 Dec 13 13:59:36.767412 env[1323]: time="2024-12-13T13:59:36.765542680Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:d6435a06bb2fe4b9715faa238f50149e,Namespace:kube-system,Attempt:0,} returns sandbox id \"fd90c3a0bf6b1a61ff34e2177ea9cb86ed39897ff656bdf29011c85d7c1fb622\"" Dec 13 13:59:36.767551 kubelet[1832]: E1213 13:59:36.766771 1832 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:59:36.769718 env[1323]: time="2024-12-13T13:59:36.769682564Z" level=info msg="CreateContainer within sandbox \"fd90c3a0bf6b1a61ff34e2177ea9cb86ed39897ff656bdf29011c85d7c1fb622\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Dec 13 13:59:36.779545 env[1323]: time="2024-12-13T13:59:36.779493743Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:4f8e0d694c07e04969646aa3c152c34a,Namespace:kube-system,Attempt:0,} returns sandbox id \"8f6794bf912f05e5f9928d858e928f191dfb59f236a8663f6c97b8020a0f4ad1\"" Dec 13 13:59:36.779648 env[1323]: time="2024-12-13T13:59:36.779545196Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:c4144e8f85b2123a6afada0c1705bbba,Namespace:kube-system,Attempt:0,} returns sandbox id \"a84cb58cadd05fe5450730d22688615a433167df1cb47b99b6ec0fb019d3ede9\"" Dec 13 13:59:36.780253 kubelet[1832]: E1213 13:59:36.780226 1832 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:59:36.780430 kubelet[1832]: E1213 13:59:36.780414 1832 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:59:36.782056 env[1323]: time="2024-12-13T13:59:36.782016290Z" level=info msg="CreateContainer within sandbox \"fd90c3a0bf6b1a61ff34e2177ea9cb86ed39897ff656bdf29011c85d7c1fb622\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"2bad618ceb0e31f7d808f22e375d94a8b0b313a5406477beda362fc2f324b0be\"" Dec 13 13:59:36.782811 env[1323]: time="2024-12-13T13:59:36.782774467Z" level=info msg="StartContainer for \"2bad618ceb0e31f7d808f22e375d94a8b0b313a5406477beda362fc2f324b0be\"" Dec 13 13:59:36.782981 env[1323]: time="2024-12-13T13:59:36.782954772Z" level=info msg="CreateContainer within sandbox \"8f6794bf912f05e5f9928d858e928f191dfb59f236a8663f6c97b8020a0f4ad1\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Dec 13 13:59:36.783155 env[1323]: time="2024-12-13T13:59:36.782792525Z" level=info msg="CreateContainer within sandbox \"a84cb58cadd05fe5450730d22688615a433167df1cb47b99b6ec0fb019d3ede9\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Dec 13 13:59:36.804943 env[1323]: time="2024-12-13T13:59:36.804877929Z" level=info msg="CreateContainer within sandbox \"a84cb58cadd05fe5450730d22688615a433167df1cb47b99b6ec0fb019d3ede9\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns 
container id \"5a678c767bb9465093e34c633b65d080003885bfb7d22e9315c1e28e01746689\"" Dec 13 13:59:36.806743 env[1323]: time="2024-12-13T13:59:36.806699477Z" level=info msg="StartContainer for \"5a678c767bb9465093e34c633b65d080003885bfb7d22e9315c1e28e01746689\"" Dec 13 13:59:36.831453 env[1323]: time="2024-12-13T13:59:36.831407609Z" level=info msg="CreateContainer within sandbox \"8f6794bf912f05e5f9928d858e928f191dfb59f236a8663f6c97b8020a0f4ad1\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"df995ec8fd4951faa118c442b1443d700e5ed7eb5b1a2ac3b19b204cc8581f79\"" Dec 13 13:59:36.831928 env[1323]: time="2024-12-13T13:59:36.831899874Z" level=info msg="StartContainer for \"df995ec8fd4951faa118c442b1443d700e5ed7eb5b1a2ac3b19b204cc8581f79\"" Dec 13 13:59:36.857173 kubelet[1832]: W1213 13:59:36.853533 1832 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.0.0.38:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.38:6443: connect: connection refused Dec 13 13:59:36.857173 kubelet[1832]: E1213 13:59:36.853596 1832 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.38:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.38:6443: connect: connection refused Dec 13 13:59:36.875829 env[1323]: time="2024-12-13T13:59:36.874647582Z" level=info msg="StartContainer for \"5a678c767bb9465093e34c633b65d080003885bfb7d22e9315c1e28e01746689\" returns successfully" Dec 13 13:59:36.905196 env[1323]: time="2024-12-13T13:59:36.905067131Z" level=info msg="StartContainer for \"2bad618ceb0e31f7d808f22e375d94a8b0b313a5406477beda362fc2f324b0be\" returns successfully" Dec 13 13:59:36.931676 env[1323]: time="2024-12-13T13:59:36.931632127Z" level=info msg="StartContainer for \"df995ec8fd4951faa118c442b1443d700e5ed7eb5b1a2ac3b19b204cc8581f79\" returns successfully" Dec 13 13:59:37.024851 kubelet[1832]: W1213 13:59:37.024776 1832 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.0.0.38:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.38:6443: connect: connection refused Dec 13 13:59:37.024851 kubelet[1832]: E1213 13:59:37.024856 1832 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.38:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.38:6443: connect: connection refused Dec 13 13:59:37.244725 kubelet[1832]: I1213 13:59:37.244626 1832 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Dec 13 13:59:37.762735 kubelet[1832]: E1213 13:59:37.762703 1832 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:59:37.764854 kubelet[1832]: E1213 13:59:37.764827 1832 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:59:37.766880 kubelet[1832]: E1213 13:59:37.766863 1832 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" 
Dec 13 13:59:38.529688 kubelet[1832]: E1213 13:59:38.529654 1832 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Dec 13 13:59:38.612501 kubelet[1832]: I1213 13:59:38.612459 1832 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Dec 13 13:59:38.634958 kubelet[1832]: E1213 13:59:38.634928 1832 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 13 13:59:38.735927 kubelet[1832]: E1213 13:59:38.735883 1832 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 13 13:59:38.768798 kubelet[1832]: E1213 13:59:38.768772 1832 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:59:38.837274 kubelet[1832]: E1213 13:59:38.837188 1832 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 13 13:59:38.937791 kubelet[1832]: E1213 13:59:38.937762 1832 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 13 13:59:39.725218 kubelet[1832]: I1213 13:59:39.725183 1832 apiserver.go:52] "Watching apiserver" Dec 13 13:59:39.737865 kubelet[1832]: I1213 13:59:39.737840 1832 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Dec 13 13:59:41.104425 systemd[1]: Reloading. Dec 13 13:59:41.143462 /usr/lib/systemd/system-generators/torcx-generator[2126]: time="2024-12-13T13:59:41Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]" Dec 13 13:59:41.147666 /usr/lib/systemd/system-generators/torcx-generator[2126]: time="2024-12-13T13:59:41Z" level=info msg="torcx already run" Dec 13 13:59:41.208886 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Dec 13 13:59:41.208906 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 13 13:59:41.224349 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 13:59:41.301646 systemd[1]: Stopping kubelet.service... Dec 13 13:59:41.321696 systemd[1]: kubelet.service: Deactivated successfully. Dec 13 13:59:41.321968 systemd[1]: Stopped kubelet.service. Dec 13 13:59:41.323641 systemd[1]: Starting kubelet.service... Dec 13 13:59:41.408597 systemd[1]: Started kubelet.service. Dec 13 13:59:41.451753 kubelet[2178]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 13:59:41.452092 kubelet[2178]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
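
"Client rotation is on" above, and the certificate_store.go line just below, refer to the rotated client credential the kubelet keeps at /var/lib/kubelet/pki/kubelet-client-current.pem, a single PEM file holding both certificate and key. A minimal sketch that loads and inspects it (run as root on the node; purely illustrative):

```go
// Load the combined kubelet client cert/key PEM and print its subject and
// expiry. Passing the same file for both arguments works because
// tls.LoadX509KeyPair scans each file for the block type it needs.
package main

import (
	"crypto/tls"
	"crypto/x509"
	"fmt"
)

func main() {
	const pem = "/var/lib/kubelet/pki/kubelet-client-current.pem"
	pair, err := tls.LoadX509KeyPair(pem, pem)
	if err != nil {
		fmt.Println("cannot load kubelet client pair:", err)
		return
	}
	cert, err := x509.ParseCertificate(pair.Certificate[0])
	if err != nil {
		fmt.Println("cannot parse certificate:", err)
		return
	}
	fmt.Printf("subject=%s notAfter=%s\n", cert.Subject, cert.NotAfter)
}
```
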
Dec 13 13:59:41.452139 kubelet[2178]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 13:59:41.452273 kubelet[2178]: I1213 13:59:41.452238 2178 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 13 13:59:41.457140 kubelet[2178]: I1213 13:59:41.457107 2178 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Dec 13 13:59:41.457271 kubelet[2178]: I1213 13:59:41.457259 2178 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 13 13:59:41.457540 kubelet[2178]: I1213 13:59:41.457519 2178 server.go:919] "Client rotation is on, will bootstrap in background" Dec 13 13:59:41.459118 kubelet[2178]: I1213 13:59:41.459098 2178 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Dec 13 13:59:41.460941 kubelet[2178]: I1213 13:59:41.460904 2178 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 13:59:41.467840 kubelet[2178]: I1213 13:59:41.467798 2178 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Dec 13 13:59:41.468212 kubelet[2178]: I1213 13:59:41.468186 2178 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 13 13:59:41.468436 kubelet[2178]: I1213 13:59:41.468379 2178 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Dec 13 13:59:41.468436 kubelet[2178]: I1213 13:59:41.468408 2178 topology_manager.go:138] "Creating topology manager with none policy" Dec 13 13:59:41.468436 kubelet[2178]: I1213 13:59:41.468418 2178 container_manager_linux.go:301] "Creating device plugin manager" Dec 13 13:59:41.468592 kubelet[2178]: I1213 13:59:41.468449 2178 state_mem.go:36] "Initialized new in-memory state store" Dec 13 13:59:41.468592 kubelet[2178]: I1213 13:59:41.468531 2178 kubelet.go:396] "Attempting to sync node 
with API server" Dec 13 13:59:41.468592 kubelet[2178]: I1213 13:59:41.468544 2178 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 13 13:59:41.468592 kubelet[2178]: I1213 13:59:41.468562 2178 kubelet.go:312] "Adding apiserver pod source" Dec 13 13:59:41.468592 kubelet[2178]: I1213 13:59:41.468576 2178 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 13 13:59:41.469243 kubelet[2178]: I1213 13:59:41.469222 2178 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Dec 13 13:59:41.469592 kubelet[2178]: I1213 13:59:41.469577 2178 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 13 13:59:41.470078 kubelet[2178]: I1213 13:59:41.470061 2178 server.go:1256] "Started kubelet" Dec 13 13:59:41.470338 kubelet[2178]: I1213 13:59:41.470303 2178 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Dec 13 13:59:41.470546 kubelet[2178]: I1213 13:59:41.470528 2178 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 13 13:59:41.470890 kubelet[2178]: I1213 13:59:41.470869 2178 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 13 13:59:41.471102 kubelet[2178]: I1213 13:59:41.471069 2178 server.go:461] "Adding debug handlers to kubelet server" Dec 13 13:59:41.472915 kubelet[2178]: I1213 13:59:41.472876 2178 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 13 13:59:41.475463 kubelet[2178]: E1213 13:59:41.475444 2178 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 13 13:59:41.478122 kubelet[2178]: E1213 13:59:41.478096 2178 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 13 13:59:41.478262 kubelet[2178]: I1213 13:59:41.478247 2178 volume_manager.go:291] "Starting Kubelet Volume Manager" Dec 13 13:59:41.478433 kubelet[2178]: I1213 13:59:41.478417 2178 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Dec 13 13:59:41.478624 kubelet[2178]: I1213 13:59:41.478610 2178 reconciler_new.go:29] "Reconciler: start to sync state" Dec 13 13:59:41.492645 kubelet[2178]: I1213 13:59:41.492606 2178 factory.go:221] Registration of the systemd container factory successfully Dec 13 13:59:41.492869 kubelet[2178]: I1213 13:59:41.492837 2178 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 13 13:59:41.501449 sudo[2198]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Dec 13 13:59:41.501673 sudo[2198]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0) Dec 13 13:59:41.502263 kubelet[2178]: I1213 13:59:41.502238 2178 factory.go:221] Registration of the containerd container factory successfully Dec 13 13:59:41.509538 kubelet[2178]: I1213 13:59:41.508512 2178 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 13 13:59:41.512012 kubelet[2178]: I1213 13:59:41.511752 2178 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Dec 13 13:59:41.512012 kubelet[2178]: I1213 13:59:41.511781 2178 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 13 13:59:41.512012 kubelet[2178]: I1213 13:59:41.511894 2178 kubelet.go:2329] "Starting kubelet main sync loop" Dec 13 13:59:41.512012 kubelet[2178]: E1213 13:59:41.511951 2178 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 13 13:59:41.563491 kubelet[2178]: I1213 13:59:41.563459 2178 cpu_manager.go:214] "Starting CPU manager" policy="none" Dec 13 13:59:41.563491 kubelet[2178]: I1213 13:59:41.563484 2178 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Dec 13 13:59:41.563491 kubelet[2178]: I1213 13:59:41.563501 2178 state_mem.go:36] "Initialized new in-memory state store" Dec 13 13:59:41.563673 kubelet[2178]: I1213 13:59:41.563646 2178 state_mem.go:88] "Updated default CPUSet" cpuSet="" Dec 13 13:59:41.563673 kubelet[2178]: I1213 13:59:41.563665 2178 state_mem.go:96] "Updated CPUSet assignments" assignments={} Dec 13 13:59:41.563673 kubelet[2178]: I1213 13:59:41.563672 2178 policy_none.go:49] "None policy: Start" Dec 13 13:59:41.564180 kubelet[2178]: I1213 13:59:41.564158 2178 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 13 13:59:41.564180 kubelet[2178]: I1213 13:59:41.564182 2178 state_mem.go:35] "Initializing new in-memory state store" Dec 13 13:59:41.564354 kubelet[2178]: I1213 13:59:41.564336 2178 state_mem.go:75] "Updated machine memory state" Dec 13 13:59:41.565453 kubelet[2178]: I1213 13:59:41.565432 2178 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 13 13:59:41.565684 kubelet[2178]: I1213 13:59:41.565665 2178 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 13 13:59:41.582923 kubelet[2178]: I1213 13:59:41.582888 2178 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Dec 13 13:59:41.588316 kubelet[2178]: I1213 13:59:41.588283 2178 kubelet_node_status.go:112] "Node was previously registered" node="localhost" Dec 13 13:59:41.588414 kubelet[2178]: I1213 13:59:41.588396 2178 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Dec 13 13:59:41.612999 kubelet[2178]: I1213 13:59:41.612959 2178 topology_manager.go:215] "Topology Admit Handler" podUID="d6435a06bb2fe4b9715faa238f50149e" podNamespace="kube-system" podName="kube-apiserver-localhost" Dec 13 13:59:41.613090 kubelet[2178]: I1213 13:59:41.613052 2178 topology_manager.go:215] "Topology Admit Handler" podUID="4f8e0d694c07e04969646aa3c152c34a" podNamespace="kube-system" podName="kube-controller-manager-localhost" Dec 13 13:59:41.613116 kubelet[2178]: I1213 13:59:41.613103 2178 topology_manager.go:215] "Topology Admit Handler" podUID="c4144e8f85b2123a6afada0c1705bbba" podNamespace="kube-system" podName="kube-scheduler-localhost" Dec 13 13:59:41.680396 kubelet[2178]: I1213 13:59:41.680273 2178 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d6435a06bb2fe4b9715faa238f50149e-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"d6435a06bb2fe4b9715faa238f50149e\") " pod="kube-system/kube-apiserver-localhost" Dec 13 13:59:41.680507 kubelet[2178]: I1213 13:59:41.680365 2178 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 13:59:41.680507 kubelet[2178]: I1213 13:59:41.680484 2178 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d6435a06bb2fe4b9715faa238f50149e-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"d6435a06bb2fe4b9715faa238f50149e\") " pod="kube-system/kube-apiserver-localhost" Dec 13 13:59:41.680562 kubelet[2178]: I1213 13:59:41.680514 2178 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d6435a06bb2fe4b9715faa238f50149e-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"d6435a06bb2fe4b9715faa238f50149e\") " pod="kube-system/kube-apiserver-localhost" Dec 13 13:59:41.680587 kubelet[2178]: I1213 13:59:41.680535 2178 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 13:59:41.680587 kubelet[2178]: I1213 13:59:41.680583 2178 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 13:59:41.680635 kubelet[2178]: I1213 13:59:41.680601 2178 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 13:59:41.680675 kubelet[2178]: I1213 13:59:41.680657 2178 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 13:59:41.680735 kubelet[2178]: I1213 13:59:41.680720 2178 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c4144e8f85b2123a6afada0c1705bbba-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"c4144e8f85b2123a6afada0c1705bbba\") " pod="kube-system/kube-scheduler-localhost" Dec 13 13:59:41.923939 kubelet[2178]: E1213 13:59:41.923900 2178 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:59:41.924519 kubelet[2178]: E1213 13:59:41.924481 2178 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:59:41.925802 kubelet[2178]: E1213 13:59:41.925776 2178 
dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:59:41.967089 sudo[2198]: pam_unix(sudo:session): session closed for user root Dec 13 13:59:42.469465 kubelet[2178]: I1213 13:59:42.469423 2178 apiserver.go:52] "Watching apiserver" Dec 13 13:59:42.479184 kubelet[2178]: I1213 13:59:42.479161 2178 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Dec 13 13:59:42.541712 kubelet[2178]: E1213 13:59:42.541676 2178 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:59:42.544920 kubelet[2178]: E1213 13:59:42.544888 2178 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Dec 13 13:59:42.545497 kubelet[2178]: E1213 13:59:42.545481 2178 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:59:42.545586 kubelet[2178]: E1213 13:59:42.545572 2178 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Dec 13 13:59:42.545815 kubelet[2178]: E1213 13:59:42.545800 2178 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:59:42.575720 kubelet[2178]: I1213 13:59:42.575680 2178 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.5756284520000001 podStartE2EDuration="1.575628452s" podCreationTimestamp="2024-12-13 13:59:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 13:59:42.561993166 +0000 UTC m=+1.149004536" watchObservedRunningTime="2024-12-13 13:59:42.575628452 +0000 UTC m=+1.162639822" Dec 13 13:59:42.584800 kubelet[2178]: I1213 13:59:42.584758 2178 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.584717161 podStartE2EDuration="1.584717161s" podCreationTimestamp="2024-12-13 13:59:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 13:59:42.575908234 +0000 UTC m=+1.162919604" watchObservedRunningTime="2024-12-13 13:59:42.584717161 +0000 UTC m=+1.171728531" Dec 13 13:59:42.595004 kubelet[2178]: I1213 13:59:42.594862 2178 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.59483203 podStartE2EDuration="1.59483203s" podCreationTimestamp="2024-12-13 13:59:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 13:59:42.585093233 +0000 UTC m=+1.172104563" watchObservedRunningTime="2024-12-13 13:59:42.59483203 +0000 UTC m=+1.181843400" Dec 13 13:59:43.541481 kubelet[2178]: E1213 13:59:43.541456 2178 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied 
nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:59:43.542332 kubelet[2178]: E1213 13:59:43.542267 2178 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:59:43.982304 sudo[1447]: pam_unix(sudo:session): session closed for user root Dec 13 13:59:43.983872 sshd[1441]: pam_unix(sshd:session): session closed for user core Dec 13 13:59:43.986293 systemd[1]: sshd@4-10.0.0.38:22-10.0.0.1:44302.service: Deactivated successfully. Dec 13 13:59:43.987416 systemd[1]: session-5.scope: Deactivated successfully. Dec 13 13:59:43.987855 systemd-logind[1308]: Session 5 logged out. Waiting for processes to exit. Dec 13 13:59:43.988572 systemd-logind[1308]: Removed session 5. Dec 13 13:59:44.542719 kubelet[2178]: E1213 13:59:44.542690 2178 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:59:50.722879 kubelet[2178]: E1213 13:59:50.722837 2178 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:59:51.516221 kubelet[2178]: E1213 13:59:51.516192 2178 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:59:51.554424 kubelet[2178]: E1213 13:59:51.554105 2178 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:59:51.555084 kubelet[2178]: E1213 13:59:51.555065 2178 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:59:51.998909 update_engine[1311]: I1213 13:59:51.998600 1311 update_attempter.cc:509] Updating boot flags... Dec 13 13:59:53.230121 kubelet[2178]: E1213 13:59:53.230036 2178 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:59:55.174480 kubelet[2178]: I1213 13:59:55.174437 2178 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Dec 13 13:59:55.175329 env[1323]: time="2024-12-13T13:59:55.175287540Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
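
The runtime-config update above hands the node's pod CIDR (192.168.0.0/24) to the CRI runtime; every pod sandbox on this node draws its address from that range. A minimal sketch parsing the logged value:

```go
// Parse the pod CIDR the kubelet pushed through the CRI runtime config.
package main

import (
	"fmt"
	"net"
)

func main() {
	ip, ipnet, err := net.ParseCIDR("192.168.0.0/24")
	if err != nil {
		panic(err)
	}
	ones, bits := ipnet.Mask.Size()
	fmt.Printf("pod CIDR %s: network %s, %d host bits\n",
		ipnet, ip.Mask(ipnet.Mask), bits-ones)
}
```
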
Dec 13 13:59:55.175601 kubelet[2178]: I1213 13:59:55.175553 2178 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Dec 13 13:59:55.642532 kubelet[2178]: I1213 13:59:55.642456 2178 topology_manager.go:215] "Topology Admit Handler" podUID="48e01787-3252-4084-b35c-be72f664abf4" podNamespace="kube-system" podName="kube-proxy-57gsj" Dec 13 13:59:55.647544 kubelet[2178]: W1213 13:59:55.645609 2178 reflector.go:539] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:localhost" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object Dec 13 13:59:55.647544 kubelet[2178]: E1213 13:59:55.645639 2178 reflector.go:147] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:localhost" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object Dec 13 13:59:55.655600 kubelet[2178]: I1213 13:59:55.652947 2178 topology_manager.go:215] "Topology Admit Handler" podUID="cb4ae995-abc2-457d-a5ea-088f4d0ec161" podNamespace="kube-system" podName="cilium-4mdtn" Dec 13 13:59:55.655600 kubelet[2178]: W1213 13:59:55.655372 2178 reflector.go:539] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:localhost" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object Dec 13 13:59:55.655600 kubelet[2178]: E1213 13:59:55.655413 2178 reflector.go:147] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:localhost" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object Dec 13 13:59:55.687885 kubelet[2178]: I1213 13:59:55.687836 2178 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/cb4ae995-abc2-457d-a5ea-088f4d0ec161-clustermesh-secrets\") pod \"cilium-4mdtn\" (UID: \"cb4ae995-abc2-457d-a5ea-088f4d0ec161\") " pod="kube-system/cilium-4mdtn" Dec 13 13:59:55.688120 kubelet[2178]: I1213 13:59:55.688106 2178 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/cb4ae995-abc2-457d-a5ea-088f4d0ec161-bpf-maps\") pod \"cilium-4mdtn\" (UID: \"cb4ae995-abc2-457d-a5ea-088f4d0ec161\") " pod="kube-system/cilium-4mdtn" Dec 13 13:59:55.688219 kubelet[2178]: I1213 13:59:55.688209 2178 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/cb4ae995-abc2-457d-a5ea-088f4d0ec161-hostproc\") pod \"cilium-4mdtn\" (UID: \"cb4ae995-abc2-457d-a5ea-088f4d0ec161\") " pod="kube-system/cilium-4mdtn" Dec 13 13:59:55.688350 kubelet[2178]: I1213 13:59:55.688314 2178 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/48e01787-3252-4084-b35c-be72f664abf4-kube-proxy\") pod \"kube-proxy-57gsj\" (UID: \"48e01787-3252-4084-b35c-be72f664abf4\") " 
pod="kube-system/kube-proxy-57gsj" Dec 13 13:59:55.688399 kubelet[2178]: I1213 13:59:55.688358 2178 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/48e01787-3252-4084-b35c-be72f664abf4-xtables-lock\") pod \"kube-proxy-57gsj\" (UID: \"48e01787-3252-4084-b35c-be72f664abf4\") " pod="kube-system/kube-proxy-57gsj" Dec 13 13:59:55.688399 kubelet[2178]: I1213 13:59:55.688389 2178 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/cb4ae995-abc2-457d-a5ea-088f4d0ec161-cilium-run\") pod \"cilium-4mdtn\" (UID: \"cb4ae995-abc2-457d-a5ea-088f4d0ec161\") " pod="kube-system/cilium-4mdtn" Dec 13 13:59:55.688466 kubelet[2178]: I1213 13:59:55.688410 2178 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/cb4ae995-abc2-457d-a5ea-088f4d0ec161-xtables-lock\") pod \"cilium-4mdtn\" (UID: \"cb4ae995-abc2-457d-a5ea-088f4d0ec161\") " pod="kube-system/cilium-4mdtn" Dec 13 13:59:55.688466 kubelet[2178]: I1213 13:59:55.688430 2178 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cb4ae995-abc2-457d-a5ea-088f4d0ec161-lib-modules\") pod \"cilium-4mdtn\" (UID: \"cb4ae995-abc2-457d-a5ea-088f4d0ec161\") " pod="kube-system/cilium-4mdtn" Dec 13 13:59:55.688466 kubelet[2178]: I1213 13:59:55.688456 2178 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/cb4ae995-abc2-457d-a5ea-088f4d0ec161-etc-cni-netd\") pod \"cilium-4mdtn\" (UID: \"cb4ae995-abc2-457d-a5ea-088f4d0ec161\") " pod="kube-system/cilium-4mdtn" Dec 13 13:59:55.688550 kubelet[2178]: I1213 13:59:55.688477 2178 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/cb4ae995-abc2-457d-a5ea-088f4d0ec161-cni-path\") pod \"cilium-4mdtn\" (UID: \"cb4ae995-abc2-457d-a5ea-088f4d0ec161\") " pod="kube-system/cilium-4mdtn" Dec 13 13:59:55.688550 kubelet[2178]: I1213 13:59:55.688496 2178 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/cb4ae995-abc2-457d-a5ea-088f4d0ec161-host-proc-sys-kernel\") pod \"cilium-4mdtn\" (UID: \"cb4ae995-abc2-457d-a5ea-088f4d0ec161\") " pod="kube-system/cilium-4mdtn" Dec 13 13:59:55.688550 kubelet[2178]: I1213 13:59:55.688524 2178 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/cb4ae995-abc2-457d-a5ea-088f4d0ec161-hubble-tls\") pod \"cilium-4mdtn\" (UID: \"cb4ae995-abc2-457d-a5ea-088f4d0ec161\") " pod="kube-system/cilium-4mdtn" Dec 13 13:59:55.688550 kubelet[2178]: I1213 13:59:55.688545 2178 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gjbvd\" (UniqueName: \"kubernetes.io/projected/cb4ae995-abc2-457d-a5ea-088f4d0ec161-kube-api-access-gjbvd\") pod \"cilium-4mdtn\" (UID: \"cb4ae995-abc2-457d-a5ea-088f4d0ec161\") " pod="kube-system/cilium-4mdtn" Dec 13 13:59:55.688644 kubelet[2178]: I1213 13:59:55.688564 2178 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/48e01787-3252-4084-b35c-be72f664abf4-lib-modules\") pod \"kube-proxy-57gsj\" (UID: \"48e01787-3252-4084-b35c-be72f664abf4\") " pod="kube-system/kube-proxy-57gsj" Dec 13 13:59:55.688644 kubelet[2178]: I1213 13:59:55.688590 2178 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/cb4ae995-abc2-457d-a5ea-088f4d0ec161-cilium-cgroup\") pod \"cilium-4mdtn\" (UID: \"cb4ae995-abc2-457d-a5ea-088f4d0ec161\") " pod="kube-system/cilium-4mdtn" Dec 13 13:59:55.688644 kubelet[2178]: I1213 13:59:55.688613 2178 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/cb4ae995-abc2-457d-a5ea-088f4d0ec161-cilium-config-path\") pod \"cilium-4mdtn\" (UID: \"cb4ae995-abc2-457d-a5ea-088f4d0ec161\") " pod="kube-system/cilium-4mdtn" Dec 13 13:59:55.688644 kubelet[2178]: I1213 13:59:55.688635 2178 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/cb4ae995-abc2-457d-a5ea-088f4d0ec161-host-proc-sys-net\") pod \"cilium-4mdtn\" (UID: \"cb4ae995-abc2-457d-a5ea-088f4d0ec161\") " pod="kube-system/cilium-4mdtn" Dec 13 13:59:55.688732 kubelet[2178]: I1213 13:59:55.688655 2178 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mssd2\" (UniqueName: \"kubernetes.io/projected/48e01787-3252-4084-b35c-be72f664abf4-kube-api-access-mssd2\") pod \"kube-proxy-57gsj\" (UID: \"48e01787-3252-4084-b35c-be72f664abf4\") " pod="kube-system/kube-proxy-57gsj" Dec 13 13:59:56.141011 kubelet[2178]: I1213 13:59:56.140957 2178 topology_manager.go:215] "Topology Admit Handler" podUID="aa7be849-98f2-4a76-b6fb-fbbe6335789a" podNamespace="kube-system" podName="cilium-operator-5cc964979-rjzgh" Dec 13 13:59:56.192100 kubelet[2178]: I1213 13:59:56.192029 2178 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zwtnd\" (UniqueName: \"kubernetes.io/projected/aa7be849-98f2-4a76-b6fb-fbbe6335789a-kube-api-access-zwtnd\") pod \"cilium-operator-5cc964979-rjzgh\" (UID: \"aa7be849-98f2-4a76-b6fb-fbbe6335789a\") " pod="kube-system/cilium-operator-5cc964979-rjzgh" Dec 13 13:59:56.192100 kubelet[2178]: I1213 13:59:56.192090 2178 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/aa7be849-98f2-4a76-b6fb-fbbe6335789a-cilium-config-path\") pod \"cilium-operator-5cc964979-rjzgh\" (UID: \"aa7be849-98f2-4a76-b6fb-fbbe6335789a\") " pod="kube-system/cilium-operator-5cc964979-rjzgh" Dec 13 13:59:56.743755 kubelet[2178]: E1213 13:59:56.743730 2178 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:59:56.744790 env[1323]: time="2024-12-13T13:59:56.744739508Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-rjzgh,Uid:aa7be849-98f2-4a76-b6fb-fbbe6335789a,Namespace:kube-system,Attempt:0,}" Dec 13 13:59:56.803142 env[1323]: time="2024-12-13T13:59:56.802997331Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 13:59:56.803142 env[1323]: time="2024-12-13T13:59:56.803100737Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 13:59:56.803360 env[1323]: time="2024-12-13T13:59:56.803323318Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:59:56.803629 env[1323]: time="2024-12-13T13:59:56.803592000Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/758bcbd66a71dd4efcfae7b0049b49ce928e22690da7ee5929cb452044121391 pid=2286 runtime=io.containerd.runc.v2 Dec 13 13:59:56.849604 kubelet[2178]: E1213 13:59:56.849569 2178 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:59:56.851589 env[1323]: time="2024-12-13T13:59:56.851551679Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-57gsj,Uid:48e01787-3252-4084-b35c-be72f664abf4,Namespace:kube-system,Attempt:0,}" Dec 13 13:59:56.855835 kubelet[2178]: E1213 13:59:56.855773 2178 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:59:56.857757 env[1323]: time="2024-12-13T13:59:56.857709028Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-4mdtn,Uid:cb4ae995-abc2-457d-a5ea-088f4d0ec161,Namespace:kube-system,Attempt:0,}" Dec 13 13:59:56.863211 env[1323]: time="2024-12-13T13:59:56.863163338Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-rjzgh,Uid:aa7be849-98f2-4a76-b6fb-fbbe6335789a,Namespace:kube-system,Attempt:0,} returns sandbox id \"758bcbd66a71dd4efcfae7b0049b49ce928e22690da7ee5929cb452044121391\"" Dec 13 13:59:56.863941 kubelet[2178]: E1213 13:59:56.863919 2178 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:59:56.865793 env[1323]: time="2024-12-13T13:59:56.865765716Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Dec 13 13:59:56.876144 env[1323]: time="2024-12-13T13:59:56.876034647Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 13:59:56.876144 env[1323]: time="2024-12-13T13:59:56.876084429Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 13:59:56.876144 env[1323]: time="2024-12-13T13:59:56.876095714Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:59:56.876288 env[1323]: time="2024-12-13T13:59:56.876221491Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/ed00764b4c41029560fb41642d783ca769d8356159179f64c9d25c129985cad7 pid=2326 runtime=io.containerd.runc.v2 Dec 13 13:59:56.879948 env[1323]: time="2024-12-13T13:59:56.879866902Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 13:59:56.879948 env[1323]: time="2024-12-13T13:59:56.879914644Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 13:59:56.879948 env[1323]: time="2024-12-13T13:59:56.879925409Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:59:56.880211 env[1323]: time="2024-12-13T13:59:56.880171840Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/02cf4630eb0a3e7359f7fc0c0f2108f51e3983843eff6e3ab788b7589ebd6719 pid=2342 runtime=io.containerd.runc.v2 Dec 13 13:59:56.930763 env[1323]: time="2024-12-13T13:59:56.930720732Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-57gsj,Uid:48e01787-3252-4084-b35c-be72f664abf4,Namespace:kube-system,Attempt:0,} returns sandbox id \"ed00764b4c41029560fb41642d783ca769d8356159179f64c9d25c129985cad7\"" Dec 13 13:59:56.931731 kubelet[2178]: E1213 13:59:56.931621 2178 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:59:56.932811 env[1323]: time="2024-12-13T13:59:56.932630757Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-4mdtn,Uid:cb4ae995-abc2-457d-a5ea-088f4d0ec161,Namespace:kube-system,Attempt:0,} returns sandbox id \"02cf4630eb0a3e7359f7fc0c0f2108f51e3983843eff6e3ab788b7589ebd6719\"" Dec 13 13:59:56.934250 kubelet[2178]: E1213 13:59:56.933722 2178 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:59:56.937067 env[1323]: time="2024-12-13T13:59:56.937033951Z" level=info msg="CreateContainer within sandbox \"ed00764b4c41029560fb41642d783ca769d8356159179f64c9d25c129985cad7\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Dec 13 13:59:56.956715 env[1323]: time="2024-12-13T13:59:56.956672725Z" level=info msg="CreateContainer within sandbox \"ed00764b4c41029560fb41642d783ca769d8356159179f64c9d25c129985cad7\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"be03c29f158a838419dada06edd68ede90e6a24d38ea53463e927263c60d01b1\"" Dec 13 13:59:56.957404 env[1323]: time="2024-12-13T13:59:56.957367720Z" level=info msg="StartContainer for \"be03c29f158a838419dada06edd68ede90e6a24d38ea53463e927263c60d01b1\"" Dec 13 13:59:57.009193 env[1323]: time="2024-12-13T13:59:57.009142586Z" level=info msg="StartContainer for \"be03c29f158a838419dada06edd68ede90e6a24d38ea53463e927263c60d01b1\" returns successfully" Dec 13 13:59:57.566940 kubelet[2178]: E1213 13:59:57.566890 2178 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:59:57.577238 kubelet[2178]: I1213 13:59:57.577190 2178 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-57gsj" podStartSLOduration=2.577153447 podStartE2EDuration="2.577153447s" podCreationTimestamp="2024-12-13 13:59:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 13:59:57.576983293 +0000 UTC m=+16.163994663" watchObservedRunningTime="2024-12-13 
13:59:57.577153447 +0000 UTC m=+16.164164817" Dec 13 14:00:00.978291 env[1323]: time="2024-12-13T14:00:00.978248355Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:00:00.980123 env[1323]: time="2024-12-13T14:00:00.980074285Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:00:00.981356 env[1323]: time="2024-12-13T14:00:00.981323157Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:00:00.981908 env[1323]: time="2024-12-13T14:00:00.981862201Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Dec 13 14:00:00.983519 env[1323]: time="2024-12-13T14:00:00.983485374Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Dec 13 14:00:00.985512 env[1323]: time="2024-12-13T14:00:00.985427668Z" level=info msg="CreateContainer within sandbox \"758bcbd66a71dd4efcfae7b0049b49ce928e22690da7ee5929cb452044121391\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Dec 13 14:00:01.005069 env[1323]: time="2024-12-13T14:00:01.005034350Z" level=info msg="CreateContainer within sandbox \"758bcbd66a71dd4efcfae7b0049b49ce928e22690da7ee5929cb452044121391\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"c032c02f633d2088de5f6fd124b991d3978516dd31c113bf37817bfefcccb6a9\"" Dec 13 14:00:01.006527 env[1323]: time="2024-12-13T14:00:01.005591872Z" level=info msg="StartContainer for \"c032c02f633d2088de5f6fd124b991d3978516dd31c113bf37817bfefcccb6a9\"" Dec 13 14:00:01.069006 env[1323]: time="2024-12-13T14:00:01.068960853Z" level=info msg="StartContainer for \"c032c02f633d2088de5f6fd124b991d3978516dd31c113bf37817bfefcccb6a9\" returns successfully" Dec 13 14:00:01.602941 kubelet[2178]: E1213 14:00:01.602901 2178 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:00:01.631091 kubelet[2178]: I1213 14:00:01.630998 2178 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-5cc964979-rjzgh" podStartSLOduration=1.5134126270000001 podStartE2EDuration="5.63095931s" podCreationTimestamp="2024-12-13 13:59:56 +0000 UTC" firstStartedPulling="2024-12-13 13:59:56.865097974 +0000 UTC m=+15.452109344" lastFinishedPulling="2024-12-13 14:00:00.982644657 +0000 UTC m=+19.569656027" observedRunningTime="2024-12-13 14:00:01.630577532 +0000 UTC m=+20.217588902" watchObservedRunningTime="2024-12-13 14:00:01.63095931 +0000 UTC m=+20.217970680" Dec 13 14:00:02.606056 kubelet[2178]: E1213 14:00:02.606004 2178 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:00:10.038723 
systemd[1]: Started sshd@5-10.0.0.38:22-10.0.0.1:39840.service. Dec 13 14:00:10.102187 sshd[2594]: Accepted publickey for core from 10.0.0.1 port 39840 ssh2: RSA SHA256:/HJyHm5Z3TKV0xVrRefgtheJNUHxRnoHBht1EzpqsE0 Dec 13 14:00:10.103530 sshd[2594]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:00:10.106872 systemd-logind[1308]: New session 6 of user core. Dec 13 14:00:10.107668 systemd[1]: Started session-6.scope. Dec 13 14:00:10.242665 sshd[2594]: pam_unix(sshd:session): session closed for user core Dec 13 14:00:10.245678 systemd[1]: sshd@5-10.0.0.38:22-10.0.0.1:39840.service: Deactivated successfully. Dec 13 14:00:10.246553 systemd-logind[1308]: Session 6 logged out. Waiting for processes to exit. Dec 13 14:00:10.246612 systemd[1]: session-6.scope: Deactivated successfully. Dec 13 14:00:10.247279 systemd-logind[1308]: Removed session 6. Dec 13 14:00:10.938142 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2882057813.mount: Deactivated successfully. Dec 13 14:00:15.246333 systemd[1]: Started sshd@6-10.0.0.38:22-10.0.0.1:49808.service. Dec 13 14:00:15.284321 sshd[2609]: Accepted publickey for core from 10.0.0.1 port 49808 ssh2: RSA SHA256:/HJyHm5Z3TKV0xVrRefgtheJNUHxRnoHBht1EzpqsE0 Dec 13 14:00:15.286091 sshd[2609]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:00:15.289861 systemd-logind[1308]: New session 7 of user core. Dec 13 14:00:15.290585 systemd[1]: Started session-7.scope. Dec 13 14:00:15.412216 sshd[2609]: pam_unix(sshd:session): session closed for user core Dec 13 14:00:15.415162 systemd[1]: sshd@6-10.0.0.38:22-10.0.0.1:49808.service: Deactivated successfully. Dec 13 14:00:15.416054 systemd-logind[1308]: Session 7 logged out. Waiting for processes to exit. Dec 13 14:00:15.416113 systemd[1]: session-7.scope: Deactivated successfully. Dec 13 14:00:15.416861 systemd-logind[1308]: Removed session 7. Dec 13 14:00:20.418677 systemd[1]: Started sshd@7-10.0.0.38:22-10.0.0.1:49810.service. Dec 13 14:00:20.458317 sshd[2624]: Accepted publickey for core from 10.0.0.1 port 49810 ssh2: RSA SHA256:/HJyHm5Z3TKV0xVrRefgtheJNUHxRnoHBht1EzpqsE0 Dec 13 14:00:20.459557 sshd[2624]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:00:20.462893 systemd-logind[1308]: New session 8 of user core. Dec 13 14:00:20.463694 systemd[1]: Started session-8.scope. Dec 13 14:00:20.573060 sshd[2624]: pam_unix(sshd:session): session closed for user core Dec 13 14:00:20.575594 systemd[1]: sshd@7-10.0.0.38:22-10.0.0.1:49810.service: Deactivated successfully. Dec 13 14:00:20.576508 systemd-logind[1308]: Session 8 logged out. Waiting for processes to exit. Dec 13 14:00:20.576528 systemd[1]: session-8.scope: Deactivated successfully. Dec 13 14:00:20.577141 systemd-logind[1308]: Removed session 8. Dec 13 14:00:25.577234 systemd[1]: Started sshd@8-10.0.0.38:22-10.0.0.1:56206.service. Dec 13 14:00:25.618070 sshd[2639]: Accepted publickey for core from 10.0.0.1 port 56206 ssh2: RSA SHA256:/HJyHm5Z3TKV0xVrRefgtheJNUHxRnoHBht1EzpqsE0 Dec 13 14:00:25.619735 sshd[2639]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:00:25.624054 systemd-logind[1308]: New session 9 of user core. Dec 13 14:00:25.624866 systemd[1]: Started session-9.scope. Dec 13 14:00:25.748624 sshd[2639]: pam_unix(sshd:session): session closed for user core Dec 13 14:00:25.751416 systemd[1]: sshd@8-10.0.0.38:22-10.0.0.1:56206.service: Deactivated successfully. 
Dec 13 14:00:25.752279 systemd-logind[1308]: Session 9 logged out. Waiting for processes to exit. Dec 13 14:00:25.752334 systemd[1]: session-9.scope: Deactivated successfully. Dec 13 14:00:25.753019 systemd-logind[1308]: Removed session 9. Dec 13 14:00:26.569685 env[1323]: time="2024-12-13T14:00:26.569634954Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:00:26.571370 env[1323]: time="2024-12-13T14:00:26.571327523Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:00:26.572946 env[1323]: time="2024-12-13T14:00:26.572916195Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:00:26.573686 env[1323]: time="2024-12-13T14:00:26.573658842Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Dec 13 14:00:26.576820 env[1323]: time="2024-12-13T14:00:26.576168351Z" level=info msg="CreateContainer within sandbox \"02cf4630eb0a3e7359f7fc0c0f2108f51e3983843eff6e3ab788b7589ebd6719\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Dec 13 14:00:26.585472 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2959935984.mount: Deactivated successfully. Dec 13 14:00:26.593306 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3465272844.mount: Deactivated successfully. 
Dec 13 14:00:26.595774 env[1323]: time="2024-12-13T14:00:26.595731135Z" level=info msg="CreateContainer within sandbox \"02cf4630eb0a3e7359f7fc0c0f2108f51e3983843eff6e3ab788b7589ebd6719\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"344124a90ad117cc446b20a61b2f4ef28d85816d2538bde6c80d4ca0e9ec3508\"" Dec 13 14:00:26.596289 env[1323]: time="2024-12-13T14:00:26.596260946Z" level=info msg="StartContainer for \"344124a90ad117cc446b20a61b2f4ef28d85816d2538bde6c80d4ca0e9ec3508\"" Dec 13 14:00:26.700446 env[1323]: time="2024-12-13T14:00:26.700393308Z" level=info msg="StartContainer for \"344124a90ad117cc446b20a61b2f4ef28d85816d2538bde6c80d4ca0e9ec3508\" returns successfully" Dec 13 14:00:26.840568 env[1323]: time="2024-12-13T14:00:26.840455893Z" level=info msg="shim disconnected" id=344124a90ad117cc446b20a61b2f4ef28d85816d2538bde6c80d4ca0e9ec3508 Dec 13 14:00:26.840568 env[1323]: time="2024-12-13T14:00:26.840505741Z" level=warning msg="cleaning up after shim disconnected" id=344124a90ad117cc446b20a61b2f4ef28d85816d2538bde6c80d4ca0e9ec3508 namespace=k8s.io Dec 13 14:00:26.840568 env[1323]: time="2024-12-13T14:00:26.840514543Z" level=info msg="cleaning up dead shim" Dec 13 14:00:26.847547 env[1323]: time="2024-12-13T14:00:26.847504458Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:00:26Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2704 runtime=io.containerd.runc.v2\n" Dec 13 14:00:27.583390 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-344124a90ad117cc446b20a61b2f4ef28d85816d2538bde6c80d4ca0e9ec3508-rootfs.mount: Deactivated successfully. Dec 13 14:00:27.662146 kubelet[2178]: E1213 14:00:27.660439 2178 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:00:27.663551 env[1323]: time="2024-12-13T14:00:27.663493467Z" level=info msg="CreateContainer within sandbox \"02cf4630eb0a3e7359f7fc0c0f2108f51e3983843eff6e3ab788b7589ebd6719\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Dec 13 14:00:27.678161 env[1323]: time="2024-12-13T14:00:27.678105162Z" level=info msg="CreateContainer within sandbox \"02cf4630eb0a3e7359f7fc0c0f2108f51e3983843eff6e3ab788b7589ebd6719\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"badec54f5a3d74f655078865a62834580c3011d57db26762d2c86f1488f56757\"" Dec 13 14:00:27.679068 env[1323]: time="2024-12-13T14:00:27.678618528Z" level=info msg="StartContainer for \"badec54f5a3d74f655078865a62834580c3011d57db26762d2c86f1488f56757\"" Dec 13 14:00:27.734882 env[1323]: time="2024-12-13T14:00:27.734821649Z" level=info msg="StartContainer for \"badec54f5a3d74f655078865a62834580c3011d57db26762d2c86f1488f56757\" returns successfully" Dec 13 14:00:27.761503 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 13 14:00:27.761752 systemd[1]: Stopped systemd-sysctl.service. Dec 13 14:00:27.761913 systemd[1]: Stopping systemd-sysctl.service... Dec 13 14:00:27.763462 systemd[1]: Starting systemd-sysctl.service... Dec 13 14:00:27.771068 systemd[1]: Finished systemd-sysctl.service. 
Dec 13 14:00:27.786369 env[1323]: time="2024-12-13T14:00:27.786313699Z" level=info msg="shim disconnected" id=badec54f5a3d74f655078865a62834580c3011d57db26762d2c86f1488f56757 Dec 13 14:00:27.786369 env[1323]: time="2024-12-13T14:00:27.786356706Z" level=warning msg="cleaning up after shim disconnected" id=badec54f5a3d74f655078865a62834580c3011d57db26762d2c86f1488f56757 namespace=k8s.io Dec 13 14:00:27.786369 env[1323]: time="2024-12-13T14:00:27.786365708Z" level=info msg="cleaning up dead shim" Dec 13 14:00:27.793932 env[1323]: time="2024-12-13T14:00:27.793882651Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:00:27Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2771 runtime=io.containerd.runc.v2\n" Dec 13 14:00:28.583283 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-badec54f5a3d74f655078865a62834580c3011d57db26762d2c86f1488f56757-rootfs.mount: Deactivated successfully. Dec 13 14:00:28.663248 kubelet[2178]: E1213 14:00:28.663222 2178 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:00:28.666618 env[1323]: time="2024-12-13T14:00:28.666576596Z" level=info msg="CreateContainer within sandbox \"02cf4630eb0a3e7359f7fc0c0f2108f51e3983843eff6e3ab788b7589ebd6719\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Dec 13 14:00:28.682517 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2875625940.mount: Deactivated successfully. Dec 13 14:00:28.684026 env[1323]: time="2024-12-13T14:00:28.683985672Z" level=info msg="CreateContainer within sandbox \"02cf4630eb0a3e7359f7fc0c0f2108f51e3983843eff6e3ab788b7589ebd6719\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"139ecb9aa4bd9ab1a5cb4d93c35165920413f40659e8e47b1b881f54eafc6f22\"" Dec 13 14:00:28.684515 env[1323]: time="2024-12-13T14:00:28.684489356Z" level=info msg="StartContainer for \"139ecb9aa4bd9ab1a5cb4d93c35165920413f40659e8e47b1b881f54eafc6f22\"" Dec 13 14:00:28.741271 env[1323]: time="2024-12-13T14:00:28.741230649Z" level=info msg="StartContainer for \"139ecb9aa4bd9ab1a5cb4d93c35165920413f40659e8e47b1b881f54eafc6f22\" returns successfully" Dec 13 14:00:28.767125 env[1323]: time="2024-12-13T14:00:28.767075479Z" level=info msg="shim disconnected" id=139ecb9aa4bd9ab1a5cb4d93c35165920413f40659e8e47b1b881f54eafc6f22 Dec 13 14:00:28.767125 env[1323]: time="2024-12-13T14:00:28.767125207Z" level=warning msg="cleaning up after shim disconnected" id=139ecb9aa4bd9ab1a5cb4d93c35165920413f40659e8e47b1b881f54eafc6f22 namespace=k8s.io Dec 13 14:00:28.767348 env[1323]: time="2024-12-13T14:00:28.767136129Z" level=info msg="cleaning up dead shim" Dec 13 14:00:28.773879 env[1323]: time="2024-12-13T14:00:28.773843877Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:00:28Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2828 runtime=io.containerd.runc.v2\n" Dec 13 14:00:29.583302 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-139ecb9aa4bd9ab1a5cb4d93c35165920413f40659e8e47b1b881f54eafc6f22-rootfs.mount: Deactivated successfully. 
Dec 13 14:00:29.666579 kubelet[2178]: E1213 14:00:29.666549 2178 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:00:29.669989 env[1323]: time="2024-12-13T14:00:29.669882797Z" level=info msg="CreateContainer within sandbox \"02cf4630eb0a3e7359f7fc0c0f2108f51e3983843eff6e3ab788b7589ebd6719\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Dec 13 14:00:29.680737 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount26017753.mount: Deactivated successfully. Dec 13 14:00:29.692003 env[1323]: time="2024-12-13T14:00:29.691951665Z" level=info msg="CreateContainer within sandbox \"02cf4630eb0a3e7359f7fc0c0f2108f51e3983843eff6e3ab788b7589ebd6719\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"3f0e80ee60130a2dd470a861bc449379d3aa024d364804d051dbfb10f5340b4b\"" Dec 13 14:00:29.693742 env[1323]: time="2024-12-13T14:00:29.692507316Z" level=info msg="StartContainer for \"3f0e80ee60130a2dd470a861bc449379d3aa024d364804d051dbfb10f5340b4b\"" Dec 13 14:00:29.741029 env[1323]: time="2024-12-13T14:00:29.740624739Z" level=info msg="StartContainer for \"3f0e80ee60130a2dd470a861bc449379d3aa024d364804d051dbfb10f5340b4b\" returns successfully" Dec 13 14:00:29.760408 env[1323]: time="2024-12-13T14:00:29.760335744Z" level=info msg="shim disconnected" id=3f0e80ee60130a2dd470a861bc449379d3aa024d364804d051dbfb10f5340b4b Dec 13 14:00:29.760408 env[1323]: time="2024-12-13T14:00:29.760406796Z" level=warning msg="cleaning up after shim disconnected" id=3f0e80ee60130a2dd470a861bc449379d3aa024d364804d051dbfb10f5340b4b namespace=k8s.io Dec 13 14:00:29.760408 env[1323]: time="2024-12-13T14:00:29.760416837Z" level=info msg="cleaning up dead shim" Dec 13 14:00:29.767849 env[1323]: time="2024-12-13T14:00:29.767797477Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:00:29Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2883 runtime=io.containerd.runc.v2\n" Dec 13 14:00:30.672898 kubelet[2178]: E1213 14:00:30.672869 2178 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:00:30.677348 env[1323]: time="2024-12-13T14:00:30.677309300Z" level=info msg="CreateContainer within sandbox \"02cf4630eb0a3e7359f7fc0c0f2108f51e3983843eff6e3ab788b7589ebd6719\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Dec 13 14:00:30.688439 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3713963803.mount: Deactivated successfully. Dec 13 14:00:30.693112 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1654485543.mount: Deactivated successfully. 
Dec 13 14:00:30.694849 env[1323]: time="2024-12-13T14:00:30.694813743Z" level=info msg="CreateContainer within sandbox \"02cf4630eb0a3e7359f7fc0c0f2108f51e3983843eff6e3ab788b7589ebd6719\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"72ab615638e0ea89a742db4e32eb95716cea782fdacc4c8a894a0c28432ba7cd\"" Dec 13 14:00:30.696485 env[1323]: time="2024-12-13T14:00:30.696442364Z" level=info msg="StartContainer for \"72ab615638e0ea89a742db4e32eb95716cea782fdacc4c8a894a0c28432ba7cd\"" Dec 13 14:00:30.738543 env[1323]: time="2024-12-13T14:00:30.738483937Z" level=info msg="StartContainer for \"72ab615638e0ea89a742db4e32eb95716cea782fdacc4c8a894a0c28432ba7cd\" returns successfully" Dec 13 14:00:30.752761 systemd[1]: Started sshd@9-10.0.0.38:22-10.0.0.1:56210.service. Dec 13 14:00:30.799767 sshd[2938]: Accepted publickey for core from 10.0.0.1 port 56210 ssh2: RSA SHA256:/HJyHm5Z3TKV0xVrRefgtheJNUHxRnoHBht1EzpqsE0 Dec 13 14:00:30.802150 sshd[2938]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:00:30.812719 systemd-logind[1308]: New session 10 of user core. Dec 13 14:00:30.812926 systemd[1]: Started session-10.scope. Dec 13 14:00:30.889691 kubelet[2178]: I1213 14:00:30.888961 2178 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Dec 13 14:00:30.950341 sshd[2938]: pam_unix(sshd:session): session closed for user core Dec 13 14:00:30.953714 systemd[1]: Started sshd@10-10.0.0.38:22-10.0.0.1:56218.service. Dec 13 14:00:30.954970 systemd-logind[1308]: Session 10 logged out. Waiting for processes to exit. Dec 13 14:00:30.955118 systemd[1]: sshd@9-10.0.0.38:22-10.0.0.1:56210.service: Deactivated successfully. Dec 13 14:00:30.955868 systemd[1]: session-10.scope: Deactivated successfully. Dec 13 14:00:30.956264 systemd-logind[1308]: Removed session 10. Dec 13 14:00:30.990406 kernel: WARNING: Unprivileged eBPF is enabled, data leaks possible via Spectre v2 BHB attacks! Dec 13 14:00:30.991839 sshd[2981]: Accepted publickey for core from 10.0.0.1 port 56218 ssh2: RSA SHA256:/HJyHm5Z3TKV0xVrRefgtheJNUHxRnoHBht1EzpqsE0 Dec 13 14:00:30.993546 sshd[2981]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:00:30.997303 systemd-logind[1308]: New session 11 of user core. Dec 13 14:00:30.998230 systemd[1]: Started session-11.scope. Dec 13 14:00:31.163519 systemd[1]: Started sshd@11-10.0.0.38:22-10.0.0.1:56220.service. Dec 13 14:00:31.164946 sshd[2981]: pam_unix(sshd:session): session closed for user core Dec 13 14:00:31.180124 systemd[1]: sshd@10-10.0.0.38:22-10.0.0.1:56218.service: Deactivated successfully. Dec 13 14:00:31.181218 systemd[1]: session-11.scope: Deactivated successfully. Dec 13 14:00:31.182442 systemd-logind[1308]: Session 11 logged out. Waiting for processes to exit. Dec 13 14:00:31.194729 systemd-logind[1308]: Removed session 11. Dec 13 14:00:31.237749 sshd[3014]: Accepted publickey for core from 10.0.0.1 port 56220 ssh2: RSA SHA256:/HJyHm5Z3TKV0xVrRefgtheJNUHxRnoHBht1EzpqsE0 Dec 13 14:00:31.239441 sshd[3014]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:00:31.243286 systemd-logind[1308]: New session 12 of user core. Dec 13 14:00:31.244070 systemd[1]: Started session-12.scope. Dec 13 14:00:31.270405 kernel: WARNING: Unprivileged eBPF is enabled, data leaks possible via Spectre v2 BHB attacks! 
Dec 13 14:00:31.377612 sshd[3014]: pam_unix(sshd:session): session closed for user core Dec 13 14:00:31.380277 systemd[1]: sshd@11-10.0.0.38:22-10.0.0.1:56220.service: Deactivated successfully. Dec 13 14:00:31.381506 systemd[1]: session-12.scope: Deactivated successfully. Dec 13 14:00:31.382037 systemd-logind[1308]: Session 12 logged out. Waiting for processes to exit. Dec 13 14:00:31.382943 systemd-logind[1308]: Removed session 12. Dec 13 14:00:31.676839 kubelet[2178]: E1213 14:00:31.676802 2178 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:00:32.678712 kubelet[2178]: E1213 14:00:32.678676 2178 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:00:32.881586 systemd-networkd[1101]: cilium_host: Link UP Dec 13 14:00:32.881697 systemd-networkd[1101]: cilium_net: Link UP Dec 13 14:00:32.882410 systemd-networkd[1101]: cilium_net: Gained carrier Dec 13 14:00:32.885409 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready Dec 13 14:00:32.885520 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready Dec 13 14:00:32.883261 systemd-networkd[1101]: cilium_host: Gained carrier Dec 13 14:00:32.967168 systemd-networkd[1101]: cilium_vxlan: Link UP Dec 13 14:00:32.967175 systemd-networkd[1101]: cilium_vxlan: Gained carrier Dec 13 14:00:33.012523 systemd-networkd[1101]: cilium_net: Gained IPv6LL Dec 13 14:00:33.269400 kernel: NET: Registered PF_ALG protocol family Dec 13 14:00:33.680213 kubelet[2178]: E1213 14:00:33.680117 2178 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:00:33.871034 systemd-networkd[1101]: lxc_health: Link UP Dec 13 14:00:33.879251 systemd-networkd[1101]: lxc_health: Gained carrier Dec 13 14:00:33.879395 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Dec 13 14:00:33.900507 systemd-networkd[1101]: cilium_host: Gained IPv6LL Dec 13 14:00:34.540621 systemd-networkd[1101]: cilium_vxlan: Gained IPv6LL Dec 13 14:00:34.858472 kubelet[2178]: E1213 14:00:34.858174 2178 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:00:34.872796 kubelet[2178]: I1213 14:00:34.872746 2178 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-4mdtn" podStartSLOduration=10.233527596 podStartE2EDuration="39.872709307s" podCreationTimestamp="2024-12-13 13:59:55 +0000 UTC" firstStartedPulling="2024-12-13 13:59:56.935246222 +0000 UTC m=+15.522257552" lastFinishedPulling="2024-12-13 14:00:26.574427893 +0000 UTC m=+45.161439263" observedRunningTime="2024-12-13 14:00:31.691068502 +0000 UTC m=+50.278079872" watchObservedRunningTime="2024-12-13 14:00:34.872709307 +0000 UTC m=+53.459720677" Dec 13 14:00:35.684170 kubelet[2178]: E1213 14:00:35.684140 2178 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:00:35.884586 systemd-networkd[1101]: lxc_health: Gained IPv6LL Dec 13 14:00:36.381598 systemd[1]: Started sshd@12-10.0.0.38:22-10.0.0.1:38648.service. 
Dec 13 14:00:36.422496 sshd[3442]: Accepted publickey for core from 10.0.0.1 port 38648 ssh2: RSA SHA256:/HJyHm5Z3TKV0xVrRefgtheJNUHxRnoHBht1EzpqsE0 Dec 13 14:00:36.423946 sshd[3442]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:00:36.427428 systemd-logind[1308]: New session 13 of user core. Dec 13 14:00:36.428413 systemd[1]: Started session-13.scope. Dec 13 14:00:36.550012 sshd[3442]: pam_unix(sshd:session): session closed for user core Dec 13 14:00:36.552312 systemd[1]: sshd@12-10.0.0.38:22-10.0.0.1:38648.service: Deactivated successfully. Dec 13 14:00:36.553277 systemd-logind[1308]: Session 13 logged out. Waiting for processes to exit. Dec 13 14:00:36.553313 systemd[1]: session-13.scope: Deactivated successfully. Dec 13 14:00:36.554065 systemd-logind[1308]: Removed session 13. Dec 13 14:00:41.555214 systemd[1]: Started sshd@13-10.0.0.38:22-10.0.0.1:38650.service. Dec 13 14:00:41.591945 sshd[3463]: Accepted publickey for core from 10.0.0.1 port 38650 ssh2: RSA SHA256:/HJyHm5Z3TKV0xVrRefgtheJNUHxRnoHBht1EzpqsE0 Dec 13 14:00:41.593054 sshd[3463]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:00:41.596981 systemd[1]: Started session-14.scope. Dec 13 14:00:41.597189 systemd-logind[1308]: New session 14 of user core. Dec 13 14:00:41.714696 sshd[3463]: pam_unix(sshd:session): session closed for user core Dec 13 14:00:41.717012 systemd[1]: Started sshd@14-10.0.0.38:22-10.0.0.1:38662.service. Dec 13 14:00:41.719089 systemd[1]: sshd@13-10.0.0.38:22-10.0.0.1:38650.service: Deactivated successfully. Dec 13 14:00:41.719908 systemd-logind[1308]: Session 14 logged out. Waiting for processes to exit. Dec 13 14:00:41.719978 systemd[1]: session-14.scope: Deactivated successfully. Dec 13 14:00:41.720703 systemd-logind[1308]: Removed session 14. Dec 13 14:00:41.752577 sshd[3475]: Accepted publickey for core from 10.0.0.1 port 38662 ssh2: RSA SHA256:/HJyHm5Z3TKV0xVrRefgtheJNUHxRnoHBht1EzpqsE0 Dec 13 14:00:41.753638 sshd[3475]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:00:41.756875 systemd-logind[1308]: New session 15 of user core. Dec 13 14:00:41.757636 systemd[1]: Started session-15.scope. Dec 13 14:00:41.954989 sshd[3475]: pam_unix(sshd:session): session closed for user core Dec 13 14:00:41.957044 systemd[1]: Started sshd@15-10.0.0.38:22-10.0.0.1:38676.service. Dec 13 14:00:41.957899 systemd[1]: sshd@14-10.0.0.38:22-10.0.0.1:38662.service: Deactivated successfully. Dec 13 14:00:41.958750 systemd-logind[1308]: Session 15 logged out. Waiting for processes to exit. Dec 13 14:00:41.958792 systemd[1]: session-15.scope: Deactivated successfully. Dec 13 14:00:41.961678 systemd-logind[1308]: Removed session 15. Dec 13 14:00:41.994219 sshd[3488]: Accepted publickey for core from 10.0.0.1 port 38676 ssh2: RSA SHA256:/HJyHm5Z3TKV0xVrRefgtheJNUHxRnoHBht1EzpqsE0 Dec 13 14:00:41.995372 sshd[3488]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:00:41.999245 systemd[1]: Started session-16.scope. Dec 13 14:00:42.000151 systemd-logind[1308]: New session 16 of user core. Dec 13 14:00:43.249168 sshd[3488]: pam_unix(sshd:session): session closed for user core Dec 13 14:00:43.251287 systemd[1]: Started sshd@16-10.0.0.38:22-10.0.0.1:50226.service. Dec 13 14:00:43.256828 systemd-logind[1308]: Session 16 logged out. Waiting for processes to exit. Dec 13 14:00:43.256931 systemd[1]: sshd@15-10.0.0.38:22-10.0.0.1:38676.service: Deactivated successfully. 
Dec 13 14:00:43.257718 systemd[1]: session-16.scope: Deactivated successfully. Dec 13 14:00:43.258160 systemd-logind[1308]: Removed session 16. Dec 13 14:00:43.289913 sshd[3506]: Accepted publickey for core from 10.0.0.1 port 50226 ssh2: RSA SHA256:/HJyHm5Z3TKV0xVrRefgtheJNUHxRnoHBht1EzpqsE0 Dec 13 14:00:43.291155 sshd[3506]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:00:43.294402 systemd-logind[1308]: New session 17 of user core. Dec 13 14:00:43.295073 systemd[1]: Started session-17.scope. Dec 13 14:00:43.521452 sshd[3506]: pam_unix(sshd:session): session closed for user core Dec 13 14:00:43.523293 systemd[1]: Started sshd@17-10.0.0.38:22-10.0.0.1:50236.service. Dec 13 14:00:43.527598 systemd[1]: sshd@16-10.0.0.38:22-10.0.0.1:50226.service: Deactivated successfully. Dec 13 14:00:43.530575 systemd-logind[1308]: Session 17 logged out. Waiting for processes to exit. Dec 13 14:00:43.530699 systemd[1]: session-17.scope: Deactivated successfully. Dec 13 14:00:43.533275 systemd-logind[1308]: Removed session 17. Dec 13 14:00:43.562093 sshd[3520]: Accepted publickey for core from 10.0.0.1 port 50236 ssh2: RSA SHA256:/HJyHm5Z3TKV0xVrRefgtheJNUHxRnoHBht1EzpqsE0 Dec 13 14:00:43.563287 sshd[3520]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:00:43.568336 systemd-logind[1308]: New session 18 of user core. Dec 13 14:00:43.569141 systemd[1]: Started session-18.scope. Dec 13 14:00:43.684274 sshd[3520]: pam_unix(sshd:session): session closed for user core Dec 13 14:00:43.688688 systemd[1]: sshd@17-10.0.0.38:22-10.0.0.1:50236.service: Deactivated successfully. Dec 13 14:00:43.689702 systemd-logind[1308]: Session 18 logged out. Waiting for processes to exit. Dec 13 14:00:43.689745 systemd[1]: session-18.scope: Deactivated successfully. Dec 13 14:00:43.690630 systemd-logind[1308]: Removed session 18. Dec 13 14:00:48.686815 systemd[1]: Started sshd@18-10.0.0.38:22-10.0.0.1:50244.service. Dec 13 14:00:48.721336 sshd[3541]: Accepted publickey for core from 10.0.0.1 port 50244 ssh2: RSA SHA256:/HJyHm5Z3TKV0xVrRefgtheJNUHxRnoHBht1EzpqsE0 Dec 13 14:00:48.722774 sshd[3541]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:00:48.726513 systemd-logind[1308]: New session 19 of user core. Dec 13 14:00:48.726613 systemd[1]: Started session-19.scope. Dec 13 14:00:48.829628 sshd[3541]: pam_unix(sshd:session): session closed for user core Dec 13 14:00:48.831957 systemd-logind[1308]: Session 19 logged out. Waiting for processes to exit. Dec 13 14:00:48.832147 systemd[1]: sshd@18-10.0.0.38:22-10.0.0.1:50244.service: Deactivated successfully. Dec 13 14:00:48.833002 systemd[1]: session-19.scope: Deactivated successfully. Dec 13 14:00:48.833363 systemd-logind[1308]: Removed session 19. Dec 13 14:00:53.832617 systemd[1]: Started sshd@19-10.0.0.38:22-10.0.0.1:56626.service. Dec 13 14:00:53.867523 sshd[3558]: Accepted publickey for core from 10.0.0.1 port 56626 ssh2: RSA SHA256:/HJyHm5Z3TKV0xVrRefgtheJNUHxRnoHBht1EzpqsE0 Dec 13 14:00:53.868708 sshd[3558]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:00:53.871751 systemd-logind[1308]: New session 20 of user core. Dec 13 14:00:53.872540 systemd[1]: Started session-20.scope. Dec 13 14:00:53.977306 sshd[3558]: pam_unix(sshd:session): session closed for user core Dec 13 14:00:53.979932 systemd[1]: sshd@19-10.0.0.38:22-10.0.0.1:56626.service: Deactivated successfully. Dec 13 14:00:53.980918 systemd-logind[1308]: Session 20 logged out. 
Waiting for processes to exit. Dec 13 14:00:53.980947 systemd[1]: session-20.scope: Deactivated successfully. Dec 13 14:00:53.981760 systemd-logind[1308]: Removed session 20. Dec 13 14:00:58.513624 kubelet[2178]: E1213 14:00:58.513568 2178 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:00:58.979547 systemd[1]: Started sshd@20-10.0.0.38:22-10.0.0.1:56632.service. Dec 13 14:00:59.015033 sshd[3574]: Accepted publickey for core from 10.0.0.1 port 56632 ssh2: RSA SHA256:/HJyHm5Z3TKV0xVrRefgtheJNUHxRnoHBht1EzpqsE0 Dec 13 14:00:59.016200 sshd[3574]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:00:59.019454 systemd-logind[1308]: New session 21 of user core. Dec 13 14:00:59.020349 systemd[1]: Started session-21.scope. Dec 13 14:00:59.126039 sshd[3574]: pam_unix(sshd:session): session closed for user core Dec 13 14:00:59.128802 systemd[1]: sshd@20-10.0.0.38:22-10.0.0.1:56632.service: Deactivated successfully. Dec 13 14:00:59.129880 systemd-logind[1308]: Session 21 logged out. Waiting for processes to exit. Dec 13 14:00:59.129921 systemd[1]: session-21.scope: Deactivated successfully. Dec 13 14:00:59.130664 systemd-logind[1308]: Removed session 21. Dec 13 14:01:00.512973 kubelet[2178]: E1213 14:01:00.512935 2178 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:01:04.129495 systemd[1]: Started sshd@21-10.0.0.38:22-10.0.0.1:40358.service. Dec 13 14:01:04.164623 sshd[3588]: Accepted publickey for core from 10.0.0.1 port 40358 ssh2: RSA SHA256:/HJyHm5Z3TKV0xVrRefgtheJNUHxRnoHBht1EzpqsE0 Dec 13 14:01:04.166237 sshd[3588]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:01:04.172929 systemd-logind[1308]: New session 22 of user core. Dec 13 14:01:04.174250 systemd[1]: Started session-22.scope. Dec 13 14:01:04.285179 sshd[3588]: pam_unix(sshd:session): session closed for user core Dec 13 14:01:04.286716 systemd[1]: Started sshd@22-10.0.0.38:22-10.0.0.1:40360.service. Dec 13 14:01:04.290174 systemd[1]: sshd@21-10.0.0.38:22-10.0.0.1:40358.service: Deactivated successfully. Dec 13 14:01:04.291148 systemd[1]: session-22.scope: Deactivated successfully. Dec 13 14:01:04.291161 systemd-logind[1308]: Session 22 logged out. Waiting for processes to exit. Dec 13 14:01:04.292115 systemd-logind[1308]: Removed session 22. Dec 13 14:01:04.322247 sshd[3601]: Accepted publickey for core from 10.0.0.1 port 40360 ssh2: RSA SHA256:/HJyHm5Z3TKV0xVrRefgtheJNUHxRnoHBht1EzpqsE0 Dec 13 14:01:04.323842 sshd[3601]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:01:04.327906 systemd-logind[1308]: New session 23 of user core. Dec 13 14:01:04.328365 systemd[1]: Started session-23.scope. Dec 13 14:01:06.089438 env[1323]: time="2024-12-13T14:01:06.089396826Z" level=info msg="StopContainer for \"c032c02f633d2088de5f6fd124b991d3978516dd31c113bf37817bfefcccb6a9\" with timeout 30 (s)" Dec 13 14:01:06.090120 env[1323]: time="2024-12-13T14:01:06.090091769Z" level=info msg="Stop container \"c032c02f633d2088de5f6fd124b991d3978516dd31c113bf37817bfefcccb6a9\" with signal terminated" Dec 13 14:01:06.102827 systemd[1]: run-containerd-runc-k8s.io-72ab615638e0ea89a742db4e32eb95716cea782fdacc4c8a894a0c28432ba7cd-runc.3uJvho.mount: Deactivated successfully. 
Dec 13 14:01:06.126020 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c032c02f633d2088de5f6fd124b991d3978516dd31c113bf37817bfefcccb6a9-rootfs.mount: Deactivated successfully. Dec 13 14:01:06.134981 env[1323]: time="2024-12-13T14:01:06.134935715Z" level=info msg="shim disconnected" id=c032c02f633d2088de5f6fd124b991d3978516dd31c113bf37817bfefcccb6a9 Dec 13 14:01:06.135276 env[1323]: time="2024-12-13T14:01:06.135256188Z" level=warning msg="cleaning up after shim disconnected" id=c032c02f633d2088de5f6fd124b991d3978516dd31c113bf37817bfefcccb6a9 namespace=k8s.io Dec 13 14:01:06.135361 env[1323]: time="2024-12-13T14:01:06.135346865Z" level=info msg="cleaning up dead shim" Dec 13 14:01:06.141617 env[1323]: time="2024-12-13T14:01:06.141568914Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 13 14:01:06.144581 env[1323]: time="2024-12-13T14:01:06.144546281Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:01:06Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3649 runtime=io.containerd.runc.v2\n" Dec 13 14:01:06.146684 env[1323]: time="2024-12-13T14:01:06.146657470Z" level=info msg="StopContainer for \"72ab615638e0ea89a742db4e32eb95716cea782fdacc4c8a894a0c28432ba7cd\" with timeout 2 (s)" Dec 13 14:01:06.147833 env[1323]: time="2024-12-13T14:01:06.147804842Z" level=info msg="Stop container \"72ab615638e0ea89a742db4e32eb95716cea782fdacc4c8a894a0c28432ba7cd\" with signal terminated" Dec 13 14:01:06.148264 env[1323]: time="2024-12-13T14:01:06.148232111Z" level=info msg="StopContainer for \"c032c02f633d2088de5f6fd124b991d3978516dd31c113bf37817bfefcccb6a9\" returns successfully" Dec 13 14:01:06.148835 env[1323]: time="2024-12-13T14:01:06.148802177Z" level=info msg="StopPodSandbox for \"758bcbd66a71dd4efcfae7b0049b49ce928e22690da7ee5929cb452044121391\"" Dec 13 14:01:06.148923 env[1323]: time="2024-12-13T14:01:06.148858216Z" level=info msg="Container to stop \"c032c02f633d2088de5f6fd124b991d3978516dd31c113bf37817bfefcccb6a9\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 14:01:06.150671 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-758bcbd66a71dd4efcfae7b0049b49ce928e22690da7ee5929cb452044121391-shm.mount: Deactivated successfully. Dec 13 14:01:06.154849 systemd-networkd[1101]: lxc_health: Link DOWN Dec 13 14:01:06.154855 systemd-networkd[1101]: lxc_health: Lost carrier Dec 13 14:01:06.175573 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-758bcbd66a71dd4efcfae7b0049b49ce928e22690da7ee5929cb452044121391-rootfs.mount: Deactivated successfully. 
Dec 13 14:01:06.181357 env[1323]: time="2024-12-13T14:01:06.181301865Z" level=info msg="shim disconnected" id=758bcbd66a71dd4efcfae7b0049b49ce928e22690da7ee5929cb452044121391 Dec 13 14:01:06.182078 env[1323]: time="2024-12-13T14:01:06.182052847Z" level=warning msg="cleaning up after shim disconnected" id=758bcbd66a71dd4efcfae7b0049b49ce928e22690da7ee5929cb452044121391 namespace=k8s.io Dec 13 14:01:06.182173 env[1323]: time="2024-12-13T14:01:06.182159484Z" level=info msg="cleaning up dead shim" Dec 13 14:01:06.190908 env[1323]: time="2024-12-13T14:01:06.190873152Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:01:06Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3694 runtime=io.containerd.runc.v2\n" Dec 13 14:01:06.191340 env[1323]: time="2024-12-13T14:01:06.191294342Z" level=info msg="TearDown network for sandbox \"758bcbd66a71dd4efcfae7b0049b49ce928e22690da7ee5929cb452044121391\" successfully" Dec 13 14:01:06.191479 env[1323]: time="2024-12-13T14:01:06.191456658Z" level=info msg="StopPodSandbox for \"758bcbd66a71dd4efcfae7b0049b49ce928e22690da7ee5929cb452044121391\" returns successfully" Dec 13 14:01:06.219232 env[1323]: time="2024-12-13T14:01:06.219188022Z" level=info msg="shim disconnected" id=72ab615638e0ea89a742db4e32eb95716cea782fdacc4c8a894a0c28432ba7cd Dec 13 14:01:06.219496 env[1323]: time="2024-12-13T14:01:06.219475495Z" level=warning msg="cleaning up after shim disconnected" id=72ab615638e0ea89a742db4e32eb95716cea782fdacc4c8a894a0c28432ba7cd namespace=k8s.io Dec 13 14:01:06.219567 env[1323]: time="2024-12-13T14:01:06.219554213Z" level=info msg="cleaning up dead shim" Dec 13 14:01:06.226064 env[1323]: time="2024-12-13T14:01:06.226030175Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:01:06Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3719 runtime=io.containerd.runc.v2\n" Dec 13 14:01:06.227857 env[1323]: time="2024-12-13T14:01:06.227821571Z" level=info msg="StopContainer for \"72ab615638e0ea89a742db4e32eb95716cea782fdacc4c8a894a0c28432ba7cd\" returns successfully" Dec 13 14:01:06.228430 env[1323]: time="2024-12-13T14:01:06.228402997Z" level=info msg="StopPodSandbox for \"02cf4630eb0a3e7359f7fc0c0f2108f51e3983843eff6e3ab788b7589ebd6719\"" Dec 13 14:01:06.228495 env[1323]: time="2024-12-13T14:01:06.228464275Z" level=info msg="Container to stop \"344124a90ad117cc446b20a61b2f4ef28d85816d2538bde6c80d4ca0e9ec3508\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 14:01:06.228495 env[1323]: time="2024-12-13T14:01:06.228479995Z" level=info msg="Container to stop \"badec54f5a3d74f655078865a62834580c3011d57db26762d2c86f1488f56757\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 14:01:06.228554 env[1323]: time="2024-12-13T14:01:06.228491755Z" level=info msg="Container to stop \"72ab615638e0ea89a742db4e32eb95716cea782fdacc4c8a894a0c28432ba7cd\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 14:01:06.228554 env[1323]: time="2024-12-13T14:01:06.228503474Z" level=info msg="Container to stop \"139ecb9aa4bd9ab1a5cb4d93c35165920413f40659e8e47b1b881f54eafc6f22\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 14:01:06.228554 env[1323]: time="2024-12-13T14:01:06.228513994Z" level=info msg="Container to stop \"3f0e80ee60130a2dd470a861bc449379d3aa024d364804d051dbfb10f5340b4b\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 14:01:06.234647 kubelet[2178]: I1213 14:01:06.234608 2178 
reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zwtnd\" (UniqueName: \"kubernetes.io/projected/aa7be849-98f2-4a76-b6fb-fbbe6335789a-kube-api-access-zwtnd\") pod \"aa7be849-98f2-4a76-b6fb-fbbe6335789a\" (UID: \"aa7be849-98f2-4a76-b6fb-fbbe6335789a\") " Dec 13 14:01:06.234999 kubelet[2178]: I1213 14:01:06.234934 2178 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/aa7be849-98f2-4a76-b6fb-fbbe6335789a-cilium-config-path\") pod \"aa7be849-98f2-4a76-b6fb-fbbe6335789a\" (UID: \"aa7be849-98f2-4a76-b6fb-fbbe6335789a\") " Dec 13 14:01:06.236858 kubelet[2178]: I1213 14:01:06.236820 2178 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aa7be849-98f2-4a76-b6fb-fbbe6335789a-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "aa7be849-98f2-4a76-b6fb-fbbe6335789a" (UID: "aa7be849-98f2-4a76-b6fb-fbbe6335789a"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 14:01:06.240130 kubelet[2178]: I1213 14:01:06.240082 2178 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aa7be849-98f2-4a76-b6fb-fbbe6335789a-kube-api-access-zwtnd" (OuterVolumeSpecName: "kube-api-access-zwtnd") pod "aa7be849-98f2-4a76-b6fb-fbbe6335789a" (UID: "aa7be849-98f2-4a76-b6fb-fbbe6335789a"). InnerVolumeSpecName "kube-api-access-zwtnd". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 14:01:06.257096 env[1323]: time="2024-12-13T14:01:06.257042219Z" level=info msg="shim disconnected" id=02cf4630eb0a3e7359f7fc0c0f2108f51e3983843eff6e3ab788b7589ebd6719 Dec 13 14:01:06.257942 env[1323]: time="2024-12-13T14:01:06.257912758Z" level=warning msg="cleaning up after shim disconnected" id=02cf4630eb0a3e7359f7fc0c0f2108f51e3983843eff6e3ab788b7589ebd6719 namespace=k8s.io Dec 13 14:01:06.258037 env[1323]: time="2024-12-13T14:01:06.258022715Z" level=info msg="cleaning up dead shim" Dec 13 14:01:06.265100 env[1323]: time="2024-12-13T14:01:06.265067743Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:01:06Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3753 runtime=io.containerd.runc.v2\n" Dec 13 14:01:06.265530 env[1323]: time="2024-12-13T14:01:06.265497693Z" level=info msg="TearDown network for sandbox \"02cf4630eb0a3e7359f7fc0c0f2108f51e3983843eff6e3ab788b7589ebd6719\" successfully" Dec 13 14:01:06.265633 env[1323]: time="2024-12-13T14:01:06.265615450Z" level=info msg="StopPodSandbox for \"02cf4630eb0a3e7359f7fc0c0f2108f51e3983843eff6e3ab788b7589ebd6719\" returns successfully" Dec 13 14:01:06.336779 kubelet[2178]: I1213 14:01:06.336576 2178 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cb4ae995-abc2-457d-a5ea-088f4d0ec161-lib-modules\") pod \"cb4ae995-abc2-457d-a5ea-088f4d0ec161\" (UID: \"cb4ae995-abc2-457d-a5ea-088f4d0ec161\") " Dec 13 14:01:06.336779 kubelet[2178]: I1213 14:01:06.336617 2178 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/cb4ae995-abc2-457d-a5ea-088f4d0ec161-cni-path\") pod \"cb4ae995-abc2-457d-a5ea-088f4d0ec161\" (UID: \"cb4ae995-abc2-457d-a5ea-088f4d0ec161\") " Dec 13 14:01:06.336779 kubelet[2178]: I1213 14:01:06.336638 2178 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume 
\"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/cb4ae995-abc2-457d-a5ea-088f4d0ec161-host-proc-sys-kernel\") pod \"cb4ae995-abc2-457d-a5ea-088f4d0ec161\" (UID: \"cb4ae995-abc2-457d-a5ea-088f4d0ec161\") " Dec 13 14:01:06.336779 kubelet[2178]: I1213 14:01:06.336655 2178 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/cb4ae995-abc2-457d-a5ea-088f4d0ec161-hostproc\") pod \"cb4ae995-abc2-457d-a5ea-088f4d0ec161\" (UID: \"cb4ae995-abc2-457d-a5ea-088f4d0ec161\") " Dec 13 14:01:06.336779 kubelet[2178]: I1213 14:01:06.336664 2178 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cb4ae995-abc2-457d-a5ea-088f4d0ec161-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "cb4ae995-abc2-457d-a5ea-088f4d0ec161" (UID: "cb4ae995-abc2-457d-a5ea-088f4d0ec161"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:01:06.336779 kubelet[2178]: I1213 14:01:06.336735 2178 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cb4ae995-abc2-457d-a5ea-088f4d0ec161-hostproc" (OuterVolumeSpecName: "hostproc") pod "cb4ae995-abc2-457d-a5ea-088f4d0ec161" (UID: "cb4ae995-abc2-457d-a5ea-088f4d0ec161"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:01:06.337057 kubelet[2178]: I1213 14:01:06.336679 2178 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gjbvd\" (UniqueName: \"kubernetes.io/projected/cb4ae995-abc2-457d-a5ea-088f4d0ec161-kube-api-access-gjbvd\") pod \"cb4ae995-abc2-457d-a5ea-088f4d0ec161\" (UID: \"cb4ae995-abc2-457d-a5ea-088f4d0ec161\") " Dec 13 14:01:06.337057 kubelet[2178]: I1213 14:01:06.336806 2178 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/cb4ae995-abc2-457d-a5ea-088f4d0ec161-bpf-maps\") pod \"cb4ae995-abc2-457d-a5ea-088f4d0ec161\" (UID: \"cb4ae995-abc2-457d-a5ea-088f4d0ec161\") " Dec 13 14:01:06.337057 kubelet[2178]: I1213 14:01:06.336708 2178 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cb4ae995-abc2-457d-a5ea-088f4d0ec161-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "cb4ae995-abc2-457d-a5ea-088f4d0ec161" (UID: "cb4ae995-abc2-457d-a5ea-088f4d0ec161"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:01:06.337057 kubelet[2178]: I1213 14:01:06.336727 2178 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cb4ae995-abc2-457d-a5ea-088f4d0ec161-cni-path" (OuterVolumeSpecName: "cni-path") pod "cb4ae995-abc2-457d-a5ea-088f4d0ec161" (UID: "cb4ae995-abc2-457d-a5ea-088f4d0ec161"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:01:06.337057 kubelet[2178]: I1213 14:01:06.336852 2178 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/cb4ae995-abc2-457d-a5ea-088f4d0ec161-cilium-cgroup\") pod \"cb4ae995-abc2-457d-a5ea-088f4d0ec161\" (UID: \"cb4ae995-abc2-457d-a5ea-088f4d0ec161\") " Dec 13 14:01:06.337170 kubelet[2178]: I1213 14:01:06.336875 2178 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/cb4ae995-abc2-457d-a5ea-088f4d0ec161-cilium-config-path\") pod \"cb4ae995-abc2-457d-a5ea-088f4d0ec161\" (UID: \"cb4ae995-abc2-457d-a5ea-088f4d0ec161\") " Dec 13 14:01:06.337170 kubelet[2178]: I1213 14:01:06.336912 2178 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cb4ae995-abc2-457d-a5ea-088f4d0ec161-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "cb4ae995-abc2-457d-a5ea-088f4d0ec161" (UID: "cb4ae995-abc2-457d-a5ea-088f4d0ec161"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:01:06.337170 kubelet[2178]: I1213 14:01:06.336930 2178 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cb4ae995-abc2-457d-a5ea-088f4d0ec161-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "cb4ae995-abc2-457d-a5ea-088f4d0ec161" (UID: "cb4ae995-abc2-457d-a5ea-088f4d0ec161"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:01:06.337385 kubelet[2178]: I1213 14:01:06.337342 2178 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cb4ae995-abc2-457d-a5ea-088f4d0ec161-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "cb4ae995-abc2-457d-a5ea-088f4d0ec161" (UID: "cb4ae995-abc2-457d-a5ea-088f4d0ec161"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:01:06.338987 kubelet[2178]: I1213 14:01:06.337463 2178 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/cb4ae995-abc2-457d-a5ea-088f4d0ec161-etc-cni-netd\") pod \"cb4ae995-abc2-457d-a5ea-088f4d0ec161\" (UID: \"cb4ae995-abc2-457d-a5ea-088f4d0ec161\") " Dec 13 14:01:06.338987 kubelet[2178]: I1213 14:01:06.337507 2178 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/cb4ae995-abc2-457d-a5ea-088f4d0ec161-host-proc-sys-net\") pod \"cb4ae995-abc2-457d-a5ea-088f4d0ec161\" (UID: \"cb4ae995-abc2-457d-a5ea-088f4d0ec161\") " Dec 13 14:01:06.338987 kubelet[2178]: I1213 14:01:06.337527 2178 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/cb4ae995-abc2-457d-a5ea-088f4d0ec161-xtables-lock\") pod \"cb4ae995-abc2-457d-a5ea-088f4d0ec161\" (UID: \"cb4ae995-abc2-457d-a5ea-088f4d0ec161\") " Dec 13 14:01:06.338987 kubelet[2178]: I1213 14:01:06.337584 2178 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cb4ae995-abc2-457d-a5ea-088f4d0ec161-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "cb4ae995-abc2-457d-a5ea-088f4d0ec161" (UID: "cb4ae995-abc2-457d-a5ea-088f4d0ec161"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:01:06.338987 kubelet[2178]: I1213 14:01:06.337605 2178 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/cb4ae995-abc2-457d-a5ea-088f4d0ec161-clustermesh-secrets\") pod \"cb4ae995-abc2-457d-a5ea-088f4d0ec161\" (UID: \"cb4ae995-abc2-457d-a5ea-088f4d0ec161\") " Dec 13 14:01:06.339161 kubelet[2178]: I1213 14:01:06.337603 2178 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cb4ae995-abc2-457d-a5ea-088f4d0ec161-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "cb4ae995-abc2-457d-a5ea-088f4d0ec161" (UID: "cb4ae995-abc2-457d-a5ea-088f4d0ec161"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:01:06.339161 kubelet[2178]: I1213 14:01:06.337625 2178 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/cb4ae995-abc2-457d-a5ea-088f4d0ec161-cilium-run\") pod \"cb4ae995-abc2-457d-a5ea-088f4d0ec161\" (UID: \"cb4ae995-abc2-457d-a5ea-088f4d0ec161\") " Dec 13 14:01:06.339161 kubelet[2178]: I1213 14:01:06.337645 2178 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/cb4ae995-abc2-457d-a5ea-088f4d0ec161-hubble-tls\") pod \"cb4ae995-abc2-457d-a5ea-088f4d0ec161\" (UID: \"cb4ae995-abc2-457d-a5ea-088f4d0ec161\") " Dec 13 14:01:06.339161 kubelet[2178]: I1213 14:01:06.337671 2178 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cb4ae995-abc2-457d-a5ea-088f4d0ec161-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "cb4ae995-abc2-457d-a5ea-088f4d0ec161" (UID: "cb4ae995-abc2-457d-a5ea-088f4d0ec161"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:01:06.339161 kubelet[2178]: I1213 14:01:06.337709 2178 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/cb4ae995-abc2-457d-a5ea-088f4d0ec161-bpf-maps\") on node \"localhost\" DevicePath \"\"" Dec 13 14:01:06.339161 kubelet[2178]: I1213 14:01:06.337721 2178 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/cb4ae995-abc2-457d-a5ea-088f4d0ec161-hostproc\") on node \"localhost\" DevicePath \"\"" Dec 13 14:01:06.339295 kubelet[2178]: I1213 14:01:06.337731 2178 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/cb4ae995-abc2-457d-a5ea-088f4d0ec161-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Dec 13 14:01:06.339295 kubelet[2178]: I1213 14:01:06.337953 2178 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-zwtnd\" (UniqueName: \"kubernetes.io/projected/aa7be849-98f2-4a76-b6fb-fbbe6335789a-kube-api-access-zwtnd\") on node \"localhost\" DevicePath \"\"" Dec 13 14:01:06.339295 kubelet[2178]: I1213 14:01:06.337970 2178 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/cb4ae995-abc2-457d-a5ea-088f4d0ec161-xtables-lock\") on node \"localhost\" DevicePath \"\"" Dec 13 14:01:06.339295 kubelet[2178]: I1213 14:01:06.337980 2178 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/cb4ae995-abc2-457d-a5ea-088f4d0ec161-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Dec 13 14:01:06.339295 kubelet[2178]: I1213 14:01:06.337989 2178 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/cb4ae995-abc2-457d-a5ea-088f4d0ec161-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Dec 13 14:01:06.339295 kubelet[2178]: I1213 14:01:06.338000 2178 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/cb4ae995-abc2-457d-a5ea-088f4d0ec161-cilium-run\") on node \"localhost\" DevicePath \"\"" Dec 13 14:01:06.339295 kubelet[2178]: I1213 14:01:06.338010 2178 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/aa7be849-98f2-4a76-b6fb-fbbe6335789a-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Dec 13 14:01:06.339295 kubelet[2178]: I1213 14:01:06.338019 2178 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/cb4ae995-abc2-457d-a5ea-088f4d0ec161-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Dec 13 14:01:06.339547 kubelet[2178]: I1213 14:01:06.338028 2178 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cb4ae995-abc2-457d-a5ea-088f4d0ec161-lib-modules\") on node \"localhost\" DevicePath \"\"" Dec 13 14:01:06.339547 kubelet[2178]: I1213 14:01:06.338036 2178 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/cb4ae995-abc2-457d-a5ea-088f4d0ec161-cni-path\") on node \"localhost\" DevicePath \"\"" Dec 13 14:01:06.339547 kubelet[2178]: I1213 14:01:06.339079 2178 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cb4ae995-abc2-457d-a5ea-088f4d0ec161-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod 
"cb4ae995-abc2-457d-a5ea-088f4d0ec161" (UID: "cb4ae995-abc2-457d-a5ea-088f4d0ec161"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 14:01:06.340354 kubelet[2178]: I1213 14:01:06.340329 2178 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cb4ae995-abc2-457d-a5ea-088f4d0ec161-kube-api-access-gjbvd" (OuterVolumeSpecName: "kube-api-access-gjbvd") pod "cb4ae995-abc2-457d-a5ea-088f4d0ec161" (UID: "cb4ae995-abc2-457d-a5ea-088f4d0ec161"). InnerVolumeSpecName "kube-api-access-gjbvd". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 14:01:06.341093 kubelet[2178]: I1213 14:01:06.341060 2178 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cb4ae995-abc2-457d-a5ea-088f4d0ec161-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "cb4ae995-abc2-457d-a5ea-088f4d0ec161" (UID: "cb4ae995-abc2-457d-a5ea-088f4d0ec161"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 14:01:06.341203 kubelet[2178]: I1213 14:01:06.341185 2178 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cb4ae995-abc2-457d-a5ea-088f4d0ec161-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "cb4ae995-abc2-457d-a5ea-088f4d0ec161" (UID: "cb4ae995-abc2-457d-a5ea-088f4d0ec161"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 14:01:06.438486 kubelet[2178]: I1213 14:01:06.438446 2178 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/cb4ae995-abc2-457d-a5ea-088f4d0ec161-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Dec 13 14:01:06.438486 kubelet[2178]: I1213 14:01:06.438484 2178 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/cb4ae995-abc2-457d-a5ea-088f4d0ec161-hubble-tls\") on node \"localhost\" DevicePath \"\"" Dec 13 14:01:06.438631 kubelet[2178]: I1213 14:01:06.438499 2178 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-gjbvd\" (UniqueName: \"kubernetes.io/projected/cb4ae995-abc2-457d-a5ea-088f4d0ec161-kube-api-access-gjbvd\") on node \"localhost\" DevicePath \"\"" Dec 13 14:01:06.438631 kubelet[2178]: I1213 14:01:06.438512 2178 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/cb4ae995-abc2-457d-a5ea-088f4d0ec161-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Dec 13 14:01:06.583919 kubelet[2178]: E1213 14:01:06.583896 2178 kubelet.go:2892] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Dec 13 14:01:06.740851 kubelet[2178]: I1213 14:01:06.740460 2178 scope.go:117] "RemoveContainer" containerID="72ab615638e0ea89a742db4e32eb95716cea782fdacc4c8a894a0c28432ba7cd" Dec 13 14:01:06.742259 env[1323]: time="2024-12-13T14:01:06.742223751Z" level=info msg="RemoveContainer for \"72ab615638e0ea89a742db4e32eb95716cea782fdacc4c8a894a0c28432ba7cd\"" Dec 13 14:01:06.746976 env[1323]: time="2024-12-13T14:01:06.746940636Z" level=info msg="RemoveContainer for \"72ab615638e0ea89a742db4e32eb95716cea782fdacc4c8a894a0c28432ba7cd\" returns successfully" Dec 13 14:01:06.747296 kubelet[2178]: I1213 14:01:06.747277 2178 scope.go:117] "RemoveContainer" 
containerID="3f0e80ee60130a2dd470a861bc449379d3aa024d364804d051dbfb10f5340b4b" Dec 13 14:01:06.748474 env[1323]: time="2024-12-13T14:01:06.748445880Z" level=info msg="RemoveContainer for \"3f0e80ee60130a2dd470a861bc449379d3aa024d364804d051dbfb10f5340b4b\"" Dec 13 14:01:06.752091 env[1323]: time="2024-12-13T14:01:06.752060951Z" level=info msg="RemoveContainer for \"3f0e80ee60130a2dd470a861bc449379d3aa024d364804d051dbfb10f5340b4b\" returns successfully" Dec 13 14:01:06.752476 kubelet[2178]: I1213 14:01:06.752453 2178 scope.go:117] "RemoveContainer" containerID="139ecb9aa4bd9ab1a5cb4d93c35165920413f40659e8e47b1b881f54eafc6f22" Dec 13 14:01:06.754197 env[1323]: time="2024-12-13T14:01:06.754171740Z" level=info msg="RemoveContainer for \"139ecb9aa4bd9ab1a5cb4d93c35165920413f40659e8e47b1b881f54eafc6f22\"" Dec 13 14:01:06.757036 env[1323]: time="2024-12-13T14:01:06.757008351Z" level=info msg="RemoveContainer for \"139ecb9aa4bd9ab1a5cb4d93c35165920413f40659e8e47b1b881f54eafc6f22\" returns successfully" Dec 13 14:01:06.757327 kubelet[2178]: I1213 14:01:06.757238 2178 scope.go:117] "RemoveContainer" containerID="badec54f5a3d74f655078865a62834580c3011d57db26762d2c86f1488f56757" Dec 13 14:01:06.759621 env[1323]: time="2024-12-13T14:01:06.759594528Z" level=info msg="RemoveContainer for \"badec54f5a3d74f655078865a62834580c3011d57db26762d2c86f1488f56757\"" Dec 13 14:01:06.761846 env[1323]: time="2024-12-13T14:01:06.761818314Z" level=info msg="RemoveContainer for \"badec54f5a3d74f655078865a62834580c3011d57db26762d2c86f1488f56757\" returns successfully" Dec 13 14:01:06.762143 kubelet[2178]: I1213 14:01:06.762052 2178 scope.go:117] "RemoveContainer" containerID="344124a90ad117cc446b20a61b2f4ef28d85816d2538bde6c80d4ca0e9ec3508" Dec 13 14:01:06.762948 env[1323]: time="2024-12-13T14:01:06.762865328Z" level=info msg="RemoveContainer for \"344124a90ad117cc446b20a61b2f4ef28d85816d2538bde6c80d4ca0e9ec3508\"" Dec 13 14:01:06.765005 env[1323]: time="2024-12-13T14:01:06.764973917Z" level=info msg="RemoveContainer for \"344124a90ad117cc446b20a61b2f4ef28d85816d2538bde6c80d4ca0e9ec3508\" returns successfully" Dec 13 14:01:06.765285 kubelet[2178]: I1213 14:01:06.765193 2178 scope.go:117] "RemoveContainer" containerID="72ab615638e0ea89a742db4e32eb95716cea782fdacc4c8a894a0c28432ba7cd" Dec 13 14:01:06.765437 env[1323]: time="2024-12-13T14:01:06.765342108Z" level=error msg="ContainerStatus for \"72ab615638e0ea89a742db4e32eb95716cea782fdacc4c8a894a0c28432ba7cd\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"72ab615638e0ea89a742db4e32eb95716cea782fdacc4c8a894a0c28432ba7cd\": not found" Dec 13 14:01:06.765774 kubelet[2178]: E1213 14:01:06.765593 2178 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"72ab615638e0ea89a742db4e32eb95716cea782fdacc4c8a894a0c28432ba7cd\": not found" containerID="72ab615638e0ea89a742db4e32eb95716cea782fdacc4c8a894a0c28432ba7cd" Dec 13 14:01:06.765774 kubelet[2178]: I1213 14:01:06.765677 2178 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"72ab615638e0ea89a742db4e32eb95716cea782fdacc4c8a894a0c28432ba7cd"} err="failed to get container status \"72ab615638e0ea89a742db4e32eb95716cea782fdacc4c8a894a0c28432ba7cd\": rpc error: code = NotFound desc = an error occurred when try to find container \"72ab615638e0ea89a742db4e32eb95716cea782fdacc4c8a894a0c28432ba7cd\": not found" Dec 13 14:01:06.765774 
kubelet[2178]: I1213 14:01:06.765691 2178 scope.go:117] "RemoveContainer" containerID="3f0e80ee60130a2dd470a861bc449379d3aa024d364804d051dbfb10f5340b4b" Dec 13 14:01:06.765888 env[1323]: time="2024-12-13T14:01:06.765809296Z" level=error msg="ContainerStatus for \"3f0e80ee60130a2dd470a861bc449379d3aa024d364804d051dbfb10f5340b4b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"3f0e80ee60130a2dd470a861bc449379d3aa024d364804d051dbfb10f5340b4b\": not found" Dec 13 14:01:06.766118 kubelet[2178]: E1213 14:01:06.766008 2178 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"3f0e80ee60130a2dd470a861bc449379d3aa024d364804d051dbfb10f5340b4b\": not found" containerID="3f0e80ee60130a2dd470a861bc449379d3aa024d364804d051dbfb10f5340b4b" Dec 13 14:01:06.766118 kubelet[2178]: I1213 14:01:06.766034 2178 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"3f0e80ee60130a2dd470a861bc449379d3aa024d364804d051dbfb10f5340b4b"} err="failed to get container status \"3f0e80ee60130a2dd470a861bc449379d3aa024d364804d051dbfb10f5340b4b\": rpc error: code = NotFound desc = an error occurred when try to find container \"3f0e80ee60130a2dd470a861bc449379d3aa024d364804d051dbfb10f5340b4b\": not found" Dec 13 14:01:06.766118 kubelet[2178]: I1213 14:01:06.766044 2178 scope.go:117] "RemoveContainer" containerID="139ecb9aa4bd9ab1a5cb4d93c35165920413f40659e8e47b1b881f54eafc6f22" Dec 13 14:01:06.766226 env[1323]: time="2024-12-13T14:01:06.766157888Z" level=error msg="ContainerStatus for \"139ecb9aa4bd9ab1a5cb4d93c35165920413f40659e8e47b1b881f54eafc6f22\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"139ecb9aa4bd9ab1a5cb4d93c35165920413f40659e8e47b1b881f54eafc6f22\": not found" Dec 13 14:01:06.766486 kubelet[2178]: E1213 14:01:06.766352 2178 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"139ecb9aa4bd9ab1a5cb4d93c35165920413f40659e8e47b1b881f54eafc6f22\": not found" containerID="139ecb9aa4bd9ab1a5cb4d93c35165920413f40659e8e47b1b881f54eafc6f22" Dec 13 14:01:06.766486 kubelet[2178]: I1213 14:01:06.766401 2178 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"139ecb9aa4bd9ab1a5cb4d93c35165920413f40659e8e47b1b881f54eafc6f22"} err="failed to get container status \"139ecb9aa4bd9ab1a5cb4d93c35165920413f40659e8e47b1b881f54eafc6f22\": rpc error: code = NotFound desc = an error occurred when try to find container \"139ecb9aa4bd9ab1a5cb4d93c35165920413f40659e8e47b1b881f54eafc6f22\": not found" Dec 13 14:01:06.766486 kubelet[2178]: I1213 14:01:06.766411 2178 scope.go:117] "RemoveContainer" containerID="badec54f5a3d74f655078865a62834580c3011d57db26762d2c86f1488f56757" Dec 13 14:01:06.766600 env[1323]: time="2024-12-13T14:01:06.766549758Z" level=error msg="ContainerStatus for \"badec54f5a3d74f655078865a62834580c3011d57db26762d2c86f1488f56757\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"badec54f5a3d74f655078865a62834580c3011d57db26762d2c86f1488f56757\": not found" Dec 13 14:01:06.766825 kubelet[2178]: E1213 14:01:06.766714 2178 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container 
\"badec54f5a3d74f655078865a62834580c3011d57db26762d2c86f1488f56757\": not found" containerID="badec54f5a3d74f655078865a62834580c3011d57db26762d2c86f1488f56757" Dec 13 14:01:06.766825 kubelet[2178]: I1213 14:01:06.766739 2178 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"badec54f5a3d74f655078865a62834580c3011d57db26762d2c86f1488f56757"} err="failed to get container status \"badec54f5a3d74f655078865a62834580c3011d57db26762d2c86f1488f56757\": rpc error: code = NotFound desc = an error occurred when try to find container \"badec54f5a3d74f655078865a62834580c3011d57db26762d2c86f1488f56757\": not found" Dec 13 14:01:06.766825 kubelet[2178]: I1213 14:01:06.766748 2178 scope.go:117] "RemoveContainer" containerID="344124a90ad117cc446b20a61b2f4ef28d85816d2538bde6c80d4ca0e9ec3508" Dec 13 14:01:06.766925 env[1323]: time="2024-12-13T14:01:06.766869750Z" level=error msg="ContainerStatus for \"344124a90ad117cc446b20a61b2f4ef28d85816d2538bde6c80d4ca0e9ec3508\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"344124a90ad117cc446b20a61b2f4ef28d85816d2538bde6c80d4ca0e9ec3508\": not found" Dec 13 14:01:06.767157 kubelet[2178]: E1213 14:01:06.767049 2178 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"344124a90ad117cc446b20a61b2f4ef28d85816d2538bde6c80d4ca0e9ec3508\": not found" containerID="344124a90ad117cc446b20a61b2f4ef28d85816d2538bde6c80d4ca0e9ec3508" Dec 13 14:01:06.767157 kubelet[2178]: I1213 14:01:06.767073 2178 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"344124a90ad117cc446b20a61b2f4ef28d85816d2538bde6c80d4ca0e9ec3508"} err="failed to get container status \"344124a90ad117cc446b20a61b2f4ef28d85816d2538bde6c80d4ca0e9ec3508\": rpc error: code = NotFound desc = an error occurred when try to find container \"344124a90ad117cc446b20a61b2f4ef28d85816d2538bde6c80d4ca0e9ec3508\": not found" Dec 13 14:01:06.767157 kubelet[2178]: I1213 14:01:06.767082 2178 scope.go:117] "RemoveContainer" containerID="c032c02f633d2088de5f6fd124b991d3978516dd31c113bf37817bfefcccb6a9" Dec 13 14:01:06.767986 env[1323]: time="2024-12-13T14:01:06.767953924Z" level=info msg="RemoveContainer for \"c032c02f633d2088de5f6fd124b991d3978516dd31c113bf37817bfefcccb6a9\"" Dec 13 14:01:06.769905 env[1323]: time="2024-12-13T14:01:06.769871157Z" level=info msg="RemoveContainer for \"c032c02f633d2088de5f6fd124b991d3978516dd31c113bf37817bfefcccb6a9\" returns successfully" Dec 13 14:01:06.770113 kubelet[2178]: I1213 14:01:06.770034 2178 scope.go:117] "RemoveContainer" containerID="c032c02f633d2088de5f6fd124b991d3978516dd31c113bf37817bfefcccb6a9" Dec 13 14:01:06.770480 env[1323]: time="2024-12-13T14:01:06.770432864Z" level=error msg="ContainerStatus for \"c032c02f633d2088de5f6fd124b991d3978516dd31c113bf37817bfefcccb6a9\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c032c02f633d2088de5f6fd124b991d3978516dd31c113bf37817bfefcccb6a9\": not found" Dec 13 14:01:06.770723 kubelet[2178]: E1213 14:01:06.770671 2178 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c032c02f633d2088de5f6fd124b991d3978516dd31c113bf37817bfefcccb6a9\": not found" containerID="c032c02f633d2088de5f6fd124b991d3978516dd31c113bf37817bfefcccb6a9" Dec 13 14:01:06.770723 
kubelet[2178]: I1213 14:01:06.770699 2178 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c032c02f633d2088de5f6fd124b991d3978516dd31c113bf37817bfefcccb6a9"} err="failed to get container status \"c032c02f633d2088de5f6fd124b991d3978516dd31c113bf37817bfefcccb6a9\": rpc error: code = NotFound desc = an error occurred when try to find container \"c032c02f633d2088de5f6fd124b991d3978516dd31c113bf37817bfefcccb6a9\": not found" Dec 13 14:01:07.096633 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-72ab615638e0ea89a742db4e32eb95716cea782fdacc4c8a894a0c28432ba7cd-rootfs.mount: Deactivated successfully. Dec 13 14:01:07.096786 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-02cf4630eb0a3e7359f7fc0c0f2108f51e3983843eff6e3ab788b7589ebd6719-rootfs.mount: Deactivated successfully. Dec 13 14:01:07.096878 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-02cf4630eb0a3e7359f7fc0c0f2108f51e3983843eff6e3ab788b7589ebd6719-shm.mount: Deactivated successfully. Dec 13 14:01:07.096960 systemd[1]: var-lib-kubelet-pods-cb4ae995\x2dabc2\x2d457d\x2da5ea\x2d088f4d0ec161-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dgjbvd.mount: Deactivated successfully. Dec 13 14:01:07.097045 systemd[1]: var-lib-kubelet-pods-aa7be849\x2d98f2\x2d4a76\x2db6fb\x2dfbbe6335789a-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dzwtnd.mount: Deactivated successfully. Dec 13 14:01:07.097131 systemd[1]: var-lib-kubelet-pods-cb4ae995\x2dabc2\x2d457d\x2da5ea\x2d088f4d0ec161-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Dec 13 14:01:07.097212 systemd[1]: var-lib-kubelet-pods-cb4ae995\x2dabc2\x2d457d\x2da5ea\x2d088f4d0ec161-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Dec 13 14:01:07.514844 kubelet[2178]: I1213 14:01:07.514804 2178 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="aa7be849-98f2-4a76-b6fb-fbbe6335789a" path="/var/lib/kubelet/pods/aa7be849-98f2-4a76-b6fb-fbbe6335789a/volumes" Dec 13 14:01:07.515306 kubelet[2178]: I1213 14:01:07.515256 2178 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="cb4ae995-abc2-457d-a5ea-088f4d0ec161" path="/var/lib/kubelet/pods/cb4ae995-abc2-457d-a5ea-088f4d0ec161/volumes" Dec 13 14:01:08.057878 sshd[3601]: pam_unix(sshd:session): session closed for user core Dec 13 14:01:08.059556 systemd[1]: Started sshd@23-10.0.0.38:22-10.0.0.1:40364.service. Dec 13 14:01:08.060795 systemd[1]: sshd@22-10.0.0.38:22-10.0.0.1:40360.service: Deactivated successfully. Dec 13 14:01:08.061562 systemd[1]: session-23.scope: Deactivated successfully. Dec 13 14:01:08.062521 systemd-logind[1308]: Session 23 logged out. Waiting for processes to exit. Dec 13 14:01:08.063443 systemd-logind[1308]: Removed session 23. Dec 13 14:01:08.099893 sshd[3769]: Accepted publickey for core from 10.0.0.1 port 40364 ssh2: RSA SHA256:/HJyHm5Z3TKV0xVrRefgtheJNUHxRnoHBht1EzpqsE0 Dec 13 14:01:08.101471 sshd[3769]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:01:08.105555 systemd-logind[1308]: New session 24 of user core. Dec 13 14:01:08.106175 systemd[1]: Started session-24.scope. Dec 13 14:01:09.088339 sshd[3769]: pam_unix(sshd:session): session closed for user core Dec 13 14:01:09.090228 systemd[1]: Started sshd@24-10.0.0.38:22-10.0.0.1:40374.service. Dec 13 14:01:09.100394 systemd[1]: sshd@23-10.0.0.38:22-10.0.0.1:40364.service: Deactivated successfully. 
Dec 13 14:01:09.105232 systemd[1]: session-24.scope: Deactivated successfully. Dec 13 14:01:09.106628 systemd-logind[1308]: Session 24 logged out. Waiting for processes to exit. Dec 13 14:01:09.109617 kubelet[2178]: I1213 14:01:09.109586 2178 topology_manager.go:215] "Topology Admit Handler" podUID="897efa95-7ebf-4186-848f-39374360a689" podNamespace="kube-system" podName="cilium-vw2z8" Dec 13 14:01:09.110234 kubelet[2178]: E1213 14:01:09.110210 2178 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="aa7be849-98f2-4a76-b6fb-fbbe6335789a" containerName="cilium-operator" Dec 13 14:01:09.110320 kubelet[2178]: E1213 14:01:09.110309 2178 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="cb4ae995-abc2-457d-a5ea-088f4d0ec161" containerName="mount-bpf-fs" Dec 13 14:01:09.110713 kubelet[2178]: E1213 14:01:09.110661 2178 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="cb4ae995-abc2-457d-a5ea-088f4d0ec161" containerName="cilium-agent" Dec 13 14:01:09.110854 kubelet[2178]: E1213 14:01:09.110838 2178 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="cb4ae995-abc2-457d-a5ea-088f4d0ec161" containerName="mount-cgroup" Dec 13 14:01:09.110994 kubelet[2178]: E1213 14:01:09.110975 2178 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="cb4ae995-abc2-457d-a5ea-088f4d0ec161" containerName="apply-sysctl-overwrites" Dec 13 14:01:09.111066 kubelet[2178]: E1213 14:01:09.111055 2178 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="cb4ae995-abc2-457d-a5ea-088f4d0ec161" containerName="clean-cilium-state" Dec 13 14:01:09.111147 kubelet[2178]: I1213 14:01:09.111135 2178 memory_manager.go:354] "RemoveStaleState removing state" podUID="aa7be849-98f2-4a76-b6fb-fbbe6335789a" containerName="cilium-operator" Dec 13 14:01:09.111207 kubelet[2178]: I1213 14:01:09.111199 2178 memory_manager.go:354] "RemoveStaleState removing state" podUID="cb4ae995-abc2-457d-a5ea-088f4d0ec161" containerName="cilium-agent" Dec 13 14:01:09.119669 systemd-logind[1308]: Removed session 24. Dec 13 14:01:09.134512 sshd[3781]: Accepted publickey for core from 10.0.0.1 port 40374 ssh2: RSA SHA256:/HJyHm5Z3TKV0xVrRefgtheJNUHxRnoHBht1EzpqsE0 Dec 13 14:01:09.135413 sshd[3781]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:01:09.142556 systemd[1]: Started session-25.scope. Dec 13 14:01:09.142912 systemd-logind[1308]: New session 25 of user core.
Dec 13 14:01:09.154056 kubelet[2178]: I1213 14:01:09.154003 2178 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/897efa95-7ebf-4186-848f-39374360a689-cilium-config-path\") pod \"cilium-vw2z8\" (UID: \"897efa95-7ebf-4186-848f-39374360a689\") " pod="kube-system/cilium-vw2z8" Dec 13 14:01:09.154168 kubelet[2178]: I1213 14:01:09.154088 2178 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/897efa95-7ebf-4186-848f-39374360a689-cilium-run\") pod \"cilium-vw2z8\" (UID: \"897efa95-7ebf-4186-848f-39374360a689\") " pod="kube-system/cilium-vw2z8" Dec 13 14:01:09.154168 kubelet[2178]: I1213 14:01:09.154114 2178 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/897efa95-7ebf-4186-848f-39374360a689-clustermesh-secrets\") pod \"cilium-vw2z8\" (UID: \"897efa95-7ebf-4186-848f-39374360a689\") " pod="kube-system/cilium-vw2z8" Dec 13 14:01:09.154232 kubelet[2178]: I1213 14:01:09.154172 2178 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/897efa95-7ebf-4186-848f-39374360a689-lib-modules\") pod \"cilium-vw2z8\" (UID: \"897efa95-7ebf-4186-848f-39374360a689\") " pod="kube-system/cilium-vw2z8" Dec 13 14:01:09.154232 kubelet[2178]: I1213 14:01:09.154192 2178 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/897efa95-7ebf-4186-848f-39374360a689-xtables-lock\") pod \"cilium-vw2z8\" (UID: \"897efa95-7ebf-4186-848f-39374360a689\") " pod="kube-system/cilium-vw2z8" Dec 13 14:01:09.154304 kubelet[2178]: I1213 14:01:09.154253 2178 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/897efa95-7ebf-4186-848f-39374360a689-cilium-ipsec-secrets\") pod \"cilium-vw2z8\" (UID: \"897efa95-7ebf-4186-848f-39374360a689\") " pod="kube-system/cilium-vw2z8" Dec 13 14:01:09.154304 kubelet[2178]: I1213 14:01:09.154289 2178 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/897efa95-7ebf-4186-848f-39374360a689-hostproc\") pod \"cilium-vw2z8\" (UID: \"897efa95-7ebf-4186-848f-39374360a689\") " pod="kube-system/cilium-vw2z8" Dec 13 14:01:09.154348 kubelet[2178]: I1213 14:01:09.154311 2178 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/897efa95-7ebf-4186-848f-39374360a689-hubble-tls\") pod \"cilium-vw2z8\" (UID: \"897efa95-7ebf-4186-848f-39374360a689\") " pod="kube-system/cilium-vw2z8" Dec 13 14:01:09.154348 kubelet[2178]: I1213 14:01:09.154331 2178 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/897efa95-7ebf-4186-848f-39374360a689-host-proc-sys-net\") pod \"cilium-vw2z8\" (UID: \"897efa95-7ebf-4186-848f-39374360a689\") " pod="kube-system/cilium-vw2z8" Dec 13 14:01:09.154412 kubelet[2178]: I1213 14:01:09.154400 2178 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: 
\"kubernetes.io/host-path/897efa95-7ebf-4186-848f-39374360a689-bpf-maps\") pod \"cilium-vw2z8\" (UID: \"897efa95-7ebf-4186-848f-39374360a689\") " pod="kube-system/cilium-vw2z8" Dec 13 14:01:09.154438 kubelet[2178]: I1213 14:01:09.154422 2178 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/897efa95-7ebf-4186-848f-39374360a689-cilium-cgroup\") pod \"cilium-vw2z8\" (UID: \"897efa95-7ebf-4186-848f-39374360a689\") " pod="kube-system/cilium-vw2z8" Dec 13 14:01:09.154487 kubelet[2178]: I1213 14:01:09.154469 2178 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/897efa95-7ebf-4186-848f-39374360a689-cni-path\") pod \"cilium-vw2z8\" (UID: \"897efa95-7ebf-4186-848f-39374360a689\") " pod="kube-system/cilium-vw2z8" Dec 13 14:01:09.154519 kubelet[2178]: I1213 14:01:09.154496 2178 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/897efa95-7ebf-4186-848f-39374360a689-etc-cni-netd\") pod \"cilium-vw2z8\" (UID: \"897efa95-7ebf-4186-848f-39374360a689\") " pod="kube-system/cilium-vw2z8" Dec 13 14:01:09.154543 kubelet[2178]: I1213 14:01:09.154538 2178 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/897efa95-7ebf-4186-848f-39374360a689-host-proc-sys-kernel\") pod \"cilium-vw2z8\" (UID: \"897efa95-7ebf-4186-848f-39374360a689\") " pod="kube-system/cilium-vw2z8" Dec 13 14:01:09.154570 kubelet[2178]: I1213 14:01:09.154558 2178 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s5jc2\" (UniqueName: \"kubernetes.io/projected/897efa95-7ebf-4186-848f-39374360a689-kube-api-access-s5jc2\") pod \"cilium-vw2z8\" (UID: \"897efa95-7ebf-4186-848f-39374360a689\") " pod="kube-system/cilium-vw2z8" Dec 13 14:01:09.304120 sshd[3781]: pam_unix(sshd:session): session closed for user core Dec 13 14:01:09.306605 systemd[1]: Started sshd@25-10.0.0.38:22-10.0.0.1:40378.service. Dec 13 14:01:09.309727 systemd[1]: sshd@24-10.0.0.38:22-10.0.0.1:40374.service: Deactivated successfully. Dec 13 14:01:09.310497 systemd[1]: session-25.scope: Deactivated successfully. Dec 13 14:01:09.311642 systemd-logind[1308]: Session 25 logged out. Waiting for processes to exit. Dec 13 14:01:09.317315 systemd-logind[1308]: Removed session 25. Dec 13 14:01:09.319755 kubelet[2178]: E1213 14:01:09.319729 2178 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:01:09.321484 env[1323]: time="2024-12-13T14:01:09.320676348Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-vw2z8,Uid:897efa95-7ebf-4186-848f-39374360a689,Namespace:kube-system,Attempt:0,}" Dec 13 14:01:09.343309 env[1323]: time="2024-12-13T14:01:09.343150040Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:01:09.343309 env[1323]: time="2024-12-13T14:01:09.343254118Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:01:09.343309 env[1323]: time="2024-12-13T14:01:09.343282718Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:01:09.343637 env[1323]: time="2024-12-13T14:01:09.343596714Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/a76dc756447711bcd76b12e2971dfd43533f9d94ec5f8305ea25cf0530c133d8 pid=3811 runtime=io.containerd.runc.v2 Dec 13 14:01:09.359992 sshd[3799]: Accepted publickey for core from 10.0.0.1 port 40378 ssh2: RSA SHA256:/HJyHm5Z3TKV0xVrRefgtheJNUHxRnoHBht1EzpqsE0 Dec 13 14:01:09.361821 sshd[3799]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:01:09.367915 systemd-logind[1308]: New session 26 of user core. Dec 13 14:01:09.368498 systemd[1]: Started session-26.scope. Dec 13 14:01:09.390099 env[1323]: time="2024-12-13T14:01:09.390041357Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-vw2z8,Uid:897efa95-7ebf-4186-848f-39374360a689,Namespace:kube-system,Attempt:0,} returns sandbox id \"a76dc756447711bcd76b12e2971dfd43533f9d94ec5f8305ea25cf0530c133d8\"" Dec 13 14:01:09.390829 kubelet[2178]: E1213 14:01:09.390796 2178 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:01:09.394696 env[1323]: time="2024-12-13T14:01:09.394654374Z" level=info msg="CreateContainer within sandbox \"a76dc756447711bcd76b12e2971dfd43533f9d94ec5f8305ea25cf0530c133d8\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Dec 13 14:01:09.408339 env[1323]: time="2024-12-13T14:01:09.408290666Z" level=info msg="CreateContainer within sandbox \"a76dc756447711bcd76b12e2971dfd43533f9d94ec5f8305ea25cf0530c133d8\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"045c6f1d876764777373116dba9a329875f2607cf957472c96128cadae5745f0\"" Dec 13 14:01:09.409574 env[1323]: time="2024-12-13T14:01:09.408938458Z" level=info msg="StartContainer for \"045c6f1d876764777373116dba9a329875f2607cf957472c96128cadae5745f0\"" Dec 13 14:01:09.457531 env[1323]: time="2024-12-13T14:01:09.457480472Z" level=info msg="StartContainer for \"045c6f1d876764777373116dba9a329875f2607cf957472c96128cadae5745f0\" returns successfully" Dec 13 14:01:09.511038 env[1323]: time="2024-12-13T14:01:09.510970178Z" level=info msg="shim disconnected" id=045c6f1d876764777373116dba9a329875f2607cf957472c96128cadae5745f0 Dec 13 14:01:09.511038 env[1323]: time="2024-12-13T14:01:09.511026697Z" level=warning msg="cleaning up after shim disconnected" id=045c6f1d876764777373116dba9a329875f2607cf957472c96128cadae5745f0 namespace=k8s.io Dec 13 14:01:09.511038 env[1323]: time="2024-12-13T14:01:09.511039257Z" level=info msg="cleaning up dead shim" Dec 13 14:01:09.520229 env[1323]: time="2024-12-13T14:01:09.520170212Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:01:09Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3901 runtime=io.containerd.runc.v2\n" Dec 13 14:01:09.751249 env[1323]: time="2024-12-13T14:01:09.750744010Z" level=info msg="StopPodSandbox for \"a76dc756447711bcd76b12e2971dfd43533f9d94ec5f8305ea25cf0530c133d8\"" Dec 13 14:01:09.751249 env[1323]: time="2024-12-13T14:01:09.750805929Z" level=info msg="Container to stop \"045c6f1d876764777373116dba9a329875f2607cf957472c96128cadae5745f0\" must be in running or 
unknown state, current state \"CONTAINER_EXITED\"" Dec 13 14:01:09.776819 env[1323]: time="2024-12-13T14:01:09.776767213Z" level=info msg="shim disconnected" id=a76dc756447711bcd76b12e2971dfd43533f9d94ec5f8305ea25cf0530c133d8 Dec 13 14:01:09.777516 env[1323]: time="2024-12-13T14:01:09.777491283Z" level=warning msg="cleaning up after shim disconnected" id=a76dc756447711bcd76b12e2971dfd43533f9d94ec5f8305ea25cf0530c133d8 namespace=k8s.io Dec 13 14:01:09.777611 env[1323]: time="2024-12-13T14:01:09.777597242Z" level=info msg="cleaning up dead shim" Dec 13 14:01:09.785786 env[1323]: time="2024-12-13T14:01:09.785749450Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:01:09Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3934 runtime=io.containerd.runc.v2\n" Dec 13 14:01:09.786166 env[1323]: time="2024-12-13T14:01:09.786137444Z" level=info msg="TearDown network for sandbox \"a76dc756447711bcd76b12e2971dfd43533f9d94ec5f8305ea25cf0530c133d8\" successfully" Dec 13 14:01:09.786278 env[1323]: time="2024-12-13T14:01:09.786255003Z" level=info msg="StopPodSandbox for \"a76dc756447711bcd76b12e2971dfd43533f9d94ec5f8305ea25cf0530c133d8\" returns successfully" Dec 13 14:01:09.860437 kubelet[2178]: I1213 14:01:09.860359 2178 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/897efa95-7ebf-4186-848f-39374360a689-hostproc\") pod \"897efa95-7ebf-4186-848f-39374360a689\" (UID: \"897efa95-7ebf-4186-848f-39374360a689\") " Dec 13 14:01:09.860437 kubelet[2178]: I1213 14:01:09.860430 2178 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/897efa95-7ebf-4186-848f-39374360a689-lib-modules\") pod \"897efa95-7ebf-4186-848f-39374360a689\" (UID: \"897efa95-7ebf-4186-848f-39374360a689\") " Dec 13 14:01:09.860612 kubelet[2178]: I1213 14:01:09.860476 2178 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/897efa95-7ebf-4186-848f-39374360a689-cilium-ipsec-secrets\") pod \"897efa95-7ebf-4186-848f-39374360a689\" (UID: \"897efa95-7ebf-4186-848f-39374360a689\") " Dec 13 14:01:09.860612 kubelet[2178]: I1213 14:01:09.860498 2178 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/897efa95-7ebf-4186-848f-39374360a689-cni-path\") pod \"897efa95-7ebf-4186-848f-39374360a689\" (UID: \"897efa95-7ebf-4186-848f-39374360a689\") " Dec 13 14:01:09.860612 kubelet[2178]: I1213 14:01:09.860494 2178 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/897efa95-7ebf-4186-848f-39374360a689-hostproc" (OuterVolumeSpecName: "hostproc") pod "897efa95-7ebf-4186-848f-39374360a689" (UID: "897efa95-7ebf-4186-848f-39374360a689"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:01:09.860612 kubelet[2178]: I1213 14:01:09.860520 2178 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/897efa95-7ebf-4186-848f-39374360a689-cilium-config-path\") pod \"897efa95-7ebf-4186-848f-39374360a689\" (UID: \"897efa95-7ebf-4186-848f-39374360a689\") " Dec 13 14:01:09.860612 kubelet[2178]: I1213 14:01:09.860539 2178 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/897efa95-7ebf-4186-848f-39374360a689-host-proc-sys-net\") pod \"897efa95-7ebf-4186-848f-39374360a689\" (UID: \"897efa95-7ebf-4186-848f-39374360a689\") " Dec 13 14:01:09.860612 kubelet[2178]: I1213 14:01:09.860565 2178 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/897efa95-7ebf-4186-848f-39374360a689-host-proc-sys-kernel\") pod \"897efa95-7ebf-4186-848f-39374360a689\" (UID: \"897efa95-7ebf-4186-848f-39374360a689\") " Dec 13 14:01:09.860812 kubelet[2178]: I1213 14:01:09.860588 2178 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s5jc2\" (UniqueName: \"kubernetes.io/projected/897efa95-7ebf-4186-848f-39374360a689-kube-api-access-s5jc2\") pod \"897efa95-7ebf-4186-848f-39374360a689\" (UID: \"897efa95-7ebf-4186-848f-39374360a689\") " Dec 13 14:01:09.860812 kubelet[2178]: I1213 14:01:09.860606 2178 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/897efa95-7ebf-4186-848f-39374360a689-cilium-cgroup\") pod \"897efa95-7ebf-4186-848f-39374360a689\" (UID: \"897efa95-7ebf-4186-848f-39374360a689\") " Dec 13 14:01:09.860812 kubelet[2178]: I1213 14:01:09.860628 2178 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/897efa95-7ebf-4186-848f-39374360a689-cilium-run\") pod \"897efa95-7ebf-4186-848f-39374360a689\" (UID: \"897efa95-7ebf-4186-848f-39374360a689\") " Dec 13 14:01:09.860812 kubelet[2178]: I1213 14:01:09.860646 2178 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/897efa95-7ebf-4186-848f-39374360a689-etc-cni-netd\") pod \"897efa95-7ebf-4186-848f-39374360a689\" (UID: \"897efa95-7ebf-4186-848f-39374360a689\") " Dec 13 14:01:09.860812 kubelet[2178]: I1213 14:01:09.860668 2178 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/897efa95-7ebf-4186-848f-39374360a689-clustermesh-secrets\") pod \"897efa95-7ebf-4186-848f-39374360a689\" (UID: \"897efa95-7ebf-4186-848f-39374360a689\") " Dec 13 14:01:09.860812 kubelet[2178]: I1213 14:01:09.860686 2178 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/897efa95-7ebf-4186-848f-39374360a689-hubble-tls\") pod \"897efa95-7ebf-4186-848f-39374360a689\" (UID: \"897efa95-7ebf-4186-848f-39374360a689\") " Dec 13 14:01:09.860954 kubelet[2178]: I1213 14:01:09.860709 2178 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/897efa95-7ebf-4186-848f-39374360a689-bpf-maps\") pod \"897efa95-7ebf-4186-848f-39374360a689\" (UID: 
\"897efa95-7ebf-4186-848f-39374360a689\") " Dec 13 14:01:09.860954 kubelet[2178]: I1213 14:01:09.860738 2178 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/897efa95-7ebf-4186-848f-39374360a689-xtables-lock\") pod \"897efa95-7ebf-4186-848f-39374360a689\" (UID: \"897efa95-7ebf-4186-848f-39374360a689\") " Dec 13 14:01:09.860954 kubelet[2178]: I1213 14:01:09.860764 2178 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/897efa95-7ebf-4186-848f-39374360a689-hostproc\") on node \"localhost\" DevicePath \"\"" Dec 13 14:01:09.860954 kubelet[2178]: I1213 14:01:09.860800 2178 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/897efa95-7ebf-4186-848f-39374360a689-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "897efa95-7ebf-4186-848f-39374360a689" (UID: "897efa95-7ebf-4186-848f-39374360a689"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:01:09.860954 kubelet[2178]: I1213 14:01:09.860822 2178 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/897efa95-7ebf-4186-848f-39374360a689-cni-path" (OuterVolumeSpecName: "cni-path") pod "897efa95-7ebf-4186-848f-39374360a689" (UID: "897efa95-7ebf-4186-848f-39374360a689"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:01:09.863143 kubelet[2178]: I1213 14:01:09.861125 2178 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/897efa95-7ebf-4186-848f-39374360a689-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "897efa95-7ebf-4186-848f-39374360a689" (UID: "897efa95-7ebf-4186-848f-39374360a689"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:01:09.863143 kubelet[2178]: I1213 14:01:09.861163 2178 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/897efa95-7ebf-4186-848f-39374360a689-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "897efa95-7ebf-4186-848f-39374360a689" (UID: "897efa95-7ebf-4186-848f-39374360a689"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:01:09.863143 kubelet[2178]: I1213 14:01:09.861183 2178 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/897efa95-7ebf-4186-848f-39374360a689-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "897efa95-7ebf-4186-848f-39374360a689" (UID: "897efa95-7ebf-4186-848f-39374360a689"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:01:09.863143 kubelet[2178]: I1213 14:01:09.861200 2178 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/897efa95-7ebf-4186-848f-39374360a689-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "897efa95-7ebf-4186-848f-39374360a689" (UID: "897efa95-7ebf-4186-848f-39374360a689"). InnerVolumeSpecName "host-proc-sys-kernel". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:01:09.863143 kubelet[2178]: I1213 14:01:09.861732 2178 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/897efa95-7ebf-4186-848f-39374360a689-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "897efa95-7ebf-4186-848f-39374360a689" (UID: "897efa95-7ebf-4186-848f-39374360a689"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:01:09.863417 kubelet[2178]: I1213 14:01:09.862363 2178 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/897efa95-7ebf-4186-848f-39374360a689-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "897efa95-7ebf-4186-848f-39374360a689" (UID: "897efa95-7ebf-4186-848f-39374360a689"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:01:09.863417 kubelet[2178]: I1213 14:01:09.862410 2178 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/897efa95-7ebf-4186-848f-39374360a689-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "897efa95-7ebf-4186-848f-39374360a689" (UID: "897efa95-7ebf-4186-848f-39374360a689"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:01:09.863417 kubelet[2178]: I1213 14:01:09.863064 2178 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/897efa95-7ebf-4186-848f-39374360a689-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "897efa95-7ebf-4186-848f-39374360a689" (UID: "897efa95-7ebf-4186-848f-39374360a689"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 14:01:09.863855 kubelet[2178]: I1213 14:01:09.863831 2178 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/897efa95-7ebf-4186-848f-39374360a689-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "897efa95-7ebf-4186-848f-39374360a689" (UID: "897efa95-7ebf-4186-848f-39374360a689"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 14:01:09.864002 kubelet[2178]: I1213 14:01:09.863978 2178 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/897efa95-7ebf-4186-848f-39374360a689-kube-api-access-s5jc2" (OuterVolumeSpecName: "kube-api-access-s5jc2") pod "897efa95-7ebf-4186-848f-39374360a689" (UID: "897efa95-7ebf-4186-848f-39374360a689"). InnerVolumeSpecName "kube-api-access-s5jc2". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 14:01:09.864199 kubelet[2178]: I1213 14:01:09.864175 2178 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/897efa95-7ebf-4186-848f-39374360a689-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "897efa95-7ebf-4186-848f-39374360a689" (UID: "897efa95-7ebf-4186-848f-39374360a689"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 14:01:09.866420 kubelet[2178]: I1213 14:01:09.865929 2178 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/897efa95-7ebf-4186-848f-39374360a689-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "897efa95-7ebf-4186-848f-39374360a689" (UID: "897efa95-7ebf-4186-848f-39374360a689"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 14:01:09.961473 kubelet[2178]: I1213 14:01:09.961431 2178 reconciler_common.go:300] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/897efa95-7ebf-4186-848f-39374360a689-cilium-ipsec-secrets\") on node \"localhost\" DevicePath \"\"" Dec 13 14:01:09.961473 kubelet[2178]: I1213 14:01:09.961478 2178 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/897efa95-7ebf-4186-848f-39374360a689-lib-modules\") on node \"localhost\" DevicePath \"\"" Dec 13 14:01:09.961640 kubelet[2178]: I1213 14:01:09.961498 2178 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/897efa95-7ebf-4186-848f-39374360a689-cni-path\") on node \"localhost\" DevicePath \"\"" Dec 13 14:01:09.961640 kubelet[2178]: I1213 14:01:09.961519 2178 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/897efa95-7ebf-4186-848f-39374360a689-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Dec 13 14:01:09.961640 kubelet[2178]: I1213 14:01:09.961541 2178 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/897efa95-7ebf-4186-848f-39374360a689-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Dec 13 14:01:09.961640 kubelet[2178]: I1213 14:01:09.961559 2178 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/897efa95-7ebf-4186-848f-39374360a689-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Dec 13 14:01:09.961640 kubelet[2178]: I1213 14:01:09.961576 2178 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/897efa95-7ebf-4186-848f-39374360a689-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Dec 13 14:01:09.961640 kubelet[2178]: I1213 14:01:09.961593 2178 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-s5jc2\" (UniqueName: \"kubernetes.io/projected/897efa95-7ebf-4186-848f-39374360a689-kube-api-access-s5jc2\") on node \"localhost\" DevicePath \"\"" Dec 13 14:01:09.961640 kubelet[2178]: I1213 14:01:09.961602 2178 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/897efa95-7ebf-4186-848f-39374360a689-cilium-run\") on node \"localhost\" DevicePath \"\"" Dec 13 14:01:09.961640 kubelet[2178]: I1213 14:01:09.961611 2178 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/897efa95-7ebf-4186-848f-39374360a689-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Dec 13 14:01:09.961833 kubelet[2178]: I1213 14:01:09.961622 2178 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/897efa95-7ebf-4186-848f-39374360a689-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Dec 13 14:01:09.961833 kubelet[2178]: I1213 14:01:09.961633 2178 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/897efa95-7ebf-4186-848f-39374360a689-hubble-tls\") on node \"localhost\" DevicePath \"\"" Dec 13 14:01:09.961833 kubelet[2178]: I1213 14:01:09.961642 2178 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/897efa95-7ebf-4186-848f-39374360a689-bpf-maps\") on node 
\"localhost\" DevicePath \"\"" Dec 13 14:01:09.961833 kubelet[2178]: I1213 14:01:09.961652 2178 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/897efa95-7ebf-4186-848f-39374360a689-xtables-lock\") on node \"localhost\" DevicePath \"\"" Dec 13 14:01:10.260502 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a76dc756447711bcd76b12e2971dfd43533f9d94ec5f8305ea25cf0530c133d8-shm.mount: Deactivated successfully. Dec 13 14:01:10.260648 systemd[1]: var-lib-kubelet-pods-897efa95\x2d7ebf\x2d4186\x2d848f\x2d39374360a689-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2ds5jc2.mount: Deactivated successfully. Dec 13 14:01:10.260734 systemd[1]: var-lib-kubelet-pods-897efa95\x2d7ebf\x2d4186\x2d848f\x2d39374360a689-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Dec 13 14:01:10.260814 systemd[1]: var-lib-kubelet-pods-897efa95\x2d7ebf\x2d4186\x2d848f\x2d39374360a689-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. Dec 13 14:01:10.260889 systemd[1]: var-lib-kubelet-pods-897efa95\x2d7ebf\x2d4186\x2d848f\x2d39374360a689-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Dec 13 14:01:10.755181 kubelet[2178]: I1213 14:01:10.755147 2178 scope.go:117] "RemoveContainer" containerID="045c6f1d876764777373116dba9a329875f2607cf957472c96128cadae5745f0" Dec 13 14:01:10.760403 env[1323]: time="2024-12-13T14:01:10.760343336Z" level=info msg="RemoveContainer for \"045c6f1d876764777373116dba9a329875f2607cf957472c96128cadae5745f0\"" Dec 13 14:01:10.764048 env[1323]: time="2024-12-13T14:01:10.764002738Z" level=info msg="RemoveContainer for \"045c6f1d876764777373116dba9a329875f2607cf957472c96128cadae5745f0\" returns successfully" Dec 13 14:01:10.803670 kubelet[2178]: I1213 14:01:10.803627 2178 topology_manager.go:215] "Topology Admit Handler" podUID="c026bcbd-2364-4ab4-ba16-1da432e4879d" podNamespace="kube-system" podName="cilium-q57gb" Dec 13 14:01:10.803856 kubelet[2178]: E1213 14:01:10.803843 2178 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="897efa95-7ebf-4186-848f-39374360a689" containerName="mount-cgroup" Dec 13 14:01:10.803954 kubelet[2178]: I1213 14:01:10.803941 2178 memory_manager.go:354] "RemoveStaleState removing state" podUID="897efa95-7ebf-4186-848f-39374360a689" containerName="mount-cgroup" Dec 13 14:01:10.869192 kubelet[2178]: I1213 14:01:10.869141 2178 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c026bcbd-2364-4ab4-ba16-1da432e4879d-bpf-maps\") pod \"cilium-q57gb\" (UID: \"c026bcbd-2364-4ab4-ba16-1da432e4879d\") " pod="kube-system/cilium-q57gb" Dec 13 14:01:10.869192 kubelet[2178]: I1213 14:01:10.869198 2178 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c026bcbd-2364-4ab4-ba16-1da432e4879d-cilium-cgroup\") pod \"cilium-q57gb\" (UID: \"c026bcbd-2364-4ab4-ba16-1da432e4879d\") " pod="kube-system/cilium-q57gb" Dec 13 14:01:10.869407 kubelet[2178]: I1213 14:01:10.869235 2178 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c026bcbd-2364-4ab4-ba16-1da432e4879d-host-proc-sys-net\") pod \"cilium-q57gb\" (UID: \"c026bcbd-2364-4ab4-ba16-1da432e4879d\") " pod="kube-system/cilium-q57gb" Dec 13 
14:01:10.869407 kubelet[2178]: I1213 14:01:10.869261 2178 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c026bcbd-2364-4ab4-ba16-1da432e4879d-host-proc-sys-kernel\") pod \"cilium-q57gb\" (UID: \"c026bcbd-2364-4ab4-ba16-1da432e4879d\") " pod="kube-system/cilium-q57gb" Dec 13 14:01:10.869407 kubelet[2178]: I1213 14:01:10.869280 2178 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6fm7x\" (UniqueName: \"kubernetes.io/projected/c026bcbd-2364-4ab4-ba16-1da432e4879d-kube-api-access-6fm7x\") pod \"cilium-q57gb\" (UID: \"c026bcbd-2364-4ab4-ba16-1da432e4879d\") " pod="kube-system/cilium-q57gb" Dec 13 14:01:10.869407 kubelet[2178]: I1213 14:01:10.869310 2178 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c026bcbd-2364-4ab4-ba16-1da432e4879d-cilium-run\") pod \"cilium-q57gb\" (UID: \"c026bcbd-2364-4ab4-ba16-1da432e4879d\") " pod="kube-system/cilium-q57gb" Dec 13 14:01:10.869407 kubelet[2178]: I1213 14:01:10.869332 2178 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c026bcbd-2364-4ab4-ba16-1da432e4879d-clustermesh-secrets\") pod \"cilium-q57gb\" (UID: \"c026bcbd-2364-4ab4-ba16-1da432e4879d\") " pod="kube-system/cilium-q57gb" Dec 13 14:01:10.869540 kubelet[2178]: I1213 14:01:10.869352 2178 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c026bcbd-2364-4ab4-ba16-1da432e4879d-hostproc\") pod \"cilium-q57gb\" (UID: \"c026bcbd-2364-4ab4-ba16-1da432e4879d\") " pod="kube-system/cilium-q57gb" Dec 13 14:01:10.869540 kubelet[2178]: I1213 14:01:10.869387 2178 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c026bcbd-2364-4ab4-ba16-1da432e4879d-lib-modules\") pod \"cilium-q57gb\" (UID: \"c026bcbd-2364-4ab4-ba16-1da432e4879d\") " pod="kube-system/cilium-q57gb" Dec 13 14:01:10.869540 kubelet[2178]: I1213 14:01:10.869407 2178 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c026bcbd-2364-4ab4-ba16-1da432e4879d-etc-cni-netd\") pod \"cilium-q57gb\" (UID: \"c026bcbd-2364-4ab4-ba16-1da432e4879d\") " pod="kube-system/cilium-q57gb" Dec 13 14:01:10.869540 kubelet[2178]: I1213 14:01:10.869427 2178 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c026bcbd-2364-4ab4-ba16-1da432e4879d-xtables-lock\") pod \"cilium-q57gb\" (UID: \"c026bcbd-2364-4ab4-ba16-1da432e4879d\") " pod="kube-system/cilium-q57gb" Dec 13 14:01:10.869540 kubelet[2178]: I1213 14:01:10.869453 2178 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/c026bcbd-2364-4ab4-ba16-1da432e4879d-cilium-ipsec-secrets\") pod \"cilium-q57gb\" (UID: \"c026bcbd-2364-4ab4-ba16-1da432e4879d\") " pod="kube-system/cilium-q57gb" Dec 13 14:01:10.869540 kubelet[2178]: I1213 14:01:10.869474 2178 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" 
(UniqueName: \"kubernetes.io/projected/c026bcbd-2364-4ab4-ba16-1da432e4879d-hubble-tls\") pod \"cilium-q57gb\" (UID: \"c026bcbd-2364-4ab4-ba16-1da432e4879d\") " pod="kube-system/cilium-q57gb" Dec 13 14:01:10.869680 kubelet[2178]: I1213 14:01:10.869493 2178 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c026bcbd-2364-4ab4-ba16-1da432e4879d-cni-path\") pod \"cilium-q57gb\" (UID: \"c026bcbd-2364-4ab4-ba16-1da432e4879d\") " pod="kube-system/cilium-q57gb" Dec 13 14:01:10.869680 kubelet[2178]: I1213 14:01:10.869520 2178 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c026bcbd-2364-4ab4-ba16-1da432e4879d-cilium-config-path\") pod \"cilium-q57gb\" (UID: \"c026bcbd-2364-4ab4-ba16-1da432e4879d\") " pod="kube-system/cilium-q57gb" Dec 13 14:01:11.109583 kubelet[2178]: E1213 14:01:11.109458 2178 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:01:11.110517 env[1323]: time="2024-12-13T14:01:11.110479054Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-q57gb,Uid:c026bcbd-2364-4ab4-ba16-1da432e4879d,Namespace:kube-system,Attempt:0,}" Dec 13 14:01:11.123324 env[1323]: time="2024-12-13T14:01:11.123253043Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:01:11.123324 env[1323]: time="2024-12-13T14:01:11.123294643Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:01:11.123324 env[1323]: time="2024-12-13T14:01:11.123305282Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:01:11.123525 env[1323]: time="2024-12-13T14:01:11.123460121Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/d86756a26e091b016ecfd199e34ecf39171b19a491ab78414d8b68ce0bc30c1d pid=3964 runtime=io.containerd.runc.v2 Dec 13 14:01:11.159038 env[1323]: time="2024-12-13T14:01:11.158991747Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-q57gb,Uid:c026bcbd-2364-4ab4-ba16-1da432e4879d,Namespace:kube-system,Attempt:0,} returns sandbox id \"d86756a26e091b016ecfd199e34ecf39171b19a491ab78414d8b68ce0bc30c1d\"" Dec 13 14:01:11.159640 kubelet[2178]: E1213 14:01:11.159617 2178 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:01:11.161660 env[1323]: time="2024-12-13T14:01:11.161630528Z" level=info msg="CreateContainer within sandbox \"d86756a26e091b016ecfd199e34ecf39171b19a491ab78414d8b68ce0bc30c1d\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Dec 13 14:01:11.180593 env[1323]: time="2024-12-13T14:01:11.180545433Z" level=info msg="CreateContainer within sandbox \"d86756a26e091b016ecfd199e34ecf39171b19a491ab78414d8b68ce0bc30c1d\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"2dba9f344d56b1977f08bbbbc1c885e057321030e1b11ad4e0ec9c205be36d07\"" Dec 13 14:01:11.181153 env[1323]: time="2024-12-13T14:01:11.181118189Z" level=info msg="StartContainer for \"2dba9f344d56b1977f08bbbbc1c885e057321030e1b11ad4e0ec9c205be36d07\"" Dec 13 14:01:11.223431 env[1323]: time="2024-12-13T14:01:11.222441814Z" level=info msg="StartContainer for \"2dba9f344d56b1977f08bbbbc1c885e057321030e1b11ad4e0ec9c205be36d07\" returns successfully" Dec 13 14:01:11.251679 env[1323]: time="2024-12-13T14:01:11.251628365Z" level=info msg="shim disconnected" id=2dba9f344d56b1977f08bbbbc1c885e057321030e1b11ad4e0ec9c205be36d07 Dec 13 14:01:11.251679 env[1323]: time="2024-12-13T14:01:11.251675365Z" level=warning msg="cleaning up after shim disconnected" id=2dba9f344d56b1977f08bbbbc1c885e057321030e1b11ad4e0ec9c205be36d07 namespace=k8s.io Dec 13 14:01:11.251679 env[1323]: time="2024-12-13T14:01:11.251685165Z" level=info msg="cleaning up dead shim" Dec 13 14:01:11.258351 env[1323]: time="2024-12-13T14:01:11.258317517Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:01:11Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4050 runtime=io.containerd.runc.v2\n" Dec 13 14:01:11.515336 kubelet[2178]: I1213 14:01:11.515301 2178 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="897efa95-7ebf-4186-848f-39374360a689" path="/var/lib/kubelet/pods/897efa95-7ebf-4186-848f-39374360a689/volumes" Dec 13 14:01:11.585039 kubelet[2178]: E1213 14:01:11.585005 2178 kubelet.go:2892] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Dec 13 14:01:11.758468 kubelet[2178]: E1213 14:01:11.758439 2178 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:01:11.761335 env[1323]: time="2024-12-13T14:01:11.760595526Z" level=info msg="CreateContainer within sandbox \"d86756a26e091b016ecfd199e34ecf39171b19a491ab78414d8b68ce0bc30c1d\" for container 
&ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Dec 13 14:01:11.787247 env[1323]: time="2024-12-13T14:01:11.786539860Z" level=info msg="CreateContainer within sandbox \"d86756a26e091b016ecfd199e34ecf39171b19a491ab78414d8b68ce0bc30c1d\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"c6586ff09739e1eea9dcf4ff6810862cb6cf527fd8da3de0d4b11fb7b8db4486\"" Dec 13 14:01:11.786736 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3132463959.mount: Deactivated successfully. Dec 13 14:01:11.790916 env[1323]: time="2024-12-13T14:01:11.790547672Z" level=info msg="StartContainer for \"c6586ff09739e1eea9dcf4ff6810862cb6cf527fd8da3de0d4b11fb7b8db4486\"" Dec 13 14:01:11.856593 env[1323]: time="2024-12-13T14:01:11.856541000Z" level=info msg="StartContainer for \"c6586ff09739e1eea9dcf4ff6810862cb6cf527fd8da3de0d4b11fb7b8db4486\" returns successfully" Dec 13 14:01:11.879207 env[1323]: time="2024-12-13T14:01:11.879163438Z" level=info msg="shim disconnected" id=c6586ff09739e1eea9dcf4ff6810862cb6cf527fd8da3de0d4b11fb7b8db4486 Dec 13 14:01:11.879207 env[1323]: time="2024-12-13T14:01:11.879204518Z" level=warning msg="cleaning up after shim disconnected" id=c6586ff09739e1eea9dcf4ff6810862cb6cf527fd8da3de0d4b11fb7b8db4486 namespace=k8s.io Dec 13 14:01:11.879207 env[1323]: time="2024-12-13T14:01:11.879220758Z" level=info msg="cleaning up dead shim" Dec 13 14:01:11.886032 env[1323]: time="2024-12-13T14:01:11.885989549Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:01:11Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4112 runtime=io.containerd.runc.v2\n" Dec 13 14:01:12.260679 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c6586ff09739e1eea9dcf4ff6810862cb6cf527fd8da3de0d4b11fb7b8db4486-rootfs.mount: Deactivated successfully. Dec 13 14:01:12.765065 kubelet[2178]: E1213 14:01:12.764697 2178 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:01:12.772488 env[1323]: time="2024-12-13T14:01:12.769063357Z" level=info msg="CreateContainer within sandbox \"d86756a26e091b016ecfd199e34ecf39171b19a491ab78414d8b68ce0bc30c1d\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Dec 13 14:01:12.779142 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3858876077.mount: Deactivated successfully. Dec 13 14:01:12.785757 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3759629898.mount: Deactivated successfully. 
Dec 13 14:01:12.788039 env[1323]: time="2024-12-13T14:01:12.787994961Z" level=info msg="CreateContainer within sandbox \"d86756a26e091b016ecfd199e34ecf39171b19a491ab78414d8b68ce0bc30c1d\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"ed64de2fd06aa2fee058697122c1092ced94810078169319ca40f5edffdfc75c\"" Dec 13 14:01:12.788512 env[1323]: time="2024-12-13T14:01:12.788484599Z" level=info msg="StartContainer for \"ed64de2fd06aa2fee058697122c1092ced94810078169319ca40f5edffdfc75c\"" Dec 13 14:01:12.850640 env[1323]: time="2024-12-13T14:01:12.850597589Z" level=info msg="StartContainer for \"ed64de2fd06aa2fee058697122c1092ced94810078169319ca40f5edffdfc75c\" returns successfully" Dec 13 14:01:12.878023 env[1323]: time="2024-12-13T14:01:12.877948319Z" level=info msg="shim disconnected" id=ed64de2fd06aa2fee058697122c1092ced94810078169319ca40f5edffdfc75c Dec 13 14:01:12.878023 env[1323]: time="2024-12-13T14:01:12.878021199Z" level=warning msg="cleaning up after shim disconnected" id=ed64de2fd06aa2fee058697122c1092ced94810078169319ca40f5edffdfc75c namespace=k8s.io Dec 13 14:01:12.878251 env[1323]: time="2024-12-13T14:01:12.878032319Z" level=info msg="cleaning up dead shim" Dec 13 14:01:12.892286 env[1323]: time="2024-12-13T14:01:12.892243662Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:01:12Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4169 runtime=io.containerd.runc.v2\n" Dec 13 14:01:13.039945 kubelet[2178]: I1213 14:01:13.039539 2178 setters.go:568] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-12-13T14:01:13Z","lastTransitionTime":"2024-12-13T14:01:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Dec 13 14:01:13.768517 kubelet[2178]: E1213 14:01:13.768488 2178 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:01:13.774210 env[1323]: time="2024-12-13T14:01:13.774146099Z" level=info msg="CreateContainer within sandbox \"d86756a26e091b016ecfd199e34ecf39171b19a491ab78414d8b68ce0bc30c1d\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Dec 13 14:01:13.810798 env[1323]: time="2024-12-13T14:01:13.810754542Z" level=info msg="CreateContainer within sandbox \"d86756a26e091b016ecfd199e34ecf39171b19a491ab78414d8b68ce0bc30c1d\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"9891a149967b7f5eb6a6fa4c754a6f7b412a6a4628a2acd8bc5c7e31d840eb80\"" Dec 13 14:01:13.811466 env[1323]: time="2024-12-13T14:01:13.811418262Z" level=info msg="StartContainer for \"9891a149967b7f5eb6a6fa4c754a6f7b412a6a4628a2acd8bc5c7e31d840eb80\"" Dec 13 14:01:13.857829 env[1323]: time="2024-12-13T14:01:13.857786216Z" level=info msg="StartContainer for \"9891a149967b7f5eb6a6fa4c754a6f7b412a6a4628a2acd8bc5c7e31d840eb80\" returns successfully" Dec 13 14:01:13.874812 env[1323]: time="2024-12-13T14:01:13.874762839Z" level=info msg="shim disconnected" id=9891a149967b7f5eb6a6fa4c754a6f7b412a6a4628a2acd8bc5c7e31d840eb80 Dec 13 14:01:13.874812 env[1323]: time="2024-12-13T14:01:13.874807639Z" level=warning msg="cleaning up after shim disconnected" id=9891a149967b7f5eb6a6fa4c754a6f7b412a6a4628a2acd8bc5c7e31d840eb80 namespace=k8s.io Dec 13 14:01:13.874812 env[1323]: time="2024-12-13T14:01:13.874817119Z" 
level=info msg="cleaning up dead shim" Dec 13 14:01:13.882161 env[1323]: time="2024-12-13T14:01:13.882099872Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:01:13Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4227 runtime=io.containerd.runc.v2\n" Dec 13 14:01:14.260857 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9891a149967b7f5eb6a6fa4c754a6f7b412a6a4628a2acd8bc5c7e31d840eb80-rootfs.mount: Deactivated successfully. Dec 13 14:01:14.513872 kubelet[2178]: E1213 14:01:14.513767 2178 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:01:14.773086 kubelet[2178]: E1213 14:01:14.772152 2178 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:01:14.775634 env[1323]: time="2024-12-13T14:01:14.775577143Z" level=info msg="CreateContainer within sandbox \"d86756a26e091b016ecfd199e34ecf39171b19a491ab78414d8b68ce0bc30c1d\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Dec 13 14:01:14.793396 env[1323]: time="2024-12-13T14:01:14.793316738Z" level=info msg="CreateContainer within sandbox \"d86756a26e091b016ecfd199e34ecf39171b19a491ab78414d8b68ce0bc30c1d\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"5f6c4ba05820f0e2fb358fe88aec999bbda2ba08d51f3a64f2e32290176807ef\"" Dec 13 14:01:14.794551 env[1323]: time="2024-12-13T14:01:14.794525060Z" level=info msg="StartContainer for \"5f6c4ba05820f0e2fb358fe88aec999bbda2ba08d51f3a64f2e32290176807ef\"" Dec 13 14:01:14.846565 env[1323]: time="2024-12-13T14:01:14.846518641Z" level=info msg="StartContainer for \"5f6c4ba05820f0e2fb358fe88aec999bbda2ba08d51f3a64f2e32290176807ef\" returns successfully" Dec 13 14:01:15.102414 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106(gcm-aes-ce))) Dec 13 14:01:15.777115 kubelet[2178]: E1213 14:01:15.776796 2178 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:01:15.791269 kubelet[2178]: I1213 14:01:15.791224 2178 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-q57gb" podStartSLOduration=5.791174368 podStartE2EDuration="5.791174368s" podCreationTimestamp="2024-12-13 14:01:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 14:01:15.790701246 +0000 UTC m=+94.377712616" watchObservedRunningTime="2024-12-13 14:01:15.791174368 +0000 UTC m=+94.378185738" Dec 13 14:01:17.110665 kubelet[2178]: E1213 14:01:17.110631 2178 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:01:17.671080 systemd[1]: run-containerd-runc-k8s.io-5f6c4ba05820f0e2fb358fe88aec999bbda2ba08d51f3a64f2e32290176807ef-runc.9BQu8W.mount: Deactivated successfully. 
Dec 13 14:01:17.751292 systemd-networkd[1101]: lxc_health: Link UP Dec 13 14:01:17.759468 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Dec 13 14:01:17.760138 systemd-networkd[1101]: lxc_health: Gained carrier Dec 13 14:01:18.956761 systemd-networkd[1101]: lxc_health: Gained IPv6LL Dec 13 14:01:19.112120 kubelet[2178]: E1213 14:01:19.112042 2178 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:01:19.788370 kubelet[2178]: E1213 14:01:19.788178 2178 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:01:20.789566 kubelet[2178]: E1213 14:01:20.789524 2178 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:01:21.977868 systemd[1]: run-containerd-runc-k8s.io-5f6c4ba05820f0e2fb358fe88aec999bbda2ba08d51f3a64f2e32290176807ef-runc.G5yzO7.mount: Deactivated successfully. Dec 13 14:01:24.094229 systemd[1]: run-containerd-runc-k8s.io-5f6c4ba05820f0e2fb358fe88aec999bbda2ba08d51f3a64f2e32290176807ef-runc.dsKIJj.mount: Deactivated successfully. Dec 13 14:01:24.167586 sshd[3799]: pam_unix(sshd:session): session closed for user core Dec 13 14:01:24.170431 systemd-logind[1308]: Session 26 logged out. Waiting for processes to exit. Dec 13 14:01:24.170649 systemd[1]: sshd@25-10.0.0.38:22-10.0.0.1:40378.service: Deactivated successfully. Dec 13 14:01:24.171403 systemd[1]: session-26.scope: Deactivated successfully. Dec 13 14:01:24.171756 systemd-logind[1308]: Removed session 26. Dec 13 14:01:25.513379 kubelet[2178]: E1213 14:01:25.513323 2178 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"