Feb 9 18:41:46.719218 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Feb 9 18:41:46.719237 kernel: Linux version 5.15.148-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP PREEMPT Fri Feb 9 17:24:35 -00 2024
Feb 9 18:41:46.719245 kernel: efi: EFI v2.70 by EDK II
Feb 9 18:41:46.719263 kernel: efi: SMBIOS 3.0=0xd9260000 ACPI 2.0=0xd9240000 MEMATTR=0xda32b018 RNG=0xd9220018 MEMRESERVE=0xd9521c18
Feb 9 18:41:46.719268 kernel: random: crng init done
Feb 9 18:41:46.719273 kernel: ACPI: Early table checksum verification disabled
Feb 9 18:41:46.719280 kernel: ACPI: RSDP 0x00000000D9240000 000024 (v02 BOCHS )
Feb 9 18:41:46.719287 kernel: ACPI: XSDT 0x00000000D9230000 000064 (v01 BOCHS BXPC 00000001 01000013)
Feb 9 18:41:46.719292 kernel: ACPI: FACP 0x00000000D91E0000 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Feb 9 18:41:46.719298 kernel: ACPI: DSDT 0x00000000D91F0000 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Feb 9 18:41:46.719303 kernel: ACPI: APIC 0x00000000D91D0000 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Feb 9 18:41:46.719309 kernel: ACPI: PPTT 0x00000000D91C0000 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Feb 9 18:41:46.719314 kernel: ACPI: GTDT 0x00000000D91B0000 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Feb 9 18:41:46.719319 kernel: ACPI: MCFG 0x00000000D91A0000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 9 18:41:46.719327 kernel: ACPI: SPCR 0x00000000D9190000 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Feb 9 18:41:46.719333 kernel: ACPI: DBG2 0x00000000D9180000 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Feb 9 18:41:46.719339 kernel: ACPI: IORT 0x00000000D9170000 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Feb 9 18:41:46.719345 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Feb 9 18:41:46.719351 kernel: NUMA: Failed to initialise from firmware
Feb 9 18:41:46.719356 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Feb 9 18:41:46.719362 kernel: NUMA: NODE_DATA [mem 0xdcb0b900-0xdcb10fff]
Feb 9 18:41:46.719368 kernel: Zone ranges:
Feb 9 18:41:46.719374 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Feb 9 18:41:46.719380 kernel: DMA32 empty
Feb 9 18:41:46.719386 kernel: Normal empty
Feb 9 18:41:46.719392 kernel: Movable zone start for each node
Feb 9 18:41:46.719397 kernel: Early memory node ranges
Feb 9 18:41:46.719403 kernel: node 0: [mem 0x0000000040000000-0x00000000d924ffff]
Feb 9 18:41:46.719409 kernel: node 0: [mem 0x00000000d9250000-0x00000000d951ffff]
Feb 9 18:41:46.719414 kernel: node 0: [mem 0x00000000d9520000-0x00000000dc7fffff]
Feb 9 18:41:46.719420 kernel: node 0: [mem 0x00000000dc800000-0x00000000dc88ffff]
Feb 9 18:41:46.719426 kernel: node 0: [mem 0x00000000dc890000-0x00000000dc89ffff]
Feb 9 18:41:46.719431 kernel: node 0: [mem 0x00000000dc8a0000-0x00000000dc9bffff]
Feb 9 18:41:46.719437 kernel: node 0: [mem 0x00000000dc9c0000-0x00000000dcffffff]
Feb 9 18:41:46.719443 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Feb 9 18:41:46.719450 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Feb 9 18:41:46.719456 kernel: psci: probing for conduit method from ACPI.
Feb 9 18:41:46.719461 kernel: psci: PSCIv1.1 detected in firmware.
Feb 9 18:41:46.719467 kernel: psci: Using standard PSCI v0.2 function IDs
Feb 9 18:41:46.719473 kernel: psci: Trusted OS migration not required
Feb 9 18:41:46.719481 kernel: psci: SMC Calling Convention v1.1
Feb 9 18:41:46.719487 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Feb 9 18:41:46.719494 kernel: ACPI: SRAT not present
Feb 9 18:41:46.719500 kernel: percpu: Embedded 29 pages/cpu s79960 r8192 d30632 u118784
Feb 9 18:41:46.719507 kernel: pcpu-alloc: s79960 r8192 d30632 u118784 alloc=29*4096
Feb 9 18:41:46.719513 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Feb 9 18:41:46.719519 kernel: Detected PIPT I-cache on CPU0
Feb 9 18:41:46.719525 kernel: CPU features: detected: GIC system register CPU interface
Feb 9 18:41:46.719531 kernel: CPU features: detected: Hardware dirty bit management
Feb 9 18:41:46.719537 kernel: CPU features: detected: Spectre-v4
Feb 9 18:41:46.719543 kernel: CPU features: detected: Spectre-BHB
Feb 9 18:41:46.719550 kernel: CPU features: kernel page table isolation forced ON by KASLR
Feb 9 18:41:46.719556 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Feb 9 18:41:46.719563 kernel: CPU features: detected: ARM erratum 1418040
Feb 9 18:41:46.719569 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024
Feb 9 18:41:46.719575 kernel: Policy zone: DMA
Feb 9 18:41:46.719582 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=680ffc8c0dfb23738bd19ec96ea37b5bbadfb5cebf23767d1d52c89a6d5c00b4
Feb 9 18:41:46.719588 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Feb 9 18:41:46.719594 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Feb 9 18:41:46.719601 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Feb 9 18:41:46.719607 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Feb 9 18:41:46.719613 kernel: Memory: 2459152K/2572288K available (9792K kernel code, 2092K rwdata, 7556K rodata, 34688K init, 778K bss, 113136K reserved, 0K cma-reserved)
Feb 9 18:41:46.719620 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Feb 9 18:41:46.719626 kernel: trace event string verifier disabled
Feb 9 18:41:46.719633 kernel: rcu: Preemptible hierarchical RCU implementation.
Feb 9 18:41:46.719639 kernel: rcu: RCU event tracing is enabled.
Feb 9 18:41:46.719645 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Feb 9 18:41:46.719652 kernel: Trampoline variant of Tasks RCU enabled.
Feb 9 18:41:46.719658 kernel: Tracing variant of Tasks RCU enabled.
Feb 9 18:41:46.719664 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Feb 9 18:41:46.719670 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Feb 9 18:41:46.719676 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Feb 9 18:41:46.719682 kernel: GICv3: 256 SPIs implemented
Feb 9 18:41:46.719690 kernel: GICv3: 0 Extended SPIs implemented
Feb 9 18:41:46.719696 kernel: GICv3: Distributor has no Range Selector support
Feb 9 18:41:46.719702 kernel: Root IRQ handler: gic_handle_irq
Feb 9 18:41:46.719708 kernel: GICv3: 16 PPIs implemented
Feb 9 18:41:46.719714 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Feb 9 18:41:46.719720 kernel: ACPI: SRAT not present
Feb 9 18:41:46.719726 kernel: ITS [mem 0x08080000-0x0809ffff]
Feb 9 18:41:46.719732 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400b0000 (indirect, esz 8, psz 64K, shr 1)
Feb 9 18:41:46.719739 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400c0000 (flat, esz 8, psz 64K, shr 1)
Feb 9 18:41:46.719745 kernel: GICv3: using LPI property table @0x00000000400d0000
Feb 9 18:41:46.719751 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000000400e0000
Feb 9 18:41:46.719757 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 9 18:41:46.719765 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Feb 9 18:41:46.719771 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Feb 9 18:41:46.719777 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Feb 9 18:41:46.719784 kernel: arm-pv: using stolen time PV
Feb 9 18:41:46.719790 kernel: Console: colour dummy device 80x25
Feb 9 18:41:46.719797 kernel: ACPI: Core revision 20210730
Feb 9 18:41:46.719803 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Feb 9 18:41:46.719810 kernel: pid_max: default: 32768 minimum: 301
Feb 9 18:41:46.719816 kernel: LSM: Security Framework initializing
Feb 9 18:41:46.719822 kernel: SELinux: Initializing.
Feb 9 18:41:46.719830 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 9 18:41:46.719836 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 9 18:41:46.719842 kernel: rcu: Hierarchical SRCU implementation.
Feb 9 18:41:46.719849 kernel: Platform MSI: ITS@0x8080000 domain created
Feb 9 18:41:46.719855 kernel: PCI/MSI: ITS@0x8080000 domain created
Feb 9 18:41:46.719861 kernel: Remapping and enabling EFI services.
Feb 9 18:41:46.719867 kernel: smp: Bringing up secondary CPUs ...
Feb 9 18:41:46.719873 kernel: Detected PIPT I-cache on CPU1
Feb 9 18:41:46.719880 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Feb 9 18:41:46.719887 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000000400f0000
Feb 9 18:41:46.719894 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 9 18:41:46.719901 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Feb 9 18:41:46.719907 kernel: Detected PIPT I-cache on CPU2
Feb 9 18:41:46.719914 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Feb 9 18:41:46.719920 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040100000
Feb 9 18:41:46.719926 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 9 18:41:46.719933 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Feb 9 18:41:46.719939 kernel: Detected PIPT I-cache on CPU3
Feb 9 18:41:46.719945 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Feb 9 18:41:46.719952 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040110000
Feb 9 18:41:46.719964 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 9 18:41:46.719971 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Feb 9 18:41:46.719977 kernel: smp: Brought up 1 node, 4 CPUs
Feb 9 18:41:46.719988 kernel: SMP: Total of 4 processors activated.
Feb 9 18:41:46.719996 kernel: CPU features: detected: 32-bit EL0 Support
Feb 9 18:41:46.720003 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Feb 9 18:41:46.720009 kernel: CPU features: detected: Common not Private translations
Feb 9 18:41:46.720016 kernel: CPU features: detected: CRC32 instructions
Feb 9 18:41:46.720023 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Feb 9 18:41:46.720029 kernel: CPU features: detected: LSE atomic instructions
Feb 9 18:41:46.720036 kernel: CPU features: detected: Privileged Access Never
Feb 9 18:41:46.720044 kernel: CPU features: detected: RAS Extension Support
Feb 9 18:41:46.720051 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Feb 9 18:41:46.720057 kernel: CPU: All CPU(s) started at EL1
Feb 9 18:41:46.720064 kernel: alternatives: patching kernel code
Feb 9 18:41:46.720072 kernel: devtmpfs: initialized
Feb 9 18:41:46.720078 kernel: KASLR enabled
Feb 9 18:41:46.720085 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Feb 9 18:41:46.720092 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Feb 9 18:41:46.720098 kernel: pinctrl core: initialized pinctrl subsystem
Feb 9 18:41:46.720105 kernel: SMBIOS 3.0.0 present.
Feb 9 18:41:46.720111 kernel: DMI: QEMU KVM Virtual Machine, BIOS 0.0.0 02/06/2015
Feb 9 18:41:46.720118 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Feb 9 18:41:46.720125 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Feb 9 18:41:46.720131 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Feb 9 18:41:46.720139 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Feb 9 18:41:46.720146 kernel: audit: initializing netlink subsys (disabled)
Feb 9 18:41:46.720152 kernel: audit: type=2000 audit(0.030:1): state=initialized audit_enabled=0 res=1
Feb 9 18:41:46.720159 kernel: thermal_sys: Registered thermal governor 'step_wise'
Feb 9 18:41:46.720166 kernel: cpuidle: using governor menu
Feb 9 18:41:46.720172 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Feb 9 18:41:46.720179 kernel: ASID allocator initialised with 32768 entries
Feb 9 18:41:46.720185 kernel: ACPI: bus type PCI registered
Feb 9 18:41:46.720192 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Feb 9 18:41:46.720200 kernel: Serial: AMBA PL011 UART driver
Feb 9 18:41:46.720207 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Feb 9 18:41:46.720213 kernel: HugeTLB registered 32.0 MiB page size, pre-allocated 0 pages
Feb 9 18:41:46.720220 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Feb 9 18:41:46.720227 kernel: HugeTLB registered 64.0 KiB page size, pre-allocated 0 pages
Feb 9 18:41:46.720233 kernel: cryptd: max_cpu_qlen set to 1000
Feb 9 18:41:46.720240 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Feb 9 18:41:46.720260 kernel: ACPI: Added _OSI(Module Device)
Feb 9 18:41:46.720268 kernel: ACPI: Added _OSI(Processor Device)
Feb 9 18:41:46.720276 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Feb 9 18:41:46.720283 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Feb 9 18:41:46.720289 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Feb 9 18:41:46.720296 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Feb 9 18:41:46.720303 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Feb 9 18:41:46.720309 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Feb 9 18:41:46.720316 kernel: ACPI: Interpreter enabled
Feb 9 18:41:46.720322 kernel: ACPI: Using GIC for interrupt routing
Feb 9 18:41:46.720332 kernel: ACPI: MCFG table detected, 1 entries
Feb 9 18:41:46.720340 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Feb 9 18:41:46.720346 kernel: printk: console [ttyAMA0] enabled
Feb 9 18:41:46.720353 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Feb 9 18:41:46.720476 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Feb 9 18:41:46.720543 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Feb 9 18:41:46.720606 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Feb 9 18:41:46.720666 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Feb 9 18:41:46.720728 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Feb 9 18:41:46.720737 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Feb 9 18:41:46.720744 kernel: PCI host bridge to bus 0000:00
Feb 9 18:41:46.720811 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Feb 9 18:41:46.720868 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Feb 9 18:41:46.720924 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Feb 9 18:41:46.720990 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Feb 9 18:41:46.721075 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Feb 9 18:41:46.721150 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
Feb 9 18:41:46.721221 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
Feb 9 18:41:46.721299 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
Feb 9 18:41:46.721360 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Feb 9 18:41:46.721420 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Feb 9 18:41:46.721479 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
Feb 9 18:41:46.721543 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
Feb 9 18:41:46.721598 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Feb 9 18:41:46.721650 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Feb 9 18:41:46.721702 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Feb 9 18:41:46.721711 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Feb 9 18:41:46.721718 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Feb 9 18:41:46.721725 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Feb 9 18:41:46.721733 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Feb 9 18:41:46.721740 kernel: iommu: Default domain type: Translated
Feb 9 18:41:46.721747 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Feb 9 18:41:46.721754 kernel: vgaarb: loaded
Feb 9 18:41:46.721760 kernel: pps_core: LinuxPPS API ver. 1 registered
Feb 9 18:41:46.721767 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Feb 9 18:41:46.721774 kernel: PTP clock support registered
Feb 9 18:41:46.721780 kernel: Registered efivars operations
Feb 9 18:41:46.721787 kernel: clocksource: Switched to clocksource arch_sys_counter
Feb 9 18:41:46.721793 kernel: VFS: Disk quotas dquot_6.6.0
Feb 9 18:41:46.721802 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Feb 9 18:41:46.721808 kernel: pnp: PnP ACPI init
Feb 9 18:41:46.721876 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Feb 9 18:41:46.721886 kernel: pnp: PnP ACPI: found 1 devices
Feb 9 18:41:46.721893 kernel: NET: Registered PF_INET protocol family
Feb 9 18:41:46.721900 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Feb 9 18:41:46.721907 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Feb 9 18:41:46.721913 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Feb 9 18:41:46.721922 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Feb 9 18:41:46.721929 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear)
Feb 9 18:41:46.721935 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Feb 9 18:41:46.721942 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 9 18:41:46.721949 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 9 18:41:46.721956 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Feb 9 18:41:46.721969 kernel: PCI: CLS 0 bytes, default 64
Feb 9 18:41:46.721976 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
Feb 9 18:41:46.721984 kernel: kvm [1]: HYP mode not available
Feb 9 18:41:46.721990 kernel: Initialise system trusted keyrings
Feb 9 18:41:46.721998 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Feb 9 18:41:46.722005 kernel: Key type asymmetric registered
Feb 9 18:41:46.722011 kernel: Asymmetric key parser 'x509' registered
Feb 9 18:41:46.722018 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Feb 9 18:41:46.722024 kernel: io scheduler mq-deadline registered
Feb 9 18:41:46.722031 kernel: io scheduler kyber registered
Feb 9 18:41:46.722037 kernel: io scheduler bfq registered
Feb 9 18:41:46.722044 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Feb 9 18:41:46.722051 kernel: ACPI: button: Power Button [PWRB]
Feb 9 18:41:46.722059 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Feb 9 18:41:46.722121 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Feb 9 18:41:46.722130 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Feb 9 18:41:46.722136 kernel: thunder_xcv, ver 1.0
Feb 9 18:41:46.722143 kernel: thunder_bgx, ver 1.0
Feb 9 18:41:46.722149 kernel: nicpf, ver 1.0
Feb 9 18:41:46.722155 kernel: nicvf, ver 1.0
Feb 9 18:41:46.722222 kernel: rtc-efi rtc-efi.0: registered as rtc0
Feb 9 18:41:46.722292 kernel: rtc-efi rtc-efi.0: setting system clock to 2024-02-09T18:41:46 UTC (1707504106)
Feb 9 18:41:46.722302 kernel: hid: raw HID events driver (C) Jiri Kosina
Feb 9 18:41:46.722308 kernel: NET: Registered PF_INET6 protocol family
Feb 9 18:41:46.722315 kernel: Segment Routing with IPv6
Feb 9 18:41:46.722321 kernel: In-situ OAM (IOAM) with IPv6
Feb 9 18:41:46.722328 kernel: NET: Registered PF_PACKET protocol family
Feb 9 18:41:46.722334 kernel: Key type dns_resolver registered
Feb 9 18:41:46.722341 kernel: registered taskstats version 1
Feb 9 18:41:46.722349 kernel: Loading compiled-in X.509 certificates
Feb 9 18:41:46.722356 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.148-flatcar: 947a80114e81e2815f6db72a0d388260762488f9'
Feb 9 18:41:46.722363 kernel: Key type .fscrypt registered
Feb 9 18:41:46.722369 kernel: Key type fscrypt-provisioning registered
Feb 9 18:41:46.722376 kernel: ima: No TPM chip found, activating TPM-bypass!
Feb 9 18:41:46.722383 kernel: ima: Allocated hash algorithm: sha1
Feb 9 18:41:46.722389 kernel: ima: No architecture policies found
Feb 9 18:41:46.722396 kernel: Freeing unused kernel memory: 34688K
Feb 9 18:41:46.722402 kernel: Run /init as init process
Feb 9 18:41:46.722410 kernel: with arguments:
Feb 9 18:41:46.722417 kernel: /init
Feb 9 18:41:46.722423 kernel: with environment:
Feb 9 18:41:46.722430 kernel: HOME=/
Feb 9 18:41:46.722436 kernel: TERM=linux
Feb 9 18:41:46.722442 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Feb 9 18:41:46.722451 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Feb 9 18:41:46.722459 systemd[1]: Detected virtualization kvm.
Feb 9 18:41:46.722468 systemd[1]: Detected architecture arm64.
Feb 9 18:41:46.722475 systemd[1]: Running in initrd.
Feb 9 18:41:46.722482 systemd[1]: No hostname configured, using default hostname.
Feb 9 18:41:46.722488 systemd[1]: Hostname set to .
Feb 9 18:41:46.722496 systemd[1]: Initializing machine ID from VM UUID.
Feb 9 18:41:46.722503 systemd[1]: Queued start job for default target initrd.target.
Feb 9 18:41:46.722510 systemd[1]: Started systemd-ask-password-console.path.
Feb 9 18:41:46.722517 systemd[1]: Reached target cryptsetup.target.
Feb 9 18:41:46.722525 systemd[1]: Reached target paths.target.
Feb 9 18:41:46.722532 systemd[1]: Reached target slices.target.
Feb 9 18:41:46.722539 systemd[1]: Reached target swap.target.
Feb 9 18:41:46.722546 systemd[1]: Reached target timers.target.
Feb 9 18:41:46.722554 systemd[1]: Listening on iscsid.socket.
Feb 9 18:41:46.722561 systemd[1]: Listening on iscsiuio.socket.
Feb 9 18:41:46.722568 systemd[1]: Listening on systemd-journald-audit.socket.
Feb 9 18:41:46.722576 systemd[1]: Listening on systemd-journald-dev-log.socket.
Feb 9 18:41:46.722584 systemd[1]: Listening on systemd-journald.socket.
Feb 9 18:41:46.722591 systemd[1]: Listening on systemd-networkd.socket.
Feb 9 18:41:46.722598 systemd[1]: Listening on systemd-udevd-control.socket.
Feb 9 18:41:46.722606 systemd[1]: Listening on systemd-udevd-kernel.socket.
Feb 9 18:41:46.722613 systemd[1]: Reached target sockets.target.
Feb 9 18:41:46.722620 systemd[1]: Starting kmod-static-nodes.service...
Feb 9 18:41:46.722627 systemd[1]: Finished network-cleanup.service.
Feb 9 18:41:46.722633 systemd[1]: Starting systemd-fsck-usr.service...
Feb 9 18:41:46.722642 systemd[1]: Starting systemd-journald.service...
Feb 9 18:41:46.722649 systemd[1]: Starting systemd-modules-load.service...
Feb 9 18:41:46.722656 systemd[1]: Starting systemd-resolved.service...
Feb 9 18:41:46.722663 systemd[1]: Starting systemd-vconsole-setup.service...
Feb 9 18:41:46.722670 systemd[1]: Finished kmod-static-nodes.service.
Feb 9 18:41:46.722677 systemd[1]: Finished systemd-fsck-usr.service.
Feb 9 18:41:46.722684 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Feb 9 18:41:46.722691 systemd[1]: Finished systemd-vconsole-setup.service.
Feb 9 18:41:46.722699 kernel: audit: type=1130 audit(1707504106.716:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:41:46.722707 systemd[1]: Starting dracut-cmdline-ask.service...
Feb 9 18:41:46.722717 systemd-journald[290]: Journal started
Feb 9 18:41:46.722754 systemd-journald[290]: Runtime Journal (/run/log/journal/4f5e3c6a78c64335ac0814be1c25ee5d) is 6.0M, max 48.7M, 42.6M free.
Feb 9 18:41:46.716000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:41:46.711953 systemd-modules-load[291]: Inserted module 'overlay'
Feb 9 18:41:46.726278 systemd[1]: Started systemd-journald.service.
Feb 9 18:41:46.726311 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Feb 9 18:41:46.725179 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Feb 9 18:41:46.724000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:41:46.727000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:41:46.732603 systemd-resolved[292]: Positive Trust Anchors:
Feb 9 18:41:46.735238 kernel: audit: type=1130 audit(1707504106.724:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:41:46.735267 kernel: audit: type=1130 audit(1707504106.727:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:41:46.735277 kernel: Bridge firewalling registered
Feb 9 18:41:46.732617 systemd-resolved[292]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 9 18:41:46.735000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:41:46.732645 systemd-resolved[292]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Feb 9 18:41:46.744198 kernel: audit: type=1130 audit(1707504106.735:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:41:46.739000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:41:46.734463 systemd[1]: Finished dracut-cmdline-ask.service.
Feb 9 18:41:46.734647 systemd-modules-load[291]: Inserted module 'br_netfilter'
Feb 9 18:41:46.747911 kernel: audit: type=1130 audit(1707504106.739:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:41:46.736596 systemd[1]: Starting dracut-cmdline.service...
Feb 9 18:41:46.736650 systemd-resolved[292]: Defaulting to hostname 'linux'.
Feb 9 18:41:46.749748 kernel: SCSI subsystem initialized
Feb 9 18:41:46.739502 systemd[1]: Started systemd-resolved.service.
Feb 9 18:41:46.740094 systemd[1]: Reached target nss-lookup.target.
Feb 9 18:41:46.751283 dracut-cmdline[307]: dracut-dracut-053
Feb 9 18:41:46.753363 dracut-cmdline[307]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=680ffc8c0dfb23738bd19ec96ea37b5bbadfb5cebf23767d1d52c89a6d5c00b4
Feb 9 18:41:46.759470 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Feb 9 18:41:46.759511 kernel: device-mapper: uevent: version 1.0.3
Feb 9 18:41:46.759522 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com
Feb 9 18:41:46.762053 systemd-modules-load[291]: Inserted module 'dm_multipath'
Feb 9 18:41:46.762767 systemd[1]: Finished systemd-modules-load.service.
Feb 9 18:41:46.763000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:41:46.764120 systemd[1]: Starting systemd-sysctl.service...
Feb 9 18:41:46.766896 kernel: audit: type=1130 audit(1707504106.763:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:41:46.773105 systemd[1]: Finished systemd-sysctl.service.
Feb 9 18:41:46.773000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:41:46.776271 kernel: audit: type=1130 audit(1707504106.773:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:41:46.812271 kernel: Loading iSCSI transport class v2.0-870.
Feb 9 18:41:46.822273 kernel: iscsi: registered transport (tcp)
Feb 9 18:41:46.835273 kernel: iscsi: registered transport (qla4xxx)
Feb 9 18:41:46.835288 kernel: QLogic iSCSI HBA Driver
Feb 9 18:41:46.867667 systemd[1]: Finished dracut-cmdline.service.
Feb 9 18:41:46.868000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:41:46.869083 systemd[1]: Starting dracut-pre-udev.service...
Feb 9 18:41:46.871286 kernel: audit: type=1130 audit(1707504106.868:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:41:46.912275 kernel: raid6: neonx8 gen() 13681 MB/s
Feb 9 18:41:46.929261 kernel: raid6: neonx8 xor() 10757 MB/s
Feb 9 18:41:46.946266 kernel: raid6: neonx4 gen() 13491 MB/s
Feb 9 18:41:46.963268 kernel: raid6: neonx4 xor() 11192 MB/s
Feb 9 18:41:46.980261 kernel: raid6: neonx2 gen() 13044 MB/s
Feb 9 18:41:46.997272 kernel: raid6: neonx2 xor() 10212 MB/s
Feb 9 18:41:47.014272 kernel: raid6: neonx1 gen() 10500 MB/s
Feb 9 18:41:47.031271 kernel: raid6: neonx1 xor() 8756 MB/s
Feb 9 18:41:47.048268 kernel: raid6: int64x8 gen() 6279 MB/s
Feb 9 18:41:47.065262 kernel: raid6: int64x8 xor() 3533 MB/s
Feb 9 18:41:47.082269 kernel: raid6: int64x4 gen() 7250 MB/s
Feb 9 18:41:47.099264 kernel: raid6: int64x4 xor() 3844 MB/s
Feb 9 18:41:47.116269 kernel: raid6: int64x2 gen() 6133 MB/s
Feb 9 18:41:47.133272 kernel: raid6: int64x2 xor() 3306 MB/s
Feb 9 18:41:47.150268 kernel: raid6: int64x1 gen() 5030 MB/s
Feb 9 18:41:47.167430 kernel: raid6: int64x1 xor() 2644 MB/s
Feb 9 18:41:47.167457 kernel: raid6: using algorithm neonx8 gen() 13681 MB/s
Feb 9 18:41:47.167474 kernel: raid6: .... xor() 10757 MB/s, rmw enabled
Feb 9 18:41:47.167490 kernel: raid6: using neon recovery algorithm
Feb 9 18:41:47.178271 kernel: xor: measuring software checksum speed
Feb 9 18:41:47.179500 kernel: 8regs : 17246 MB/sec
Feb 9 18:41:47.179512 kernel: 32regs : 20749 MB/sec
Feb 9 18:41:47.182346 kernel: arm64_neon : 27920 MB/sec
Feb 9 18:41:47.182357 kernel: xor: using function: arm64_neon (27920 MB/sec)
Feb 9 18:41:47.238284 kernel: Btrfs loaded, crc32c=crc32c-generic, zoned=no, fsverity=no
Feb 9 18:41:47.247555 systemd[1]: Finished dracut-pre-udev.service.
Feb 9 18:41:47.248000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:41:47.250000 audit: BPF prog-id=7 op=LOAD
Feb 9 18:41:47.250000 audit: BPF prog-id=8 op=LOAD
Feb 9 18:41:47.250993 systemd[1]: Starting systemd-udevd.service...
Feb 9 18:41:47.252148 kernel: audit: type=1130 audit(1707504107.248:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:41:47.264269 systemd-udevd[490]: Using default interface naming scheme 'v252'.
Feb 9 18:41:47.267532 systemd[1]: Started systemd-udevd.service.
Feb 9 18:41:47.267000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:41:47.269225 systemd[1]: Starting dracut-pre-trigger.service...
Feb 9 18:41:47.281587 dracut-pre-trigger[496]: rd.md=0: removing MD RAID activation
Feb 9 18:41:47.306881 systemd[1]: Finished dracut-pre-trigger.service.
Feb 9 18:41:47.307000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:41:47.308284 systemd[1]: Starting systemd-udev-trigger.service...
Feb 9 18:41:47.342591 systemd[1]: Finished systemd-udev-trigger.service.
Feb 9 18:41:47.343000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:41:47.374018 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Feb 9 18:41:47.380508 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Feb 9 18:41:47.380538 kernel: GPT:9289727 != 19775487
Feb 9 18:41:47.380547 kernel: GPT:Alternate GPT header not at the end of the disk.
Feb 9 18:41:47.380557 kernel: GPT:9289727 != 19775487
Feb 9 18:41:47.381410 kernel: GPT: Use GNU Parted to correct GPT errors.
Feb 9 18:41:47.381425 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 9 18:41:47.392988 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device.
Feb 9 18:41:47.395926 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device.
Feb 9 18:41:47.397475 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (537)
Feb 9 18:41:47.396682 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device.
Feb 9 18:41:47.406510 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Feb 9 18:41:47.409779 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device.
Feb 9 18:41:47.411317 systemd[1]: Starting disk-uuid.service...
Feb 9 18:41:47.416690 disk-uuid[563]: Primary Header is updated.
Feb 9 18:41:47.416690 disk-uuid[563]: Secondary Entries is updated.
Feb 9 18:41:47.416690 disk-uuid[563]: Secondary Header is updated.
Feb 9 18:41:47.420271 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 9 18:41:48.425966 disk-uuid[564]: The operation has completed successfully.
Feb 9 18:41:48.426993 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 9 18:41:48.443935 systemd[1]: disk-uuid.service: Deactivated successfully.
Feb 9 18:41:48.444026 systemd[1]: Finished disk-uuid.service.
Feb 9 18:41:48.444000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:41:48.444000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:41:48.449734 systemd[1]: Starting verity-setup.service...
Feb 9 18:41:48.465295 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Feb 9 18:41:48.482768 systemd[1]: Found device dev-mapper-usr.device.
Feb 9 18:41:48.484769 systemd[1]: Mounting sysusr-usr.mount...
Feb 9 18:41:48.486427 systemd[1]: Finished verity-setup.service.
Feb 9 18:41:48.486000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:41:48.532262 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none.
Feb 9 18:41:48.532411 systemd[1]: Mounted sysusr-usr.mount.
Feb 9 18:41:48.533157 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met.
Feb 9 18:41:48.533785 systemd[1]: Starting ignition-setup.service...
Feb 9 18:41:48.535683 systemd[1]: Starting parse-ip-for-networkd.service...
Feb 9 18:41:48.541346 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Feb 9 18:41:48.541376 kernel: BTRFS info (device vda6): using free space tree
Feb 9 18:41:48.541385 kernel: BTRFS info (device vda6): has skinny extents
Feb 9 18:41:48.549171 systemd[1]: mnt-oem.mount: Deactivated successfully.
Feb 9 18:41:48.554237 systemd[1]: Finished ignition-setup.service.
Feb 9 18:41:48.554000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:41:48.555525 systemd[1]: Starting ignition-fetch-offline.service...
Feb 9 18:41:48.615383 systemd[1]: Finished parse-ip-for-networkd.service.
Feb 9 18:41:48.615000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:41:48.616000 audit: BPF prog-id=9 op=LOAD
Feb 9 18:41:48.617312 systemd[1]: Starting systemd-networkd.service...
Feb 9 18:41:48.639575 ignition[646]: Ignition 2.14.0
Feb 9 18:41:48.639586 ignition[646]: Stage: fetch-offline
Feb 9 18:41:48.639621 ignition[646]: no configs at "/usr/lib/ignition/base.d"
Feb 9 18:41:48.639630 ignition[646]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 9 18:41:48.639763 ignition[646]: parsed url from cmdline: ""
Feb 9 18:41:48.639766 ignition[646]: no config URL provided
Feb 9 18:41:48.639770 ignition[646]: reading system config file "/usr/lib/ignition/user.ign"
Feb 9 18:41:48.639777 ignition[646]: no config at "/usr/lib/ignition/user.ign"
Feb 9 18:41:48.639794 ignition[646]: op(1): [started] loading QEMU firmware config module
Feb 9 18:41:48.639798 ignition[646]: op(1): executing: "modprobe" "qemu_fw_cfg"
Feb 9 18:41:48.649408 systemd-networkd[740]: lo: Link UP
Feb 9 18:41:48.649414 systemd-networkd[740]: lo: Gained carrier
Feb 9 18:41:48.651049 systemd-networkd[740]: Enumeration completed
Feb 9 18:41:48.651230 systemd-networkd[740]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 9 18:41:48.653000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:41:48.652592 systemd[1]: Started systemd-networkd.service.
Feb 9 18:41:48.653342 systemd[1]: Reached target network.target.
Feb 9 18:41:48.654720 systemd-networkd[740]: eth0: Link UP
Feb 9 18:41:48.654724 systemd-networkd[740]: eth0: Gained carrier
Feb 9 18:41:48.655718 systemd[1]: Starting iscsiuio.service...
Feb 9 18:41:48.658176 ignition[646]: op(1): [finished] loading QEMU firmware config module
Feb 9 18:41:48.666423 systemd[1]: Started iscsiuio.service.
Feb 9 18:41:48.666000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:41:48.667753 systemd[1]: Starting iscsid.service...
Feb 9 18:41:48.670902 iscsid[747]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi
Feb 9 18:41:48.670902 iscsid[747]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a string with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier].
Feb 9 18:41:48.670902 iscsid[747]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6.
Feb 9 18:41:48.670902 iscsid[747]: If using hardware iscsi like qla4xxx this message can be ignored.
Feb 9 18:41:48.670902 iscsid[747]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi
Feb 9 18:41:48.670902 iscsid[747]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf
Feb 9 18:41:48.676000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:41:48.673661 systemd[1]: Started iscsid.service.
Feb 9 18:41:48.677687 systemd[1]: Starting dracut-initqueue.service...
Feb 9 18:41:48.680331 systemd-networkd[740]: eth0: DHCPv4 address 10.0.0.121/16, gateway 10.0.0.1 acquired from 10.0.0.1
Feb 9 18:41:48.687282 systemd[1]: Finished dracut-initqueue.service.
Feb 9 18:41:48.687000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:41:48.688144 systemd[1]: Reached target remote-fs-pre.target.
Feb 9 18:41:48.689354 systemd[1]: Reached target remote-cryptsetup.target.
Feb 9 18:41:48.690773 systemd[1]: Reached target remote-fs.target.
Feb 9 18:41:48.692743 systemd[1]: Starting dracut-pre-mount.service...
Feb 9 18:41:48.699903 systemd[1]: Finished dracut-pre-mount.service.
Feb 9 18:41:48.700000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:41:48.741455 ignition[646]: parsing config with SHA512: 16cd72798ddb61da8bcb0a9520812ffd387cd0e2b20b1e3b27209c020f0b091d124adf0318cf79cb84442a34ca85810642cbac88d12446a58e57c793e9b67410
Feb 9 18:41:48.780399 unknown[646]: fetched base config from "system"
Feb 9 18:41:48.781061 unknown[646]: fetched user config from "qemu"
Feb 9 18:41:48.782344 ignition[646]: fetch-offline: fetch-offline passed
Feb 9 18:41:48.783018 ignition[646]: Ignition finished successfully
Feb 9 18:41:48.784546 systemd[1]: Finished ignition-fetch-offline.service.
Feb 9 18:41:48.784000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:41:48.785234 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Feb 9 18:41:48.785872 systemd[1]: Starting ignition-kargs.service...
Feb 9 18:41:48.793864 ignition[761]: Ignition 2.14.0
Feb 9 18:41:48.793872 ignition[761]: Stage: kargs
Feb 9 18:41:48.793962 ignition[761]: no configs at "/usr/lib/ignition/base.d"
Feb 9 18:41:48.793972 ignition[761]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 9 18:41:48.795107 ignition[761]: kargs: kargs passed
Feb 9 18:41:48.795148 ignition[761]: Ignition finished successfully
Feb 9 18:41:48.796000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:41:48.796356 systemd[1]: Finished ignition-kargs.service.
Feb 9 18:41:48.797807 systemd[1]: Starting ignition-disks.service...
Feb 9 18:41:48.803752 ignition[767]: Ignition 2.14.0
Feb 9 18:41:48.803761 ignition[767]: Stage: disks
Feb 9 18:41:48.803838 ignition[767]: no configs at "/usr/lib/ignition/base.d"
Feb 9 18:41:48.803847 ignition[767]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 9 18:41:48.806000 systemd[1]: Finished ignition-disks.service.
Feb 9 18:41:48.806000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:41:48.804971 ignition[767]: disks: disks passed
Feb 9 18:41:48.807362 systemd[1]: Reached target initrd-root-device.target.
Feb 9 18:41:48.805010 ignition[767]: Ignition finished successfully
Feb 9 18:41:48.808271 systemd[1]: Reached target local-fs-pre.target.
Feb 9 18:41:48.809164 systemd[1]: Reached target local-fs.target.
Feb 9 18:41:48.810146 systemd[1]: Reached target sysinit.target.
Feb 9 18:41:48.811058 systemd[1]: Reached target basic.target.
Feb 9 18:41:48.812671 systemd[1]: Starting systemd-fsck-root.service...
Feb 9 18:41:48.822730 systemd-fsck[775]: ROOT: clean, 602/553520 files, 56013/553472 blocks
Feb 9 18:41:48.825681 systemd[1]: Finished systemd-fsck-root.service.
Feb 9 18:41:48.826000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:41:48.829229 systemd[1]: Mounting sysroot.mount...
Feb 9 18:41:48.835269 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none.
Feb 9 18:41:48.835303 systemd[1]: Mounted sysroot.mount.
Feb 9 18:41:48.835849 systemd[1]: Reached target initrd-root-fs.target.
Feb 9 18:41:48.837565 systemd[1]: Mounting sysroot-usr.mount...
Feb 9 18:41:48.838244 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met.
Feb 9 18:41:48.838292 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Feb 9 18:41:48.838314 systemd[1]: Reached target ignition-diskful.target.
Feb 9 18:41:48.840037 systemd[1]: Mounted sysroot-usr.mount.
Feb 9 18:41:48.842302 systemd[1]: Starting initrd-setup-root.service...
Feb 9 18:41:48.846203 initrd-setup-root[785]: cut: /sysroot/etc/passwd: No such file or directory
Feb 9 18:41:48.850462 initrd-setup-root[793]: cut: /sysroot/etc/group: No such file or directory
Feb 9 18:41:48.853883 initrd-setup-root[801]: cut: /sysroot/etc/shadow: No such file or directory
Feb 9 18:41:48.857666 initrd-setup-root[809]: cut: /sysroot/etc/gshadow: No such file or directory
Feb 9 18:41:48.880947 systemd[1]: Finished initrd-setup-root.service.
Feb 9 18:41:48.881000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:41:48.882343 systemd[1]: Starting ignition-mount.service...
Feb 9 18:41:48.883522 systemd[1]: Starting sysroot-boot.service...
Feb 9 18:41:48.887360 bash[826]: umount: /sysroot/usr/share/oem: not mounted.
Feb 9 18:41:48.895912 ignition[828]: INFO : Ignition 2.14.0 Feb 9 18:41:48.896740 ignition[828]: INFO : Stage: mount Feb 9 18:41:48.897677 ignition[828]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 9 18:41:48.898428 systemd[1]: Finished sysroot-boot.service. Feb 9 18:41:48.898000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:41:48.899904 ignition[828]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 9 18:41:48.902046 ignition[828]: INFO : mount: mount passed Feb 9 18:41:48.902781 ignition[828]: INFO : Ignition finished successfully Feb 9 18:41:48.904061 systemd[1]: Finished ignition-mount.service. Feb 9 18:41:48.904000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:41:49.493485 systemd[1]: Mounting sysroot-usr-share-oem.mount... Feb 9 18:41:49.499280 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (836) Feb 9 18:41:49.500553 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Feb 9 18:41:49.500580 kernel: BTRFS info (device vda6): using free space tree Feb 9 18:41:49.500592 kernel: BTRFS info (device vda6): has skinny extents Feb 9 18:41:49.504003 systemd[1]: Mounted sysroot-usr-share-oem.mount. Feb 9 18:41:49.505507 systemd[1]: Starting ignition-files.service... Feb 9 18:41:49.518503 ignition[856]: INFO : Ignition 2.14.0 Feb 9 18:41:49.518503 ignition[856]: INFO : Stage: files Feb 9 18:41:49.519660 ignition[856]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 9 18:41:49.519660 ignition[856]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 9 18:41:49.521159 ignition[856]: DEBUG : files: compiled without relabeling support, skipping Feb 9 18:41:49.523935 ignition[856]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Feb 9 18:41:49.523935 ignition[856]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Feb 9 18:41:49.526359 ignition[856]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Feb 9 18:41:49.526359 ignition[856]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Feb 9 18:41:49.526359 ignition[856]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Feb 9 18:41:49.526016 unknown[856]: wrote ssh authorized keys file for user: core Feb 9 18:41:49.530819 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Feb 9 18:41:49.530819 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1 Feb 9 18:41:49.589011 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Feb 9 18:41:49.626161 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Feb 9 18:41:49.627640 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/cni-plugins-linux-arm64-v1.1.1.tgz" Feb 9 18:41:49.627640 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET 
https://github.com/containernetworking/plugins/releases/download/v1.1.1/cni-plugins-linux-arm64-v1.1.1.tgz: attempt #1 Feb 9 18:41:49.876513 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Feb 9 18:41:50.009409 systemd-networkd[740]: eth0: Gained IPv6LL Feb 9 18:41:50.062875 ignition[856]: DEBUG : files: createFilesystemsFiles: createFiles: op(4): file matches expected sum of: 6b5df61a53601926e4b5a9174828123d555f592165439f541bc117c68781f41c8bd30dccd52367e406d104df849bcbcfb72d9c4bafda4b045c59ce95d0ca0742 Feb 9 18:41:50.064920 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/cni-plugins-linux-arm64-v1.1.1.tgz" Feb 9 18:41:50.064920 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/crictl-v1.26.0-linux-arm64.tar.gz" Feb 9 18:41:50.064920 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.26.0/crictl-v1.26.0-linux-arm64.tar.gz: attempt #1 Feb 9 18:41:50.231834 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK Feb 9 18:41:50.351392 ignition[856]: DEBUG : files: createFilesystemsFiles: createFiles: op(5): file matches expected sum of: 4c7e4541123cbd6f1d6fec1f827395cd58d65716c0998de790f965485738b6d6257c0dc46fd7f66403166c299f6d5bf9ff30b6e1ff9afbb071f17005e834518c Feb 9 18:41:50.353595 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/crictl-v1.26.0-linux-arm64.tar.gz" Feb 9 18:41:50.353595 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Feb 9 18:41:50.353595 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Feb 9 18:41:50.353595 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/opt/bin/kubeadm" Feb 9 18:41:50.353595 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET https://dl.k8s.io/release/v1.26.5/bin/linux/arm64/kubeadm: attempt #1 Feb 9 18:41:50.400878 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET result: OK Feb 9 18:41:50.672971 ignition[856]: DEBUG : files: createFilesystemsFiles: createFiles: op(7): file matches expected sum of: 46c9f489062bdb84574703f7339d140d7e42c9c71b367cd860071108a3c1d38fabda2ef69f9c0ff88f7c80e88d38f96ab2248d4c9a6c9c60b0a4c20fd640d0db Feb 9 18:41:50.672971 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/opt/bin/kubeadm" Feb 9 18:41:50.676451 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/opt/bin/kubectl" Feb 9 18:41:50.676451 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(8): GET https://dl.k8s.io/release/v1.26.5/bin/linux/arm64/kubectl: attempt #1 Feb 9 18:41:50.695571 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(8): GET result: OK Feb 9 18:41:50.955050 ignition[856]: DEBUG : files: createFilesystemsFiles: createFiles: op(8): file matches expected sum of: 3672fda0beebbbd636a2088f427463cbad32683ea4fbb1df61650552e63846b6a47db803ccb70c3db0a8f24746a23a5632bdc15a3fb78f4f7d833e7f86763c2a Feb 9 18:41:50.955050 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/opt/bin/kubectl"
Feb 9 18:41:50.958236 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/opt/bin/kubelet" Feb 9 18:41:50.958236 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(9): GET https://dl.k8s.io/release/v1.26.5/bin/linux/arm64/kubelet: attempt #1 Feb 9 18:41:50.980038 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(9): GET result: OK Feb 9 18:41:51.636400 ignition[856]: DEBUG : files: createFilesystemsFiles: createFiles: op(9): file matches expected sum of: 0e4ee1f23bf768c49d09beb13a6b5fad6efc8e3e685e7c5610188763e3af55923fb46158b5e76973a0f9a055f9b30d525b467c53415f965536adc2f04d9cf18d Feb 9 18:41:51.638384 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/opt/bin/kubelet" Feb 9 18:41:51.638384 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/docker/daemon.json" Feb 9 18:41:51.638384 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/docker/daemon.json" Feb 9 18:41:51.638384 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Feb 9 18:41:51.638384 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1 Feb 9 18:41:51.861737 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Feb 9 18:41:51.924626 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Feb 9 18:41:51.924626 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/home/core/install.sh" Feb 9 18:41:51.927286 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/home/core/install.sh" Feb 9 18:41:51.927286 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(d): [started] writing file "/sysroot/home/core/nginx.yaml" Feb 9 18:41:51.927286 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(d): [finished] writing file "/sysroot/home/core/nginx.yaml" Feb 9 18:41:51.927286 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(e): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Feb 9 18:41:51.927286 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(e): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Feb 9 18:41:51.927286 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(f): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Feb 9 18:41:51.927286 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(f): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Feb 9 18:41:51.927286 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(10): [started] writing file "/sysroot/etc/flatcar/update.conf" Feb 9 18:41:51.927286 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(10): [finished] writing file "/sysroot/etc/flatcar/update.conf" Feb 9 18:41:51.927286 ignition[856]: INFO : files: op(11): [started] processing unit "containerd.service" Feb 9 18:41:51.927286 ignition[856]: INFO : files: op(11): op(12): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Feb 9 18:41:51.927286 ignition[856]: INFO : files: op(11): op(12): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Feb 9 18:41:51.927286 ignition[856]: INFO : files: op(11): [finished] processing unit "containerd.service" Feb 9 18:41:51.927286 ignition[856]: INFO : files: op(13): [started] processing unit "prepare-cni-plugins.service" Feb 9 18:41:51.927286 ignition[856]: INFO : files: op(13): op(14): [started] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service" Feb 9 18:41:51.927286 ignition[856]: INFO : files: op(13): op(14): [finished] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service" Feb 9 18:41:51.927286 ignition[856]: INFO : files: op(13): [finished] processing unit "prepare-cni-plugins.service" Feb 9 18:41:51.949592 ignition[856]: INFO : files: op(15): [started] processing unit "prepare-critools.service" Feb 9 18:41:51.949592 ignition[856]: INFO : files: op(15): op(16): [started] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service" Feb 9 18:41:51.949592 ignition[856]: INFO : files: op(15): op(16): [finished] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service" Feb 9 18:41:51.949592 ignition[856]: INFO : files: op(15): [finished] processing unit "prepare-critools.service" Feb 9 18:41:51.949592 ignition[856]: INFO : files: op(17): [started] processing unit "prepare-helm.service" Feb 9 18:41:51.949592 ignition[856]: INFO : files: op(17): op(18): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Feb 9 18:41:51.949592 ignition[856]: INFO : files: op(17): op(18): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Feb 9 18:41:51.949592 ignition[856]: INFO : files: op(17): [finished] processing unit "prepare-helm.service" Feb 9 18:41:51.949592 ignition[856]: INFO : files: op(19): [started] processing unit "coreos-metadata.service" Feb 9 18:41:51.949592 ignition[856]: INFO : files: op(19): op(1a): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Feb 9 18:41:51.949592 ignition[856]: INFO : files: op(19): op(1a): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Feb 9 18:41:51.949592 ignition[856]: INFO : files: op(19): [finished] processing unit "coreos-metadata.service" Feb 9 18:41:51.949592 ignition[856]: INFO : files: op(1b): [started] setting preset to enabled for "prepare-cni-plugins.service" Feb 9 18:41:51.949592 ignition[856]: INFO : files: op(1b): [finished] setting preset to enabled for "prepare-cni-plugins.service" Feb 9 18:41:51.949592 ignition[856]: INFO : files: op(1c): [started] setting preset to enabled for "prepare-critools.service" Feb 9 18:41:51.949592 ignition[856]: INFO : files: op(1c): [finished] setting preset to enabled for "prepare-critools.service" Feb 9 18:41:51.949592 ignition[856]: INFO : files: op(1d): [started] setting preset to enabled for "prepare-helm.service" Feb 9 18:41:51.949592 ignition[856]: INFO : files: op(1d): [finished] setting preset to enabled for "prepare-helm.service" Feb 9 18:41:51.949592 ignition[856]: INFO : files: op(1e): [started] setting preset to disabled for "coreos-metadata.service"
Feb 9 18:41:51.978110 kernel: kauditd_printk_skb: 23 callbacks suppressed Feb 9 18:41:51.978130 kernel: audit: type=1130 audit(1707504111.962:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:41:51.978140 kernel: audit: type=1130 audit(1707504111.970:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:41:51.978150 kernel: audit: type=1131 audit(1707504111.970:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:41:51.962000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:41:51.970000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:41:51.970000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:41:51.961272 systemd[1]: Finished ignition-files.service. Feb 9 18:41:51.981304 kernel: audit: type=1130 audit(1707504111.978:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:41:51.978000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:41:51.981346 ignition[856]: INFO : files: op(1e): op(1f): [started] removing enablement symlink(s) for "coreos-metadata.service" Feb 9 18:41:51.981346 ignition[856]: INFO : files: op(1e): op(1f): [finished] removing enablement symlink(s) for "coreos-metadata.service" Feb 9 18:41:51.981346 ignition[856]: INFO : files: op(1e): [finished] setting preset to disabled for "coreos-metadata.service" Feb 9 18:41:51.981346 ignition[856]: INFO : files: createResultFile: createFiles: op(20): [started] writing file "/sysroot/etc/.ignition-result.json" Feb 9 18:41:51.981346 ignition[856]: INFO : files: createResultFile: createFiles: op(20): [finished] writing file "/sysroot/etc/.ignition-result.json" Feb 9 18:41:51.981346 ignition[856]: INFO : files: files passed Feb 9 18:41:51.981346 ignition[856]: INFO : Ignition finished successfully Feb 9 18:41:51.963790 systemd[1]: Starting initrd-setup-root-after-ignition.service... Feb 9 18:41:51.965266 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Feb 9 18:41:51.990919 initrd-setup-root-after-ignition[880]: grep: /sysroot/usr/share/oem/oem-release: No such file or directory Feb 9 18:41:51.965905 systemd[1]: Starting ignition-quench.service... Feb 9 18:41:51.992698 initrd-setup-root-after-ignition[883]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 9 18:41:51.970019 systemd[1]: ignition-quench.service: Deactivated successfully. 
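The audit(1707504111.962:34) cookies in the records above carry a UNIX epoch timestamp plus a per-boot serial number, which is how an audit record can be matched back to the journal's wall-clock stamps. A small decoding sketch:

```python
from datetime import datetime, timezone

def decode_audit_id(audit_id: str) -> tuple[datetime, int]:
    """Split an audit(<epoch>.<ms>:<serial>) cookie into time and serial."""
    stamp, serial = audit_id.split(":")
    ts = datetime.fromtimestamp(float(stamp), tz=timezone.utc)
    return ts, int(serial)

ts, serial = decode_audit_id("1707504111.962:34")
print(ts.isoformat(), serial)  # 2024-02-09T18:41:51.962000+00:00 34
```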
Feb 9 18:41:51.970098 systemd[1]: Finished ignition-quench.service. Feb 9 18:41:51.998882 kernel: audit: type=1130 audit(1707504111.994:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:41:51.998898 kernel: audit: type=1131 audit(1707504111.994:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:41:51.994000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:41:51.994000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:41:51.977154 systemd[1]: Finished initrd-setup-root-after-ignition.service. Feb 9 18:41:51.979115 systemd[1]: Reached target ignition-complete.target. Feb 9 18:41:51.982664 systemd[1]: Starting initrd-parse-etc.service... Feb 9 18:41:51.994337 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Feb 9 18:41:51.994424 systemd[1]: Finished initrd-parse-etc.service. Feb 9 18:41:51.995274 systemd[1]: Reached target initrd-fs.target. Feb 9 18:41:51.999397 systemd[1]: Reached target initrd.target. Feb 9 18:41:52.000427 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Feb 9 18:41:52.001045 systemd[1]: Starting dracut-pre-pivot.service... Feb 9 18:41:52.010855 systemd[1]: Finished dracut-pre-pivot.service. Feb 9 18:41:52.011000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:41:52.012197 systemd[1]: Starting initrd-cleanup.service... Feb 9 18:41:52.014748 kernel: audit: type=1130 audit(1707504112.011:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:41:52.019530 systemd[1]: Stopped target nss-lookup.target. Feb 9 18:41:52.020315 systemd[1]: Stopped target remote-cryptsetup.target. Feb 9 18:41:52.021522 systemd[1]: Stopped target timers.target. Feb 9 18:41:52.022572 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Feb 9 18:41:52.023000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:41:52.022666 systemd[1]: Stopped dracut-pre-pivot.service. Feb 9 18:41:52.026767 kernel: audit: type=1131 audit(1707504112.023:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:41:52.023683 systemd[1]: Stopped target initrd.target. Feb 9 18:41:52.026427 systemd[1]: Stopped target basic.target. Feb 9 18:41:52.027435 systemd[1]: Stopped target ignition-complete.target. Feb 9 18:41:52.028500 systemd[1]: Stopped target ignition-diskful.target. Feb 9 18:41:52.029559 systemd[1]: Stopped target initrd-root-device.target. Feb 9 18:41:52.030720 systemd[1]: Stopped target remote-fs.target. 
Feb 9 18:41:52.031798 systemd[1]: Stopped target remote-fs-pre.target. Feb 9 18:41:52.033111 systemd[1]: Stopped target sysinit.target. Feb 9 18:41:52.034129 systemd[1]: Stopped target local-fs.target. Feb 9 18:41:52.035207 systemd[1]: Stopped target local-fs-pre.target. Feb 9 18:41:52.036282 systemd[1]: Stopped target swap.target. Feb 9 18:41:52.037000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:41:52.037238 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Feb 9 18:41:52.041526 kernel: audit: type=1131 audit(1707504112.037:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:41:52.037346 systemd[1]: Stopped dracut-pre-mount.service. Feb 9 18:41:52.044290 kernel: audit: type=1131 audit(1707504112.041:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:41:52.041000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:41:52.038426 systemd[1]: Stopped target cryptsetup.target. Feb 9 18:41:52.044000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:41:52.041021 systemd[1]: dracut-initqueue.service: Deactivated successfully. Feb 9 18:41:52.041117 systemd[1]: Stopped dracut-initqueue.service. Feb 9 18:41:52.042304 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Feb 9 18:41:52.042395 systemd[1]: Stopped ignition-fetch-offline.service. Feb 9 18:41:52.045167 systemd[1]: Stopped target paths.target. Feb 9 18:41:52.046020 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Feb 9 18:41:52.047290 systemd[1]: Stopped systemd-ask-password-console.path. Feb 9 18:41:52.048547 systemd[1]: Stopped target slices.target. Feb 9 18:41:52.052000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:41:52.049612 systemd[1]: Stopped target sockets.target. Feb 9 18:41:52.053000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:41:52.050600 systemd[1]: iscsid.socket: Deactivated successfully. Feb 9 18:41:52.050665 systemd[1]: Closed iscsid.socket. Feb 9 18:41:52.051869 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Feb 9 18:41:52.051970 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Feb 9 18:41:52.053167 systemd[1]: ignition-files.service: Deactivated successfully. Feb 9 18:41:52.059000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:41:52.053263 systemd[1]: Stopped ignition-files.service. 
Feb 9 18:41:52.060000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:41:52.054829 systemd[1]: Stopping ignition-mount.service... Feb 9 18:41:52.062332 ignition[896]: INFO : Ignition 2.14.0 Feb 9 18:41:52.062332 ignition[896]: INFO : Stage: umount Feb 9 18:41:52.062332 ignition[896]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 9 18:41:52.062332 ignition[896]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 9 18:41:52.063000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:41:52.055876 systemd[1]: Stopping iscsiuio.service... Feb 9 18:41:52.067632 ignition[896]: INFO : umount: umount passed Feb 9 18:41:52.067632 ignition[896]: INFO : Ignition finished successfully Feb 9 18:41:52.058140 systemd[1]: Stopping sysroot-boot.service... Feb 9 18:41:52.058747 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Feb 9 18:41:52.058861 systemd[1]: Stopped systemd-udev-trigger.service. Feb 9 18:41:52.060067 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Feb 9 18:41:52.072000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:41:52.072000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:41:52.060155 systemd[1]: Stopped dracut-pre-trigger.service. Feb 9 18:41:52.073000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:41:52.062724 systemd[1]: iscsiuio.service: Deactivated successfully. Feb 9 18:41:52.074000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:41:52.062811 systemd[1]: Stopped iscsiuio.service. Feb 9 18:41:52.064158 systemd[1]: Stopped target network.target. Feb 9 18:41:52.077000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:41:52.065773 systemd[1]: iscsiuio.socket: Deactivated successfully. Feb 9 18:41:52.078000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:41:52.065809 systemd[1]: Closed iscsiuio.socket. Feb 9 18:41:52.079000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:41:52.067336 systemd[1]: Stopping systemd-networkd.service... Feb 9 18:41:52.080000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:41:52.068959 systemd[1]: Stopping systemd-resolved.service... Feb 9 18:41:52.081000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:41:52.070719 systemd[1]: sysroot-boot.mount: Deactivated successfully. Feb 9 18:41:52.082000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:41:52.071181 systemd[1]: initrd-cleanup.service: Deactivated successfully. Feb 9 18:41:52.071270 systemd[1]: Finished initrd-cleanup.service. Feb 9 18:41:52.072610 systemd[1]: ignition-mount.service: Deactivated successfully. Feb 9 18:41:52.072673 systemd[1]: Stopped ignition-mount.service. Feb 9 18:41:52.086000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:41:52.073854 systemd[1]: sysroot-boot.service: Deactivated successfully. Feb 9 18:41:52.087000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:41:52.073916 systemd[1]: Stopped sysroot-boot.service. Feb 9 18:41:52.089000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:41:52.091000 audit: BPF prog-id=6 op=UNLOAD Feb 9 18:41:52.075404 systemd[1]: ignition-disks.service: Deactivated successfully. Feb 9 18:41:52.075450 systemd[1]: Stopped ignition-disks.service. Feb 9 18:41:52.076358 systemd-networkd[740]: eth0: DHCPv6 lease lost Feb 9 18:41:52.093000 audit: BPF prog-id=9 op=UNLOAD Feb 9 18:41:52.077326 systemd[1]: ignition-kargs.service: Deactivated successfully. Feb 9 18:41:52.077364 systemd[1]: Stopped ignition-kargs.service. Feb 9 18:41:52.078437 systemd[1]: ignition-setup.service: Deactivated successfully. Feb 9 18:41:52.078474 systemd[1]: Stopped ignition-setup.service. Feb 9 18:41:52.079480 systemd[1]: initrd-setup-root.service: Deactivated successfully. Feb 9 18:41:52.079515 systemd[1]: Stopped initrd-setup-root.service. Feb 9 18:41:52.097000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:41:52.080702 systemd[1]: systemd-resolved.service: Deactivated successfully. Feb 9 18:41:52.099000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:41:52.080777 systemd[1]: Stopped systemd-resolved.service. Feb 9 18:41:52.082045 systemd[1]: systemd-networkd.service: Deactivated successfully. Feb 9 18:41:52.082126 systemd[1]: Stopped systemd-networkd.service. Feb 9 18:41:52.102000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:41:52.083270 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Feb 9 18:41:52.103000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:41:52.083296 systemd[1]: Closed systemd-networkd.socket. Feb 9 18:41:52.104000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:41:52.084830 systemd[1]: Stopping network-cleanup.service... Feb 9 18:41:52.085835 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Feb 9 18:41:52.085889 systemd[1]: Stopped parse-ip-for-networkd.service. Feb 9 18:41:52.107000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:41:52.087137 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 9 18:41:52.108000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:41:52.087175 systemd[1]: Stopped systemd-sysctl.service. Feb 9 18:41:52.109000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:41:52.088901 systemd[1]: systemd-modules-load.service: Deactivated successfully. Feb 9 18:41:52.088941 systemd[1]: Stopped systemd-modules-load.service. Feb 9 18:41:52.089807 systemd[1]: Stopping systemd-udevd.service... Feb 9 18:41:52.112000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:41:52.112000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:41:52.093522 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Feb 9 18:41:52.097001 systemd[1]: systemd-udevd.service: Deactivated successfully. Feb 9 18:41:52.097122 systemd[1]: Stopped systemd-udevd.service. Feb 9 18:41:52.098489 systemd[1]: network-cleanup.service: Deactivated successfully. Feb 9 18:41:52.098567 systemd[1]: Stopped network-cleanup.service. Feb 9 18:41:52.099412 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Feb 9 18:41:52.099443 systemd[1]: Closed systemd-udevd-control.socket. Feb 9 18:41:52.100488 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Feb 9 18:41:52.100518 systemd[1]: Closed systemd-udevd-kernel.socket. Feb 9 18:41:52.101465 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Feb 9 18:41:52.101503 systemd[1]: Stopped dracut-pre-udev.service. Feb 9 18:41:52.120000 audit: BPF prog-id=8 op=UNLOAD Feb 9 18:41:52.120000 audit: BPF prog-id=7 op=UNLOAD Feb 9 18:41:52.102547 systemd[1]: dracut-cmdline.service: Deactivated successfully. Feb 9 18:41:52.102580 systemd[1]: Stopped dracut-cmdline.service. Feb 9 18:41:52.103559 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. 
Feb 9 18:41:52.122000 audit: BPF prog-id=5 op=UNLOAD Feb 9 18:41:52.122000 audit: BPF prog-id=4 op=UNLOAD Feb 9 18:41:52.122000 audit: BPF prog-id=3 op=UNLOAD Feb 9 18:41:52.103593 systemd[1]: Stopped dracut-cmdline-ask.service. Feb 9 18:41:52.105341 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Feb 9 18:41:52.106278 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Feb 9 18:41:52.106331 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service. Feb 9 18:41:52.108006 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Feb 9 18:41:52.108043 systemd[1]: Stopped kmod-static-nodes.service. Feb 9 18:41:52.108820 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 9 18:41:52.108858 systemd[1]: Stopped systemd-vconsole-setup.service. Feb 9 18:41:52.110821 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Feb 9 18:41:52.111206 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Feb 9 18:41:52.111294 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Feb 9 18:41:52.112677 systemd[1]: Reached target initrd-switch-root.target. Feb 9 18:41:52.114509 systemd[1]: Starting initrd-switch-root.service... Feb 9 18:41:52.120550 systemd[1]: Switching root. Feb 9 18:41:52.133538 iscsid[747]: iscsid shutting down. Feb 9 18:41:52.134005 systemd-journald[290]: Journal stopped Feb 9 18:41:54.224217 systemd-journald[290]: Received SIGTERM from PID 1 (systemd). Feb 9 18:41:54.224289 kernel: SELinux: Class mctp_socket not defined in policy. Feb 9 18:41:54.224305 kernel: SELinux: Class anon_inode not defined in policy. Feb 9 18:41:54.224318 kernel: SELinux: the above unknown classes and permissions will be allowed Feb 9 18:41:54.224327 kernel: SELinux: policy capability network_peer_controls=1 Feb 9 18:41:54.224337 kernel: SELinux: policy capability open_perms=1 Feb 9 18:41:54.224347 kernel: SELinux: policy capability extended_socket_class=1 Feb 9 18:41:54.224356 kernel: SELinux: policy capability always_check_network=0 Feb 9 18:41:54.224369 kernel: SELinux: policy capability cgroup_seclabel=1 Feb 9 18:41:54.224378 kernel: SELinux: policy capability nnp_nosuid_transition=1 Feb 9 18:41:54.224387 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Feb 9 18:41:54.224396 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Feb 9 18:41:54.224407 systemd[1]: Successfully loaded SELinux policy in 34.059ms. Feb 9 18:41:54.224423 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 6.582ms. Feb 9 18:41:54.224434 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Feb 9 18:41:54.224446 systemd[1]: Detected virtualization kvm. Feb 9 18:41:54.224456 systemd[1]: Detected architecture arm64. Feb 9 18:41:54.224466 systemd[1]: Detected first boot. Feb 9 18:41:54.224477 systemd[1]: Initializing machine ID from VM UUID. Feb 9 18:41:54.224510 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). Feb 9 18:41:54.224523 systemd[1]: Populated /etc with preset unit settings. Feb 9 18:41:54.224534 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Feb 9 18:41:54.224545 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 9 18:41:54.224556 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 9 18:41:54.224567 systemd[1]: Queued start job for default target multi-user.target. Feb 9 18:41:54.224578 systemd[1]: Unnecessary job was removed for dev-vda6.device. Feb 9 18:41:54.224590 systemd[1]: Created slice system-addon\x2dconfig.slice. Feb 9 18:41:54.224600 systemd[1]: Created slice system-addon\x2drun.slice. Feb 9 18:41:54.224610 systemd[1]: Created slice system-getty.slice. Feb 9 18:41:54.224620 systemd[1]: Created slice system-modprobe.slice. Feb 9 18:41:54.224630 systemd[1]: Created slice system-serial\x2dgetty.slice. Feb 9 18:41:54.224640 systemd[1]: Created slice system-system\x2dcloudinit.slice. Feb 9 18:41:54.224651 systemd[1]: Created slice system-systemd\x2dfsck.slice. Feb 9 18:41:54.224661 systemd[1]: Created slice user.slice. Feb 9 18:41:54.224671 systemd[1]: Started systemd-ask-password-console.path. Feb 9 18:41:54.224682 systemd[1]: Started systemd-ask-password-wall.path. Feb 9 18:41:54.224693 systemd[1]: Set up automount boot.automount. Feb 9 18:41:54.224703 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Feb 9 18:41:54.224713 systemd[1]: Reached target integritysetup.target. Feb 9 18:41:54.224723 systemd[1]: Reached target remote-cryptsetup.target. Feb 9 18:41:54.224734 systemd[1]: Reached target remote-fs.target. Feb 9 18:41:54.224745 systemd[1]: Reached target slices.target. Feb 9 18:41:54.224755 systemd[1]: Reached target swap.target. Feb 9 18:41:54.224767 systemd[1]: Reached target torcx.target. Feb 9 18:41:54.224777 systemd[1]: Reached target veritysetup.target. Feb 9 18:41:54.224787 systemd[1]: Listening on systemd-coredump.socket. Feb 9 18:41:54.224797 systemd[1]: Listening on systemd-initctl.socket. Feb 9 18:41:54.224808 systemd[1]: Listening on systemd-journald-audit.socket. Feb 9 18:41:54.224818 systemd[1]: Listening on systemd-journald-dev-log.socket. Feb 9 18:41:54.224828 systemd[1]: Listening on systemd-journald.socket. Feb 9 18:41:54.224842 systemd[1]: Listening on systemd-networkd.socket. Feb 9 18:41:54.224852 systemd[1]: Listening on systemd-udevd-control.socket. Feb 9 18:41:54.224862 systemd[1]: Listening on systemd-udevd-kernel.socket. Feb 9 18:41:54.224874 systemd[1]: Listening on systemd-userdbd.socket. Feb 9 18:41:54.224884 systemd[1]: Mounting dev-hugepages.mount... Feb 9 18:41:54.224894 systemd[1]: Mounting dev-mqueue.mount... Feb 9 18:41:54.224904 systemd[1]: Mounting media.mount... Feb 9 18:41:54.224914 systemd[1]: Mounting sys-kernel-debug.mount... Feb 9 18:41:54.224924 systemd[1]: Mounting sys-kernel-tracing.mount... Feb 9 18:41:54.224934 systemd[1]: Mounting tmp.mount... Feb 9 18:41:54.224950 systemd[1]: Starting flatcar-tmpfiles.service... Feb 9 18:41:54.224961 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Feb 9 18:41:54.224973 systemd[1]: Starting kmod-static-nodes.service... Feb 9 18:41:54.224984 systemd[1]: Starting modprobe@configfs.service... Feb 9 18:41:54.224994 systemd[1]: Starting modprobe@dm_mod.service... Feb 9 18:41:54.225004 systemd[1]: Starting modprobe@drm.service...
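On the locksmithd.service warnings above: CPUShares= (cgroup v1, range 2-262144, default 1024) and CPUWeight= (cgroup v2, range 1-10000, default 100) express the same relative CPU share, and systemd converts between them with a clamped proportional mapping. A sketch of that conversion, assuming the ranges and defaults documented in systemd.resource-control(5):

```python
def cpu_shares_to_weight(shares: int) -> int:
    """Approximate systemd's mapping of legacy CPUShares= onto CPUWeight=:
    proportional around the defaults (1024 -> 100), clamped to 1..10000."""
    return max(1, min(10000, shares * 100 // 1024))

for shares in (2, 512, 1024, 4096):
    print(f"CPUShares={shares} -> CPUWeight={cpu_shares_to_weight(shares)}")
```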
Feb 9 18:41:54.225015 systemd[1]: Starting modprobe@efi_pstore.service... Feb 9 18:41:54.225025 systemd[1]: Starting modprobe@fuse.service... Feb 9 18:41:54.225035 systemd[1]: Starting modprobe@loop.service... Feb 9 18:41:54.225045 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Feb 9 18:41:54.225055 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Feb 9 18:41:54.225067 systemd[1]: (This warning is only shown for the first unit using IP firewalling.) Feb 9 18:41:54.225077 systemd[1]: Starting systemd-journald.service... Feb 9 18:41:54.225087 systemd[1]: Starting systemd-modules-load.service... Feb 9 18:41:54.225097 systemd[1]: Starting systemd-network-generator.service... Feb 9 18:41:54.225107 systemd[1]: Starting systemd-remount-fs.service... Feb 9 18:41:54.225117 systemd[1]: Starting systemd-udev-trigger.service... Feb 9 18:41:54.225128 systemd[1]: Mounted dev-hugepages.mount. Feb 9 18:41:54.225137 kernel: fuse: init (API version 7.34) Feb 9 18:41:54.225147 systemd[1]: Mounted dev-mqueue.mount. Feb 9 18:41:54.225159 kernel: loop: module loaded Feb 9 18:41:54.225174 systemd[1]: Mounted media.mount. Feb 9 18:41:54.225187 systemd[1]: Mounted sys-kernel-debug.mount. Feb 9 18:41:54.225199 systemd[1]: Mounted sys-kernel-tracing.mount. Feb 9 18:41:54.225208 systemd[1]: Mounted tmp.mount. Feb 9 18:41:54.225218 systemd[1]: Finished kmod-static-nodes.service. Feb 9 18:41:54.225229 systemd[1]: modprobe@configfs.service: Deactivated successfully. Feb 9 18:41:54.225239 systemd[1]: Finished modprobe@configfs.service. Feb 9 18:41:54.225256 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 9 18:41:54.225269 systemd[1]: Finished modprobe@dm_mod.service. Feb 9 18:41:54.225279 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 9 18:41:54.225290 systemd[1]: Finished modprobe@drm.service. Feb 9 18:41:54.225300 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 9 18:41:54.225310 systemd[1]: Finished modprobe@efi_pstore.service. Feb 9 18:41:54.225320 systemd[1]: modprobe@fuse.service: Deactivated successfully. Feb 9 18:41:54.225330 systemd[1]: Finished modprobe@fuse.service. Feb 9 18:41:54.225340 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 9 18:41:54.225352 systemd-journald[1029]: Journal started Feb 9 18:41:54.225397 systemd-journald[1029]: Runtime Journal (/run/log/journal/4f5e3c6a78c64335ac0814be1c25ee5d) is 6.0M, max 48.7M, 42.6M free. Feb 9 18:41:54.136000 audit[1]: AVC avc: denied { audit_read } for pid=1 comm="systemd" capability=37 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Feb 9 18:41:54.136000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Feb 9 18:41:54.212000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:41:54.215000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:41:54.215000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:41:54.217000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:41:54.217000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:41:54.219000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:41:54.219000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:41:54.222000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:41:54.222000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:41:54.223000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Feb 9 18:41:54.223000 audit[1029]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=60 a0=6 a1=ffffd3f1a680 a2=4000 a3=1 items=0 ppid=1 pid=1029 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:41:54.223000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Feb 9 18:41:54.224000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:41:54.224000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:41:54.226941 systemd[1]: Finished modprobe@loop.service. Feb 9 18:41:54.226000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:41:54.226000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:41:54.229955 systemd[1]: Started systemd-journald.service. Feb 9 18:41:54.228000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:41:54.229340 systemd[1]: Finished systemd-modules-load.service. Feb 9 18:41:54.229000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:41:54.230570 systemd[1]: Finished systemd-network-generator.service. Feb 9 18:41:54.231000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:41:54.232000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:41:54.231892 systemd[1]: Finished systemd-remount-fs.service. Feb 9 18:41:54.232922 systemd[1]: Reached target network-pre.target. Feb 9 18:41:54.234825 systemd[1]: Mounting sys-fs-fuse-connections.mount... Feb 9 18:41:54.236487 systemd[1]: Mounting sys-kernel-config.mount... Feb 9 18:41:54.237055 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Feb 9 18:41:54.238656 systemd[1]: Starting systemd-hwdb-update.service... Feb 9 18:41:54.240552 systemd[1]: Starting systemd-journal-flush.service... Feb 9 18:41:54.241384 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 9 18:41:54.242448 systemd[1]: Starting systemd-random-seed.service... Feb 9 18:41:54.243178 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Feb 9 18:41:54.248000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:41:54.248856 systemd-journald[1029]: Time spent on flushing to /var/log/journal/4f5e3c6a78c64335ac0814be1c25ee5d is 12.786ms for 965 entries. Feb 9 18:41:54.248856 systemd-journald[1029]: System Journal (/var/log/journal/4f5e3c6a78c64335ac0814be1c25ee5d) is 8.0M, max 195.6M, 187.6M free. Feb 9 18:41:54.280027 systemd-journald[1029]: Received client request to flush runtime journal. Feb 9 18:41:54.257000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:41:54.263000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:41:54.277000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:41:54.278000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:41:54.244221 systemd[1]: Starting systemd-sysctl.service... Feb 9 18:41:54.247675 systemd[1]: Finished flatcar-tmpfiles.service.
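A quick rate estimate from the journald flush line above (plain arithmetic on the logged figures):

```python
entries, flush_ms = 965, 12.786  # from the systemd-journald[1029] line above
print(f"{entries / (flush_ms / 1e3):,.0f} entries/s sustained during the flush")
# -> 75,473 entries/s while flushing into the 8.0M system journal
```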
Feb 9 18:41:54.248533 systemd[1]: Mounted sys-fs-fuse-connections.mount. Feb 9 18:41:54.250176 systemd[1]: Mounted sys-kernel-config.mount. Feb 9 18:41:54.252203 systemd[1]: Starting systemd-sysusers.service... Feb 9 18:41:54.256812 systemd[1]: Finished systemd-random-seed.service. Feb 9 18:41:54.258992 systemd[1]: Reached target first-boot-complete.target. Feb 9 18:41:54.262567 systemd[1]: Finished systemd-sysctl.service. Feb 9 18:41:54.277078 systemd[1]: Finished systemd-sysusers.service. Feb 9 18:41:54.278149 systemd[1]: Finished systemd-udev-trigger.service. Feb 9 18:41:54.280011 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Feb 9 18:41:54.283000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:41:54.281709 systemd[1]: Starting systemd-udev-settle.service... Feb 9 18:41:54.282967 systemd[1]: Finished systemd-journal-flush.service. Feb 9 18:41:54.287889 udevadm[1089]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Feb 9 18:41:54.299332 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Feb 9 18:41:54.299000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:41:54.591416 systemd[1]: Finished systemd-hwdb-update.service. Feb 9 18:41:54.591000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:41:54.593518 systemd[1]: Starting systemd-udevd.service... Feb 9 18:41:54.610284 systemd-udevd[1092]: Using default interface naming scheme 'v252'. Feb 9 18:41:54.623000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:41:54.622962 systemd[1]: Started systemd-udevd.service. Feb 9 18:41:54.625243 systemd[1]: Starting systemd-networkd.service... Feb 9 18:41:54.635037 systemd[1]: Starting systemd-userdbd.service... Feb 9 18:41:54.639758 systemd[1]: Found device dev-ttyAMA0.device. Feb 9 18:41:54.686360 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Feb 9 18:41:54.697240 systemd[1]: Started systemd-userdbd.service. Feb 9 18:41:54.697000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:41:54.699649 systemd[1]: Finished systemd-udev-settle.service. Feb 9 18:41:54.700000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:41:54.701676 systemd[1]: Starting lvm2-activation-early.service... Feb 9 18:41:54.725577 lvm[1125]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. 
Feb 9 18:41:54.745179 systemd-networkd[1101]: lo: Link UP Feb 9 18:41:54.745188 systemd-networkd[1101]: lo: Gained carrier Feb 9 18:41:54.745544 systemd-networkd[1101]: Enumeration completed Feb 9 18:41:54.745650 systemd[1]: Started systemd-networkd.service. Feb 9 18:41:54.746000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:41:54.746587 systemd-networkd[1101]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 9 18:41:54.747598 systemd-networkd[1101]: eth0: Link UP Feb 9 18:41:54.747609 systemd-networkd[1101]: eth0: Gained carrier Feb 9 18:41:54.766016 systemd[1]: Finished lvm2-activation-early.service. Feb 9 18:41:54.766000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:41:54.767000 systemd[1]: Reached target cryptsetup.target. Feb 9 18:41:54.768626 systemd[1]: Starting lvm2-activation.service... Feb 9 18:41:54.772120 lvm[1128]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 9 18:41:54.776347 systemd-networkd[1101]: eth0: DHCPv4 address 10.0.0.121/16, gateway 10.0.0.1 acquired from 10.0.0.1 Feb 9 18:41:54.804175 systemd[1]: Finished lvm2-activation.service. Feb 9 18:41:54.804000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:41:54.804900 systemd[1]: Reached target local-fs-pre.target. Feb 9 18:41:54.805525 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Feb 9 18:41:54.805551 systemd[1]: Reached target local-fs.target. Feb 9 18:41:54.806104 systemd[1]: Reached target machines.target. Feb 9 18:41:54.807768 systemd[1]: Starting ldconfig.service... Feb 9 18:41:54.808562 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Feb 9 18:41:54.808616 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 9 18:41:54.809707 systemd[1]: Starting systemd-boot-update.service... Feb 9 18:41:54.811495 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Feb 9 18:41:54.813533 systemd[1]: Starting systemd-machine-id-commit.service... Feb 9 18:41:54.814425 systemd[1]: systemd-sysext.service was skipped because no trigger condition checks were met. Feb 9 18:41:54.814467 systemd[1]: ensure-sysext.service was skipped because no trigger condition checks were met. Feb 9 18:41:54.815481 systemd[1]: Starting systemd-tmpfiles-setup.service... Feb 9 18:41:54.816595 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1131 (bootctl) Feb 9 18:41:54.818978 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service...
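The DHCPv4 lease above (10.0.0.121/16 with gateway 10.0.0.1) can be sanity-checked with the standard-library ipaddress module; a small sketch:

```python
import ipaddress

iface = ipaddress.ip_interface("10.0.0.121/16")           # address from the lease above
print(iface.network)                                      # 10.0.0.0/16
print(ipaddress.ip_address("10.0.0.1") in iface.network)  # gateway is on-link: True
```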
Feb 9 18:41:54.829000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:41:54.828235 systemd-tmpfiles[1134]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Feb 9 18:41:54.828469 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Feb 9 18:41:54.829659 systemd-tmpfiles[1134]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Feb 9 18:41:54.831096 systemd-tmpfiles[1134]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Feb 9 18:41:54.860777 systemd-fsck[1141]: fsck.fat 4.2 (2021-01-31) Feb 9 18:41:54.860777 systemd-fsck[1141]: /dev/vda1: 236 files, 113719/258078 clusters Feb 9 18:41:54.862133 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Feb 9 18:41:54.862000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:41:54.864277 systemd[1]: Mounting boot.mount... Feb 9 18:41:54.937826 systemd[1]: Mounted boot.mount. Feb 9 18:41:54.940000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:41:54.939443 systemd[1]: Finished systemd-machine-id-commit.service. Feb 9 18:41:54.946000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:41:54.946316 systemd[1]: Finished systemd-boot-update.service. Feb 9 18:41:55.005615 ldconfig[1130]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Feb 9 18:41:55.009000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:41:55.009557 systemd[1]: Finished ldconfig.service. Feb 9 18:41:55.028947 systemd[1]: Finished systemd-tmpfiles-setup.service. Feb 9 18:41:55.029000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:41:55.030842 systemd[1]: Starting audit-rules.service... Feb 9 18:41:55.032394 systemd[1]: Starting clean-ca-certificates.service... Feb 9 18:41:55.034038 systemd[1]: Starting systemd-journal-catalog-update.service... Feb 9 18:41:55.036242 systemd[1]: Starting systemd-resolved.service... Feb 9 18:41:55.038466 systemd[1]: Starting systemd-timesyncd.service... Feb 9 18:41:55.040693 systemd[1]: Starting systemd-update-utmp.service... Feb 9 18:41:55.042328 systemd[1]: Finished clean-ca-certificates.service. Feb 9 18:41:55.042000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:41:55.043356 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
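The fsck.fat result above reports cluster usage for the EFI system partition (/dev/vda1). A rough conversion to bytes, assuming a 4 KiB cluster size (fsck.fat does not print the cluster size here, so that figure is an assumption):

```python
used, total = 113719, 258078  # clusters, from the fsck.fat line above
cluster = 4096                # assumed cluster size in bytes (not in the log)
print(f"{used / total:.1%} of clusters in use")  # 44.1%
print(f"~{used * cluster / 2**20:.0f} MiB used of ~{total * cluster / 2**20:.0f} MiB")
```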
Feb 9 18:41:55.046000 audit[1163]: SYSTEM_BOOT pid=1163 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Feb 9 18:41:55.051624 systemd[1]: Finished systemd-journal-catalog-update.service. Feb 9 18:41:55.052000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:41:55.053000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:41:55.052951 systemd[1]: Finished systemd-update-utmp.service. Feb 9 18:41:55.056079 systemd[1]: Starting systemd-update-done.service... Feb 9 18:41:55.061387 systemd[1]: Finished systemd-update-done.service. Feb 9 18:41:55.061000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-done comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:41:55.074000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Feb 9 18:41:55.074000 audit[1177]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffd3a4da80 a2=420 a3=0 items=0 ppid=1151 pid=1177 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:41:55.074000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Feb 9 18:41:55.076264 augenrules[1177]: No rules Feb 9 18:41:55.076942 systemd[1]: Finished audit-rules.service. Feb 9 18:41:55.106765 systemd[1]: Started systemd-timesyncd.service. Feb 9 18:41:55.107835 systemd[1]: Reached target time-set.target. Feb 9 18:41:55.107977 systemd-timesyncd[1161]: Contacted time server 10.0.0.1:123 (10.0.0.1). Feb 9 18:41:55.108181 systemd-resolved[1156]: Positive Trust Anchors: Feb 9 18:41:55.108194 systemd-resolved[1156]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 9 18:41:55.108221 systemd-resolved[1156]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Feb 9 18:41:55.108516 systemd-timesyncd[1161]: Initial clock synchronization to Fri 2024-02-09 18:41:55.003097 UTC. Feb 9 18:41:55.117336 systemd-resolved[1156]: Defaulting to hostname 'linux'. Feb 9 18:41:55.118662 systemd[1]: Started systemd-resolved.service. Feb 9 18:41:55.119421 systemd[1]: Reached target network.target. Feb 9 18:41:55.120125 systemd[1]: Reached target nss-lookup.target. Feb 9 18:41:55.120874 systemd[1]: Reached target sysinit.target. Feb 9 18:41:55.121642 systemd[1]: Started motdgen.path. Feb 9 18:41:55.122307 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. 
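The systemd-timesyncd entries above suggest the clock was stepped slightly backwards at first synchronization: the journal stamps the message at 18:41:55.108516, while the time synchronized to is 18:41:55.003097. A rough estimate of the step, assuming the journal stamp reflects the pre-step clock:

```python
from datetime import datetime

local = datetime.fromisoformat("2024-02-09 18:41:55.108516")  # journal stamp
ntp   = datetime.fromisoformat("2024-02-09 18:41:55.003097")  # NTP time stepped to
print(f"clock stepped back by ~{(local - ntp).total_seconds() * 1e3:.1f} ms")
```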
Feb 9 18:41:55.123404 systemd[1]: Started logrotate.timer. Feb 9 18:41:55.124152 systemd[1]: Started mdadm.timer. Feb 9 18:41:55.124771 systemd[1]: Started systemd-tmpfiles-clean.timer. Feb 9 18:41:55.125517 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Feb 9 18:41:55.125546 systemd[1]: Reached target paths.target. Feb 9 18:41:55.126199 systemd[1]: Reached target timers.target. Feb 9 18:41:55.127154 systemd[1]: Listening on dbus.socket. Feb 9 18:41:55.128831 systemd[1]: Starting docker.socket... Feb 9 18:41:55.130340 systemd[1]: Listening on sshd.socket. Feb 9 18:41:55.131105 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 9 18:41:55.131428 systemd[1]: Listening on docker.socket. Feb 9 18:41:55.132149 systemd[1]: Reached target sockets.target. Feb 9 18:41:55.132891 systemd[1]: Reached target basic.target. Feb 9 18:41:55.133682 systemd[1]: System is tainted: cgroupsv1 Feb 9 18:41:55.133728 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Feb 9 18:41:55.133745 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Feb 9 18:41:55.134700 systemd[1]: Starting containerd.service... Feb 9 18:41:55.136336 systemd[1]: Starting dbus.service... Feb 9 18:41:55.137979 systemd[1]: Starting enable-oem-cloudinit.service... Feb 9 18:41:55.139848 systemd[1]: Starting extend-filesystems.service... Feb 9 18:41:55.140598 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Feb 9 18:41:55.141843 systemd[1]: Starting motdgen.service... Feb 9 18:41:55.143777 systemd[1]: Starting prepare-cni-plugins.service... Feb 9 18:41:55.145769 systemd[1]: Starting prepare-critools.service... Feb 9 18:41:55.147819 systemd[1]: Starting prepare-helm.service... Feb 9 18:41:55.149641 systemd[1]: Starting ssh-key-proc-cmdline.service... Feb 9 18:41:55.151716 systemd[1]: Starting sshd-keygen.service... Feb 9 18:41:55.154381 systemd[1]: Starting systemd-logind.service... Feb 9 18:41:55.155042 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 9 18:41:55.155145 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Feb 9 18:41:55.156458 systemd[1]: Starting update-engine.service... Feb 9 18:41:55.158237 systemd[1]: Starting update-ssh-keys-after-ignition.service... Feb 9 18:41:55.163119 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Feb 9 18:41:55.164130 systemd[1]: Finished ssh-key-proc-cmdline.service. Feb 9 18:41:55.165853 jq[1189]: false Feb 9 18:41:55.176179 jq[1209]: true Feb 9 18:41:55.177800 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Feb 9 18:41:55.178042 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Feb 9 18:41:55.182184 systemd[1]: motdgen.service: Deactivated successfully. Feb 9 18:41:55.182461 systemd[1]: Finished motdgen.service. 
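The "System is tainted: cgroupsv1" line above is likely the consequence of Ignition writing the /etc/flatcar-cgroupv1 flag file earlier (op(6)), which Flatcar uses to select the legacy hierarchy. A rough Python probe for which hierarchy is mounted (hybrid setups complicate this; the check below only looks at the root of /sys/fs/cgroup):

```python
from pathlib import Path

def cgroup_mode() -> str:
    """cgroup.controllers exists at the root of /sys/fs/cgroup only on the
    unified (v2) hierarchy; its absence suggests legacy/hybrid (v1)."""
    root = Path("/sys/fs/cgroup")
    if (root / "cgroup.controllers").is_file():
        return "unified (cgroup v2)"
    return "legacy/hybrid (cgroup v1)" if root.is_dir() else "none mounted"

print(cgroup_mode())
```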
Feb 9 18:41:55.186208 tar[1212]: crictl Feb 9 18:41:55.193516 jq[1221]: true Feb 9 18:41:55.198852 tar[1211]: ./ Feb 9 18:41:55.198852 tar[1211]: ./macvlan Feb 9 18:41:55.200778 tar[1213]: linux-arm64/helm Feb 9 18:41:55.200279 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Feb 9 18:41:55.204114 extend-filesystems[1190]: Found vda Feb 9 18:41:55.204844 extend-filesystems[1190]: Found vda1 Feb 9 18:41:55.204844 extend-filesystems[1190]: Found vda2 Feb 9 18:41:55.204844 extend-filesystems[1190]: Found vda3 Feb 9 18:41:55.204844 extend-filesystems[1190]: Found usr Feb 9 18:41:55.204844 extend-filesystems[1190]: Found vda4 Feb 9 18:41:55.210908 extend-filesystems[1190]: Found vda6 Feb 9 18:41:55.211573 extend-filesystems[1190]: Found vda7 Feb 9 18:41:55.211573 extend-filesystems[1190]: Found vda9 Feb 9 18:41:55.211573 extend-filesystems[1190]: Checking size of /dev/vda9 Feb 9 18:41:55.214899 dbus-daemon[1188]: [system] SELinux support is enabled Feb 9 18:41:55.215067 systemd[1]: Started dbus.service. Feb 9 18:41:55.217368 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Feb 9 18:41:55.217389 systemd[1]: Reached target system-config.target. Feb 9 18:41:55.218112 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Feb 9 18:41:55.218125 systemd[1]: Reached target user-config.target. Feb 9 18:41:55.234627 extend-filesystems[1190]: Resized partition /dev/vda9 Feb 9 18:41:55.237597 bash[1247]: Updated "/home/core/.ssh/authorized_keys" Feb 9 18:41:55.235842 systemd[1]: Finished update-ssh-keys-after-ignition.service. Feb 9 18:41:55.239913 extend-filesystems[1254]: resize2fs 1.46.5 (30-Dec-2021) Feb 9 18:41:55.255151 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Feb 9 18:41:55.260789 systemd-logind[1205]: Watching system buttons on /dev/input/event0 (Power Button) Feb 9 18:41:55.261173 systemd-logind[1205]: New seat seat0. Feb 9 18:41:55.267407 systemd[1]: Started systemd-logind.service. Feb 9 18:41:55.273176 update_engine[1207]: I0209 18:41:55.272984 1207 main.cc:92] Flatcar Update Engine starting Feb 9 18:41:55.277069 systemd[1]: Started update-engine.service. Feb 9 18:41:55.280455 tar[1211]: ./static Feb 9 18:41:55.279582 systemd[1]: Started locksmithd.service. Feb 9 18:41:55.280824 update_engine[1207]: I0209 18:41:55.280802 1207 update_check_scheduler.cc:74] Next update check in 5m17s Feb 9 18:41:55.284265 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Feb 9 18:41:55.294992 extend-filesystems[1254]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Feb 9 18:41:55.294992 extend-filesystems[1254]: old_desc_blocks = 1, new_desc_blocks = 1 Feb 9 18:41:55.294992 extend-filesystems[1254]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Feb 9 18:41:55.297913 extend-filesystems[1190]: Resized filesystem in /dev/vda9 Feb 9 18:41:55.298826 systemd[1]: extend-filesystems.service: Deactivated successfully. Feb 9 18:41:55.299062 systemd[1]: Finished extend-filesystems.service. 
Feb 9 18:41:55.316948 tar[1211]: ./vlan Feb 9 18:41:55.347687 tar[1211]: ./portmap Feb 9 18:41:55.354432 locksmithd[1256]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Feb 9 18:41:55.373754 tar[1211]: ./host-local Feb 9 18:41:55.375649 env[1219]: time="2024-02-09T18:41:55.375601960Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Feb 9 18:41:55.396681 tar[1211]: ./vrf Feb 9 18:41:55.422094 tar[1211]: ./bridge Feb 9 18:41:55.427252 env[1219]: time="2024-02-09T18:41:55.427208600Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Feb 9 18:41:55.427391 env[1219]: time="2024-02-09T18:41:55.427370720Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Feb 9 18:41:55.433298 env[1219]: time="2024-02-09T18:41:55.433239960Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.148-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Feb 9 18:41:55.433298 env[1219]: time="2024-02-09T18:41:55.433296160Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Feb 9 18:41:55.433579 env[1219]: time="2024-02-09T18:41:55.433555200Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 9 18:41:55.433579 env[1219]: time="2024-02-09T18:41:55.433577560Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Feb 9 18:41:55.433642 env[1219]: time="2024-02-09T18:41:55.433592120Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Feb 9 18:41:55.433642 env[1219]: time="2024-02-09T18:41:55.433602560Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Feb 9 18:41:55.433684 env[1219]: time="2024-02-09T18:41:55.433674680Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Feb 9 18:41:55.433966 env[1219]: time="2024-02-09T18:41:55.433943040Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Feb 9 18:41:55.434114 env[1219]: time="2024-02-09T18:41:55.434092040Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Feb 9 18:41:55.434151 env[1219]: time="2024-02-09T18:41:55.434112480Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Feb 9 18:41:55.434186 env[1219]: time="2024-02-09T18:41:55.434166760Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Feb 9 18:41:55.434186 env[1219]: time="2024-02-09T18:41:55.434184400Z" level=info msg="metadata content store policy set" policy=shared Feb 9 18:41:55.439038 env[1219]: time="2024-02-09T18:41:55.439010640Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Feb 9 18:41:55.439076 env[1219]: time="2024-02-09T18:41:55.439047840Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Feb 9 18:41:55.439076 env[1219]: time="2024-02-09T18:41:55.439062960Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Feb 9 18:41:55.439114 env[1219]: time="2024-02-09T18:41:55.439095880Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Feb 9 18:41:55.439172 env[1219]: time="2024-02-09T18:41:55.439157080Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Feb 9 18:41:55.439200 env[1219]: time="2024-02-09T18:41:55.439174680Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Feb 9 18:41:55.439200 env[1219]: time="2024-02-09T18:41:55.439188640Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Feb 9 18:41:55.439569 env[1219]: time="2024-02-09T18:41:55.439548160Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Feb 9 18:41:55.439599 env[1219]: time="2024-02-09T18:41:55.439573520Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Feb 9 18:41:55.439599 env[1219]: time="2024-02-09T18:41:55.439588920Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Feb 9 18:41:55.439636 env[1219]: time="2024-02-09T18:41:55.439602000Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Feb 9 18:41:55.439636 env[1219]: time="2024-02-09T18:41:55.439616160Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Feb 9 18:41:55.439756 env[1219]: time="2024-02-09T18:41:55.439736840Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Feb 9 18:41:55.439835 env[1219]: time="2024-02-09T18:41:55.439817600Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Feb 9 18:41:55.440155 env[1219]: time="2024-02-09T18:41:55.440133240Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Feb 9 18:41:55.440185 env[1219]: time="2024-02-09T18:41:55.440166920Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Feb 9 18:41:55.440206 env[1219]: time="2024-02-09T18:41:55.440182960Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Feb 9 18:41:55.440315 env[1219]: time="2024-02-09T18:41:55.440300080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Feb 9 18:41:55.440344 env[1219]: time="2024-02-09T18:41:55.440317840Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Feb 9 18:41:55.440344 env[1219]: time="2024-02-09T18:41:55.440330720Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Feb 9 18:41:55.440344 env[1219]: time="2024-02-09T18:41:55.440341400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Feb 9 18:41:55.440410 env[1219]: time="2024-02-09T18:41:55.440354080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Feb 9 18:41:55.440410 env[1219]: time="2024-02-09T18:41:55.440366760Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Feb 9 18:41:55.440410 env[1219]: time="2024-02-09T18:41:55.440377920Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Feb 9 18:41:55.440410 env[1219]: time="2024-02-09T18:41:55.440395320Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Feb 9 18:41:55.440410 env[1219]: time="2024-02-09T18:41:55.440407840Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Feb 9 18:41:55.440549 env[1219]: time="2024-02-09T18:41:55.440527240Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Feb 9 18:41:55.440549 env[1219]: time="2024-02-09T18:41:55.440541920Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Feb 9 18:41:55.440588 env[1219]: time="2024-02-09T18:41:55.440553880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Feb 9 18:41:55.440588 env[1219]: time="2024-02-09T18:41:55.440565400Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Feb 9 18:41:55.440588 env[1219]: time="2024-02-09T18:41:55.440580080Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Feb 9 18:41:55.440647 env[1219]: time="2024-02-09T18:41:55.440591240Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Feb 9 18:41:55.440647 env[1219]: time="2024-02-09T18:41:55.440607600Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin"
Feb 9 18:41:55.440647 env[1219]: time="2024-02-09T18:41:55.440640040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Feb 9 18:41:55.440898 env[1219]: time="2024-02-09T18:41:55.440840960Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Feb 9 18:41:55.444590 env[1219]: time="2024-02-09T18:41:55.440905680Z" level=info msg="Connect containerd service" Feb 9 18:41:55.444590 env[1219]: time="2024-02-09T18:41:55.440955520Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Feb 9 18:41:55.444590 env[1219]: time="2024-02-09T18:41:55.441685840Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 9 18:41:55.444590 env[1219]: time="2024-02-09T18:41:55.442057200Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Feb 9 18:41:55.444590 env[1219]: time="2024-02-09T18:41:55.442099960Z" level=info msg=serving... address=/run/containerd/containerd.sock Feb 9 18:41:55.444590 env[1219]: time="2024-02-09T18:41:55.442141640Z" level=info msg="containerd successfully booted in 0.067184s" Feb 9 18:41:55.444590 env[1219]: time="2024-02-09T18:41:55.443205000Z" level=info msg="Start subscribing containerd event" Feb 9 18:41:55.443466 systemd[1]: Started containerd.service.
Feb 9 18:41:55.455288 tar[1211]: ./tuning Feb 9 18:41:55.462027 env[1219]: time="2024-02-09T18:41:55.461986120Z" level=info msg="Start recovering state" Feb 9 18:41:55.462124 env[1219]: time="2024-02-09T18:41:55.462108040Z" level=info msg="Start event monitor" Feb 9 18:41:55.462154 env[1219]: time="2024-02-09T18:41:55.462145640Z" level=info msg="Start snapshots syncer" Feb 9 18:41:55.462175 env[1219]: time="2024-02-09T18:41:55.462158400Z" level=info msg="Start cni network conf syncer for default" Feb 9 18:41:55.462175 env[1219]: time="2024-02-09T18:41:55.462166680Z" level=info msg="Start streaming server" Feb 9 18:41:55.483438 tar[1211]: ./firewall Feb 9 18:41:55.512738 tar[1211]: ./host-device Feb 9 18:41:55.538838 tar[1211]: ./sbr Feb 9 18:41:55.562730 tar[1211]: ./loopback Feb 9 18:41:55.585660 tar[1211]: ./dhcp Feb 9 18:41:55.655663 tar[1211]: ./ptp Feb 9 18:41:55.683414 tar[1211]: ./ipvlan Feb 9 18:41:55.710472 tar[1211]: ./bandwidth Feb 9 18:41:55.740626 tar[1213]: linux-arm64/LICENSE Feb 9 18:41:55.740704 tar[1213]: linux-arm64/README.md Feb 9 18:41:55.747788 systemd[1]: Finished prepare-helm.service. Feb 9 18:41:55.752340 systemd[1]: Finished prepare-cni-plugins.service. Feb 9 18:41:55.806461 systemd[1]: Finished prepare-critools.service. Feb 9 18:41:55.961369 systemd-networkd[1101]: eth0: Gained IPv6LL Feb 9 18:41:58.980819 sshd_keygen[1224]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Feb 9 18:41:58.998346 systemd[1]: Finished sshd-keygen.service. Feb 9 18:41:59.000682 systemd[1]: Starting issuegen.service... Feb 9 18:41:59.005093 systemd[1]: issuegen.service: Deactivated successfully. Feb 9 18:41:59.005311 systemd[1]: Finished issuegen.service. Feb 9 18:41:59.007506 systemd[1]: Starting systemd-user-sessions.service... Feb 9 18:41:59.014619 systemd[1]: Finished systemd-user-sessions.service. Feb 9 18:41:59.016634 systemd[1]: Started getty@tty1.service. Feb 9 18:41:59.018461 systemd[1]: Started serial-getty@ttyAMA0.service. Feb 9 18:41:59.019418 systemd[1]: Reached target getty.target. Feb 9 18:41:59.020029 systemd[1]: Reached target multi-user.target. Feb 9 18:41:59.021887 systemd[1]: Starting systemd-update-utmp-runlevel.service... Feb 9 18:41:59.027430 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Feb 9 18:41:59.027618 systemd[1]: Finished systemd-update-utmp-runlevel.service. Feb 9 18:41:59.028626 systemd[1]: Startup finished in 6.169s (kernel) + 6.849s (userspace) = 13.018s. Feb 9 18:41:59.558274 systemd[1]: Created slice system-sshd.slice. Feb 9 18:41:59.559367 systemd[1]: Started sshd@0-10.0.0.121:22-10.0.0.1:59394.service. Feb 9 18:41:59.603522 sshd[1298]: Accepted publickey for core from 10.0.0.1 port 59394 ssh2: RSA SHA256:+mjqzBx0jAExxLmXQAd9A2jZt2TW46e6NSOiRFb14I8 Feb 9 18:41:59.605337 sshd[1298]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 18:41:59.612485 systemd[1]: Created slice user-500.slice. Feb 9 18:41:59.613347 systemd[1]: Starting user-runtime-dir@500.service... Feb 9 18:41:59.616553 systemd-logind[1205]: New session 1 of user core. Feb 9 18:41:59.620643 systemd[1]: Finished user-runtime-dir@500.service. Feb 9 18:41:59.621703 systemd[1]: Starting user@500.service... Feb 9 18:41:59.624372 (systemd)[1302]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 9 18:41:59.678939 systemd[1302]: Queued start job for default target default.target. Feb 9 18:41:59.679136 systemd[1302]: Reached target paths.target. 
Feb 9 18:41:59.679151 systemd[1302]: Reached target sockets.target. Feb 9 18:41:59.679162 systemd[1302]: Reached target timers.target. Feb 9 18:41:59.679183 systemd[1302]: Reached target basic.target. Feb 9 18:41:59.679223 systemd[1302]: Reached target default.target. Feb 9 18:41:59.679260 systemd[1302]: Startup finished in 49ms. Feb 9 18:41:59.679318 systemd[1]: Started user@500.service. Feb 9 18:41:59.680152 systemd[1]: Started session-1.scope. Feb 9 18:41:59.728619 systemd[1]: Started sshd@1-10.0.0.121:22-10.0.0.1:59396.service. Feb 9 18:41:59.773091 sshd[1312]: Accepted publickey for core from 10.0.0.1 port 59396 ssh2: RSA SHA256:+mjqzBx0jAExxLmXQAd9A2jZt2TW46e6NSOiRFb14I8 Feb 9 18:41:59.774264 sshd[1312]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 18:41:59.777661 systemd-logind[1205]: New session 2 of user core. Feb 9 18:41:59.778419 systemd[1]: Started session-2.scope. Feb 9 18:41:59.830175 sshd[1312]: pam_unix(sshd:session): session closed for user core Feb 9 18:41:59.832209 systemd[1]: Started sshd@2-10.0.0.121:22-10.0.0.1:59410.service. Feb 9 18:41:59.833313 systemd-logind[1205]: Session 2 logged out. Waiting for processes to exit. Feb 9 18:41:59.833462 systemd[1]: sshd@1-10.0.0.121:22-10.0.0.1:59396.service: Deactivated successfully. Feb 9 18:41:59.834114 systemd[1]: session-2.scope: Deactivated successfully. Feb 9 18:41:59.834481 systemd-logind[1205]: Removed session 2. Feb 9 18:41:59.871183 sshd[1317]: Accepted publickey for core from 10.0.0.1 port 59410 ssh2: RSA SHA256:+mjqzBx0jAExxLmXQAd9A2jZt2TW46e6NSOiRFb14I8 Feb 9 18:41:59.872233 sshd[1317]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 18:41:59.875295 systemd-logind[1205]: New session 3 of user core. Feb 9 18:41:59.876033 systemd[1]: Started session-3.scope. Feb 9 18:41:59.923908 sshd[1317]: pam_unix(sshd:session): session closed for user core Feb 9 18:41:59.925844 systemd[1]: Started sshd@3-10.0.0.121:22-10.0.0.1:59420.service. Feb 9 18:41:59.927001 systemd[1]: sshd@2-10.0.0.121:22-10.0.0.1:59410.service: Deactivated successfully. Feb 9 18:41:59.927863 systemd[1]: session-3.scope: Deactivated successfully. Feb 9 18:41:59.928415 systemd-logind[1205]: Session 3 logged out. Waiting for processes to exit. Feb 9 18:41:59.929056 systemd-logind[1205]: Removed session 3. Feb 9 18:41:59.964924 sshd[1324]: Accepted publickey for core from 10.0.0.1 port 59420 ssh2: RSA SHA256:+mjqzBx0jAExxLmXQAd9A2jZt2TW46e6NSOiRFb14I8 Feb 9 18:41:59.965985 sshd[1324]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 18:41:59.968827 systemd-logind[1205]: New session 4 of user core. Feb 9 18:41:59.970706 systemd[1]: Started session-4.scope. Feb 9 18:42:00.022940 sshd[1324]: pam_unix(sshd:session): session closed for user core Feb 9 18:42:00.025563 systemd[1]: sshd@3-10.0.0.121:22-10.0.0.1:59420.service: Deactivated successfully. Feb 9 18:42:00.026494 systemd-logind[1205]: Session 4 logged out. Waiting for processes to exit. Feb 9 18:42:00.027981 systemd[1]: Started sshd@4-10.0.0.121:22-10.0.0.1:59434.service. Feb 9 18:42:00.028851 systemd[1]: session-4.scope: Deactivated successfully. Feb 9 18:42:00.029666 systemd-logind[1205]: Removed session 4. 
Feb 9 18:42:00.066894 sshd[1333]: Accepted publickey for core from 10.0.0.1 port 59434 ssh2: RSA SHA256:+mjqzBx0jAExxLmXQAd9A2jZt2TW46e6NSOiRFb14I8 Feb 9 18:42:00.067940 sshd[1333]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 18:42:00.071660 systemd[1]: Started session-5.scope. Feb 9 18:42:00.071928 systemd-logind[1205]: New session 5 of user core. Feb 9 18:42:00.128565 sudo[1337]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Feb 9 18:42:00.128759 sudo[1337]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Feb 9 18:42:00.713139 systemd[1]: Starting systemd-networkd-wait-online.service... Feb 9 18:42:00.718525 systemd[1]: Finished systemd-networkd-wait-online.service. Feb 9 18:42:00.718798 systemd[1]: Reached target network-online.target. Feb 9 18:42:00.720046 systemd[1]: Starting docker.service... Feb 9 18:42:00.800385 env[1356]: time="2024-02-09T18:42:00.800330630Z" level=info msg="Starting up" Feb 9 18:42:00.801692 env[1356]: time="2024-02-09T18:42:00.801651103Z" level=info msg="parsed scheme: \"unix\"" module=grpc Feb 9 18:42:00.801692 env[1356]: time="2024-02-09T18:42:00.801673251Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Feb 9 18:42:00.801692 env[1356]: time="2024-02-09T18:42:00.801692701Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Feb 9 18:42:00.801788 env[1356]: time="2024-02-09T18:42:00.801702545Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Feb 9 18:42:00.803557 env[1356]: time="2024-02-09T18:42:00.803533154Z" level=info msg="parsed scheme: \"unix\"" module=grpc Feb 9 18:42:00.803557 env[1356]: time="2024-02-09T18:42:00.803553239Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Feb 9 18:42:00.803642 env[1356]: time="2024-02-09T18:42:00.803567647Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Feb 9 18:42:00.803642 env[1356]: time="2024-02-09T18:42:00.803576856Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Feb 9 18:42:00.809947 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport1791783505-merged.mount: Deactivated successfully. Feb 9 18:42:01.010729 env[1356]: time="2024-02-09T18:42:01.010327966Z" level=warning msg="Your kernel does not support cgroup blkio weight" Feb 9 18:42:01.010729 env[1356]: time="2024-02-09T18:42:01.010354785Z" level=warning msg="Your kernel does not support cgroup blkio weight_device" Feb 9 18:42:01.010729 env[1356]: time="2024-02-09T18:42:01.010509698Z" level=info msg="Loading containers: start." Feb 9 18:42:01.119364 kernel: Initializing XFRM netlink socket Feb 9 18:42:01.142366 env[1356]: time="2024-02-09T18:42:01.142337596Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address" Feb 9 18:42:01.193538 systemd-networkd[1101]: docker0: Link UP Feb 9 18:42:01.200909 env[1356]: time="2024-02-09T18:42:01.200868953Z" level=info msg="Loading containers: done." Feb 9 18:42:01.221578 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck4165737137-merged.mount: Deactivated successfully. 
Feb 9 18:42:01.225569 env[1356]: time="2024-02-09T18:42:01.225526729Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Feb 9 18:42:01.225707 env[1356]: time="2024-02-09T18:42:01.225690303Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 Feb 9 18:42:01.225804 env[1356]: time="2024-02-09T18:42:01.225779699Z" level=info msg="Daemon has completed initialization" Feb 9 18:42:01.238375 systemd[1]: Started docker.service. Feb 9 18:42:01.244419 env[1356]: time="2024-02-09T18:42:01.244370950Z" level=info msg="API listen on /run/docker.sock" Feb 9 18:42:01.259493 systemd[1]: Reloading. Feb 9 18:42:01.300400 /usr/lib/systemd/system-generators/torcx-generator[1499]: time="2024-02-09T18:42:01Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 9 18:42:01.300426 /usr/lib/systemd/system-generators/torcx-generator[1499]: time="2024-02-09T18:42:01Z" level=info msg="torcx already run" Feb 9 18:42:01.359745 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 9 18:42:01.359762 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 9 18:42:01.377010 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 9 18:42:01.431711 systemd[1]: Started kubelet.service. Feb 9 18:42:01.586643 kubelet[1542]: E0209 18:42:01.586512 1542 run.go:74] "command failed" err="failed to validate kubelet flags: the container runtime endpoint address was not specified or empty, use --container-runtime-endpoint to set" Feb 9 18:42:01.589898 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 9 18:42:01.590061 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 9 18:42:01.777950 env[1219]: time="2024-02-09T18:42:01.777907516Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.26.13\"" Feb 9 18:42:02.490927 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1452324422.mount: Deactivated successfully. 
Feb 9 18:42:03.917596 env[1219]: time="2024-02-09T18:42:03.917538609Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:42:03.918889 env[1219]: time="2024-02-09T18:42:03.918860381Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:d88fbf485621d26e515136c1848b666d7dfe0fa84ca7ebd826447b039d306d88,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:42:03.920439 env[1219]: time="2024-02-09T18:42:03.920409858Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:42:03.922768 env[1219]: time="2024-02-09T18:42:03.922735126Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:2f28bed4096abd572a56595ac0304238bdc271dcfe22c650707c09bf97ec16fd,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:42:03.925384 env[1219]: time="2024-02-09T18:42:03.925352447Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.26.13\" returns image reference \"sha256:d88fbf485621d26e515136c1848b666d7dfe0fa84ca7ebd826447b039d306d88\"" Feb 9 18:42:03.934526 env[1219]: time="2024-02-09T18:42:03.934496851Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.26.13\"" Feb 9 18:42:05.670048 env[1219]: time="2024-02-09T18:42:05.669998318Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:42:05.671741 env[1219]: time="2024-02-09T18:42:05.671707048Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:71d8e883014e0849ca9a3161bd1feac09ad210dea2f4140732e218f04a6826c2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:42:05.673472 env[1219]: time="2024-02-09T18:42:05.673446577Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:42:05.677119 env[1219]: time="2024-02-09T18:42:05.677087236Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:fda420c6c15cdd01c4eba3404f0662fe486a9c7f38fa13c741a21334673841a2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:42:05.678690 env[1219]: time="2024-02-09T18:42:05.678657354Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.26.13\" returns image reference \"sha256:71d8e883014e0849ca9a3161bd1feac09ad210dea2f4140732e218f04a6826c2\"" Feb 9 18:42:05.687340 env[1219]: time="2024-02-09T18:42:05.687301569Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.26.13\"" Feb 9 18:42:07.213519 env[1219]: time="2024-02-09T18:42:07.213468691Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:42:07.215120 env[1219]: time="2024-02-09T18:42:07.215081259Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:a636f3d6300bad4775ea80ad544e38f486a039732c4871bddc1db3a5336c871a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 18:42:07.217833 env[1219]: time="2024-02-09T18:42:07.217798609Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:42:07.219530 env[1219]: time="2024-02-09T18:42:07.219494445Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:c3c7303ee6d01c8e5a769db28661cf854b55175aa72c67e9b6a7b9d47ac42af3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:42:07.220222 env[1219]: time="2024-02-09T18:42:07.220187750Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.26.13\" returns image reference \"sha256:a636f3d6300bad4775ea80ad544e38f486a039732c4871bddc1db3a5336c871a\"" Feb 9 18:42:07.229571 env[1219]: time="2024-02-09T18:42:07.229539694Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.26.13\"" Feb 9 18:42:08.269311 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount903178303.mount: Deactivated successfully. Feb 9 18:42:08.598732 env[1219]: time="2024-02-09T18:42:08.598627941Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:42:08.600147 env[1219]: time="2024-02-09T18:42:08.600095941Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:95874282cd4f2ad9bc384735e604f0380cff88d61a2ca9db65890e6d9df46926,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:42:08.601326 env[1219]: time="2024-02-09T18:42:08.601301714Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:42:08.602604 env[1219]: time="2024-02-09T18:42:08.602570242Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:f6e0de32a002b910b9b2e0e8d769e2d7b05208240559c745ce4781082ab15f22,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:42:08.603097 env[1219]: time="2024-02-09T18:42:08.603071158Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.26.13\" returns image reference \"sha256:95874282cd4f2ad9bc384735e604f0380cff88d61a2ca9db65890e6d9df46926\"" Feb 9 18:42:08.611605 env[1219]: time="2024-02-09T18:42:08.611578874Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Feb 9 18:42:09.077846 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3744271306.mount: Deactivated successfully.
Feb 9 18:42:09.081888 env[1219]: time="2024-02-09T18:42:09.081847201Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:42:09.083128 env[1219]: time="2024-02-09T18:42:09.083087015Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:42:09.084613 env[1219]: time="2024-02-09T18:42:09.084588824Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:42:09.085782 env[1219]: time="2024-02-09T18:42:09.085756963Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:42:09.086387 env[1219]: time="2024-02-09T18:42:09.086359371Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\"" Feb 9 18:42:09.095015 env[1219]: time="2024-02-09T18:42:09.094988264Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.6-0\"" Feb 9 18:42:09.871167 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1742229090.mount: Deactivated successfully. Feb 9 18:42:11.748717 env[1219]: time="2024-02-09T18:42:11.748663448Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.6-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:42:11.749867 env[1219]: time="2024-02-09T18:42:11.749830144Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ef245802824036d4a23ba6f8b3f04c055416f9dc73a54d546b1f98ad16f6b8cb,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:42:11.751961 env[1219]: time="2024-02-09T18:42:11.751926516Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.6-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:42:11.753878 env[1219]: time="2024-02-09T18:42:11.753853148Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:dd75ec974b0a2a6f6bb47001ba09207976e625db898d1b16735528c009cb171c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:42:11.754499 env[1219]: time="2024-02-09T18:42:11.754472692Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.6-0\" returns image reference \"sha256:ef245802824036d4a23ba6f8b3f04c055416f9dc73a54d546b1f98ad16f6b8cb\"" Feb 9 18:42:11.762882 env[1219]: time="2024-02-09T18:42:11.762856182Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.9.3\"" Feb 9 18:42:11.840837 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Feb 9 18:42:11.841011 systemd[1]: Stopped kubelet.service. Feb 9 18:42:11.842549 systemd[1]: Started kubelet.service. 
Feb 9 18:42:11.882905 kubelet[1605]: E0209 18:42:11.882845 1605 run.go:74] "command failed" err="failed to validate kubelet flags: the container runtime endpoint address was not specified or empty, use --container-runtime-endpoint to set" Feb 9 18:42:11.885719 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 9 18:42:11.885868 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 9 18:42:12.329185 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1610151076.mount: Deactivated successfully. Feb 9 18:42:12.781887 env[1219]: time="2024-02-09T18:42:12.781677506Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.9.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:42:12.783338 env[1219]: time="2024-02-09T18:42:12.783294044Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b19406328e70dd2f6a36d6dbe4e867b0684ced2fdeb2f02ecb54ead39ec0bac0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:42:12.785334 env[1219]: time="2024-02-09T18:42:12.785299461Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.9.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:42:12.787184 env[1219]: time="2024-02-09T18:42:12.787148759Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:42:12.787756 env[1219]: time="2024-02-09T18:42:12.787730499Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.9.3\" returns image reference \"sha256:b19406328e70dd2f6a36d6dbe4e867b0684ced2fdeb2f02ecb54ead39ec0bac0\"" Feb 9 18:42:17.700434 systemd[1]: Stopped kubelet.service. Feb 9 18:42:17.712600 systemd[1]: Reloading. Feb 9 18:42:17.758167 /usr/lib/systemd/system-generators/torcx-generator[1706]: time="2024-02-09T18:42:17Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 9 18:42:17.758197 /usr/lib/systemd/system-generators/torcx-generator[1706]: time="2024-02-09T18:42:17Z" level=info msg="torcx already run" Feb 9 18:42:17.818958 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 9 18:42:17.818975 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 9 18:42:17.835856 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 9 18:42:17.894575 systemd[1]: Started kubelet.service. Feb 9 18:42:17.935153 kubelet[1750]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI.
Feb 9 18:42:17.935153 kubelet[1750]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 9 18:42:17.935545 kubelet[1750]: I0209 18:42:17.935326 1750 server.go:198] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 9 18:42:17.936781 kubelet[1750]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI. Feb 9 18:42:17.936781 kubelet[1750]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 9 18:42:18.647859 kubelet[1750]: I0209 18:42:18.647822 1750 server.go:412] "Kubelet version" kubeletVersion="v1.26.5" Feb 9 18:42:18.647859 kubelet[1750]: I0209 18:42:18.647852 1750 server.go:414] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 9 18:42:18.648062 kubelet[1750]: I0209 18:42:18.648047 1750 server.go:836] "Client rotation is on, will bootstrap in background" Feb 9 18:42:18.652338 kubelet[1750]: I0209 18:42:18.652319 1750 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 9 18:42:18.652632 kubelet[1750]: E0209 18:42:18.652555 1750 certificate_manager.go:471] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.121:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.121:6443: connect: connection refused Feb 9 18:42:18.654176 kubelet[1750]: W0209 18:42:18.654146 1750 machine.go:65] Cannot read vendor id correctly, set empty.
Feb 9 18:42:18.654933 kubelet[1750]: I0209 18:42:18.654914 1750 server.go:659] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Feb 9 18:42:18.655413 kubelet[1750]: I0209 18:42:18.655393 1750 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 9 18:42:18.655481 kubelet[1750]: I0209 18:42:18.655466 1750 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: KubeletOOMScoreAdj:-999 ContainerRuntime: CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:cgroupfs KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:} {Signal:nodefs.available Operator:LessThan Value:{Quantity: Percentage:0.1} GracePeriod:0s MinReclaim:} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity: Percentage:0.05} GracePeriod:0s MinReclaim:} {Signal:imagefs.available Operator:LessThan Value:{Quantity: Percentage:0.15} GracePeriod:0s MinReclaim:}]} QOSReserved:map[] CPUManagerPolicy:none CPUManagerPolicyOptions:map[] ExperimentalTopologyManagerScope:container CPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none ExperimentalTopologyManagerPolicyOptions:map[]} Feb 9 18:42:18.655556 kubelet[1750]: I0209 18:42:18.655488 1750 topology_manager.go:134] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container" Feb 9 18:42:18.655556 kubelet[1750]: I0209 18:42:18.655500 1750 container_manager_linux.go:308] "Creating device plugin manager" Feb 9 18:42:18.655673 kubelet[1750]: I0209 18:42:18.655650 1750 state_mem.go:36] "Initialized new in-memory state store" Feb 9 18:42:18.660779 kubelet[1750]: I0209 18:42:18.660754 1750 kubelet.go:398] "Attempting to sync node with API server" Feb 9 18:42:18.660779 kubelet[1750]: I0209 18:42:18.660775 1750 kubelet.go:286] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 9 18:42:18.660884 kubelet[1750]: I0209 18:42:18.660800 1750 kubelet.go:297] "Adding apiserver pod source" Feb 9 18:42:18.660884 kubelet[1750]: I0209 18:42:18.660810 1750 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 9 18:42:18.661926 kubelet[1750]: I0209 18:42:18.661904 1750 kuberuntime_manager.go:244] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Feb 9 18:42:18.662410 kubelet[1750]: W0209 18:42:18.662366 1750 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://10.0.0.121:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.121:6443: connect: connection refused Feb 9 18:42:18.662523 kubelet[1750]: E0209 18:42:18.662509 1750 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.121:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.121:6443: connect: connection refused Feb 9 18:42:18.662795 kubelet[1750]: W0209 18:42:18.662776 1750 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Feb 9 18:42:18.663107 kubelet[1750]: W0209 18:42:18.663076 1750 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://10.0.0.121:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.121:6443: connect: connection refused Feb 9 18:42:18.663197 kubelet[1750]: E0209 18:42:18.663186 1750 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.121:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.121:6443: connect: connection refused Feb 9 18:42:18.663283 kubelet[1750]: I0209 18:42:18.663265 1750 server.go:1186] "Started kubelet" Feb 9 18:42:18.664497 kubelet[1750]: E0209 18:42:18.664409 1750 event.go:276] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"localhost.17b245f7a58801dc", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"localhost", UID:"localhost", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"localhost"}, FirstTimestamp:time.Date(2024, time.February, 9, 18, 42, 18, 663231964, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 18, 42, 18, 663231964, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'Post "https://10.0.0.121:6443/api/v1/namespaces/default/events": dial tcp 10.0.0.121:6443: connect: connection refused'(may retry after sleeping) Feb 9 18:42:18.664763 kubelet[1750]: E0209 18:42:18.664748 1750 cri_stats_provider.go:455] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Feb 9 18:42:18.664842 kubelet[1750]: E0209 18:42:18.664833 1750 kubelet.go:1386] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 9 18:42:18.665154 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). 
Feb 9 18:42:18.665312 kubelet[1750]: I0209 18:42:18.665291 1750 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 9 18:42:18.665745 kubelet[1750]: I0209 18:42:18.665732 1750 server.go:161] "Starting to listen" address="0.0.0.0" port=10250 Feb 9 18:42:18.666377 kubelet[1750]: I0209 18:42:18.666360 1750 server.go:451] "Adding debug handlers to kubelet server" Feb 9 18:42:18.667329 kubelet[1750]: E0209 18:42:18.667285 1750 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 9 18:42:18.667401 kubelet[1750]: I0209 18:42:18.667379 1750 volume_manager.go:293] "Starting Kubelet Volume Manager" Feb 9 18:42:18.667456 kubelet[1750]: I0209 18:42:18.667439 1750 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Feb 9 18:42:18.667948 kubelet[1750]: W0209 18:42:18.667885 1750 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://10.0.0.121:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.121:6443: connect: connection refused Feb 9 18:42:18.667948 kubelet[1750]: E0209 18:42:18.667925 1750 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.121:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.121:6443: connect: connection refused Feb 9 18:42:18.668512 kubelet[1750]: E0209 18:42:18.668481 1750 controller.go:146] failed to ensure lease exists, will retry in 200ms, error: Get "https://10.0.0.121:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s": dial tcp 10.0.0.121:6443: connect: connection refused Feb 9 18:42:18.700690 kubelet[1750]: I0209 18:42:18.700671 1750 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 9 18:42:18.700690 kubelet[1750]: I0209 18:42:18.700689 1750 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 9 18:42:18.700804 kubelet[1750]: I0209 18:42:18.700706 1750 state_mem.go:36] "Initialized new in-memory state store" Feb 9 18:42:18.702308 kubelet[1750]: I0209 18:42:18.702282 1750 policy_none.go:49] "None policy: Start" Feb 9 18:42:18.702833 kubelet[1750]: I0209 18:42:18.702789 1750 memory_manager.go:169] "Starting memorymanager" policy="None" Feb 9 18:42:18.702833 kubelet[1750]: I0209 18:42:18.702832 1750 state_mem.go:35] "Initializing new in-memory state store" Feb 9 18:42:18.703216 kubelet[1750]: I0209 18:42:18.703183 1750 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv4 Feb 9 18:42:18.707545 kubelet[1750]: I0209 18:42:18.707521 1750 manager.go:455] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 9 18:42:18.707715 kubelet[1750]: I0209 18:42:18.707689 1750 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 9 18:42:18.709011 kubelet[1750]: E0209 18:42:18.708986 1750 eviction_manager.go:261] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Feb 9 18:42:18.724550 kubelet[1750]: I0209 18:42:18.724534 1750 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv6 Feb 9 18:42:18.724657 kubelet[1750]: I0209 18:42:18.724646 1750 status_manager.go:176] "Starting to sync pod status with apiserver" Feb 9 18:42:18.724856 kubelet[1750]: I0209 18:42:18.724844 1750 kubelet.go:2113] "Starting kubelet main sync loop" Feb 9 18:42:18.725329 kubelet[1750]: E0209 18:42:18.725317 1750 kubelet.go:2137] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Feb 9 18:42:18.725674 kubelet[1750]: W0209 18:42:18.725507 1750 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://10.0.0.121:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.121:6443: connect: connection refused Feb 9 18:42:18.725819 kubelet[1750]: E0209 18:42:18.725808 1750 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.121:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.121:6443: connect: connection refused Feb 9 18:42:18.768868 kubelet[1750]: I0209 18:42:18.768818 1750 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Feb 9 18:42:18.769241 kubelet[1750]: E0209 18:42:18.769223 1750 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.0.0.121:6443/api/v1/nodes\": dial tcp 10.0.0.121:6443: connect: connection refused" node="localhost" Feb 9 18:42:18.826424 kubelet[1750]: I0209 18:42:18.826378 1750 topology_manager.go:210] "Topology Admit Handler" Feb 9 18:42:18.827295 kubelet[1750]: I0209 18:42:18.827274 1750 topology_manager.go:210] "Topology Admit Handler" Feb 9 18:42:18.827969 kubelet[1750]: I0209 18:42:18.827948 1750 topology_manager.go:210] "Topology Admit Handler" Feb 9 18:42:18.829848 kubelet[1750]: I0209 18:42:18.829818 1750 status_manager.go:698] "Failed to get status for pod" podUID=de249137550db01a22d1ed135820fd0f pod="kube-system/kube-apiserver-localhost" err="Get \"https://10.0.0.121:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-localhost\": dial tcp 10.0.0.121:6443: connect: connection refused" Feb 9 18:42:18.830006 kubelet[1750]: I0209 18:42:18.829992 1750 status_manager.go:698] "Failed to get status for pod" podUID=550020dd9f101bcc23e1d3c651841c4d pod="kube-system/kube-controller-manager-localhost" err="Get \"https://10.0.0.121:6443/api/v1/namespaces/kube-system/pods/kube-controller-manager-localhost\": dial tcp 10.0.0.121:6443: connect: connection refused" Feb 9 18:42:18.834081 kubelet[1750]: I0209 18:42:18.834044 1750 status_manager.go:698] "Failed to get status for pod" podUID=72ae17a74a2eae76daac6d298477aff0 pod="kube-system/kube-scheduler-localhost" err="Get \"https://10.0.0.121:6443/api/v1/namespaces/kube-system/pods/kube-scheduler-localhost\": dial tcp 10.0.0.121:6443: connect: connection refused" Feb 9 18:42:18.869196 kubelet[1750]: E0209 18:42:18.869164 1750 controller.go:146] failed to ensure lease exists, will retry in 400ms, error: Get "https://10.0.0.121:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s": dial tcp 10.0.0.121:6443: connect: connection refused
Feb 9 18:42:18.969169 kubelet[1750]: I0209 18:42:18.968477 1750 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/72ae17a74a2eae76daac6d298477aff0-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"72ae17a74a2eae76daac6d298477aff0\") " pod="kube-system/kube-scheduler-localhost" Feb 9 18:42:18.969169 kubelet[1750]: I0209 18:42:18.968534 1750 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/de249137550db01a22d1ed135820fd0f-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"de249137550db01a22d1ed135820fd0f\") " pod="kube-system/kube-apiserver-localhost" Feb 9 18:42:18.969169 kubelet[1750]: I0209 18:42:18.968651 1750 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/de249137550db01a22d1ed135820fd0f-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"de249137550db01a22d1ed135820fd0f\") " pod="kube-system/kube-apiserver-localhost" Feb 9 18:42:18.969169 kubelet[1750]: I0209 18:42:18.968721 1750 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/550020dd9f101bcc23e1d3c651841c4d-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"550020dd9f101bcc23e1d3c651841c4d\") " pod="kube-system/kube-controller-manager-localhost" Feb 9 18:42:18.969169 kubelet[1750]: I0209 18:42:18.968744 1750 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/550020dd9f101bcc23e1d3c651841c4d-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"550020dd9f101bcc23e1d3c651841c4d\") " pod="kube-system/kube-controller-manager-localhost" Feb 9 18:42:18.969582 kubelet[1750]: I0209 18:42:18.968774 1750 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/550020dd9f101bcc23e1d3c651841c4d-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"550020dd9f101bcc23e1d3c651841c4d\") " pod="kube-system/kube-controller-manager-localhost" Feb 9 18:42:18.969582 kubelet[1750]: I0209 18:42:18.968794 1750 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/de249137550db01a22d1ed135820fd0f-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"de249137550db01a22d1ed135820fd0f\") " pod="kube-system/kube-apiserver-localhost" Feb 9 18:42:18.969582 kubelet[1750]: I0209 18:42:18.968814 1750 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/550020dd9f101bcc23e1d3c651841c4d-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"550020dd9f101bcc23e1d3c651841c4d\") " pod="kube-system/kube-controller-manager-localhost" Feb 9 18:42:18.969582 kubelet[1750]: I0209 18:42:18.968839 1750 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/550020dd9f101bcc23e1d3c651841c4d-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"550020dd9f101bcc23e1d3c651841c4d\") " pod="kube-system/kube-controller-manager-localhost" Feb 9 18:42:18.970596 kubelet[1750]: I0209 18:42:18.970576 1750 kubelet_node_status.go:70] "Attempting to register node" node="localhost"
Feb 9 18:42:18.970866 kubelet[1750]: E0209 18:42:18.970853 1750 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.0.0.121:6443/api/v1/nodes\": dial tcp 10.0.0.121:6443: connect: connection refused" node="localhost" Feb 9 18:42:19.131796 kubelet[1750]: E0209 18:42:19.131772 1750 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:42:19.132395 env[1219]: time="2024-02-09T18:42:19.132353765Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:de249137550db01a22d1ed135820fd0f,Namespace:kube-system,Attempt:0,}" Feb 9 18:42:19.134955 kubelet[1750]: E0209 18:42:19.134936 1750 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:42:19.135149 kubelet[1750]: E0209 18:42:19.134949 1750 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:42:19.135563 env[1219]: time="2024-02-09T18:42:19.135532036Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:550020dd9f101bcc23e1d3c651841c4d,Namespace:kube-system,Attempt:0,}" Feb 9 18:42:19.136046 env[1219]: time="2024-02-09T18:42:19.135929115Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:72ae17a74a2eae76daac6d298477aff0,Namespace:kube-system,Attempt:0,}" Feb 9 18:42:19.270277 kubelet[1750]: E0209 18:42:19.270145 1750 controller.go:146] failed to ensure lease exists, will retry in 800ms, error: Get "https://10.0.0.121:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s": dial tcp 10.0.0.121:6443: connect: connection refused Feb 9 18:42:19.372571 kubelet[1750]: I0209 18:42:19.372536 1750 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Feb 9 18:42:19.372810 kubelet[1750]: E0209 18:42:19.372796 1750 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.0.0.121:6443/api/v1/nodes\": dial tcp 10.0.0.121:6443: connect: connection refused" node="localhost" Feb 9 18:42:19.577310 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2365544634.mount: Deactivated successfully.
Feb 9 18:42:19.580817 env[1219]: time="2024-02-09T18:42:19.580772326Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:42:19.583097 env[1219]: time="2024-02-09T18:42:19.583050703Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:42:19.583897 env[1219]: time="2024-02-09T18:42:19.583876402Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:42:19.584869 env[1219]: time="2024-02-09T18:42:19.584845693Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:42:19.586415 env[1219]: time="2024-02-09T18:42:19.586391435Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:42:19.588067 env[1219]: time="2024-02-09T18:42:19.588041593Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:42:19.589480 env[1219]: time="2024-02-09T18:42:19.589453735Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:42:19.591620 env[1219]: time="2024-02-09T18:42:19.591591238Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:42:19.593350 env[1219]: time="2024-02-09T18:42:19.593323226Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:42:19.595736 env[1219]: time="2024-02-09T18:42:19.595707978Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:42:19.596541 env[1219]: time="2024-02-09T18:42:19.596520885Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:42:19.597558 env[1219]: time="2024-02-09T18:42:19.597527994Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:42:19.631388 env[1219]: time="2024-02-09T18:42:19.631326674Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 18:42:19.631552 env[1219]: time="2024-02-09T18:42:19.631365850Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 18:42:19.631552 env[1219]: time="2024-02-09T18:42:19.631376564Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 18:42:19.631965 env[1219]: time="2024-02-09T18:42:19.631606184Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/4019dceacde6303bc1bf0272b64685ad9270545828984346ae009f8c53952956 pid=1844 runtime=io.containerd.runc.v2 Feb 9 18:42:19.632076 env[1219]: time="2024-02-09T18:42:19.631669346Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 18:42:19.632076 env[1219]: time="2024-02-09T18:42:19.631701327Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 18:42:19.632076 env[1219]: time="2024-02-09T18:42:19.631711680Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 18:42:19.632076 env[1219]: time="2024-02-09T18:42:19.631843001Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/20222bdf0d1723f904a5fda734cf38ad9b067c1b8bf45cb28cbb453c67385932 pid=1843 runtime=io.containerd.runc.v2 Feb 9 18:42:19.632262 env[1219]: time="2024-02-09T18:42:19.632197146Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 18:42:19.632398 env[1219]: time="2024-02-09T18:42:19.632261906Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 18:42:19.632398 env[1219]: time="2024-02-09T18:42:19.632276218Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 18:42:19.632575 env[1219]: time="2024-02-09T18:42:19.632423968Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/99d18a44e8fb20755eb43977508a928cc69e3c7ca332b620d244caf55f729d9c pid=1850 runtime=io.containerd.runc.v2 Feb 9 18:42:19.678120 kubelet[1750]: W0209 18:42:19.678027 1750 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://10.0.0.121:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.121:6443: connect: connection refused Feb 9 18:42:19.678120 kubelet[1750]: E0209 18:42:19.678100 1750 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.121:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.121:6443: connect: connection refused Feb 9 18:42:19.700376 env[1219]: time="2024-02-09T18:42:19.700329222Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:550020dd9f101bcc23e1d3c651841c4d,Namespace:kube-system,Attempt:0,} returns sandbox id \"20222bdf0d1723f904a5fda734cf38ad9b067c1b8bf45cb28cbb453c67385932\"" Feb 9 18:42:19.701437 kubelet[1750]: E0209 18:42:19.701194 1750 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:42:19.703757 kubelet[1750]: W0209 18:42:19.703710 1750 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://10.0.0.121:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.121:6443: connect: connection refused Feb 9 18:42:19.703855 kubelet[1750]: E0209 18:42:19.703774 1750 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.121:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.121:6443: connect: connection refused Feb 9 18:42:19.704233 env[1219]: time="2024-02-09T18:42:19.704199792Z" level=info msg="CreateContainer within sandbox \"20222bdf0d1723f904a5fda734cf38ad9b067c1b8bf45cb28cbb453c67385932\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Feb 9 18:42:19.706293 env[1219]: time="2024-02-09T18:42:19.706232758Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:de249137550db01a22d1ed135820fd0f,Namespace:kube-system,Attempt:0,} returns sandbox id \"99d18a44e8fb20755eb43977508a928cc69e3c7ca332b620d244caf55f729d9c\"" Feb 9 18:42:19.706983 kubelet[1750]: E0209 18:42:19.706826 1750 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:42:19.709664 env[1219]: time="2024-02-09T18:42:19.709616464Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:72ae17a74a2eae76daac6d298477aff0,Namespace:kube-system,Attempt:0,} returns sandbox id \"4019dceacde6303bc1bf0272b64685ad9270545828984346ae009f8c53952956\"" Feb 9 18:42:19.710201 kubelet[1750]: E0209 18:42:19.710168 1750 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied 
nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:42:19.710981 env[1219]: time="2024-02-09T18:42:19.710598427Z" level=info msg="CreateContainer within sandbox \"99d18a44e8fb20755eb43977508a928cc69e3c7ca332b620d244caf55f729d9c\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Feb 9 18:42:19.711649 env[1219]: time="2024-02-09T18:42:19.711614690Z" level=info msg="CreateContainer within sandbox \"4019dceacde6303bc1bf0272b64685ad9270545828984346ae009f8c53952956\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Feb 9 18:42:19.723151 env[1219]: time="2024-02-09T18:42:19.723109472Z" level=info msg="CreateContainer within sandbox \"20222bdf0d1723f904a5fda734cf38ad9b067c1b8bf45cb28cbb453c67385932\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"a6d8b77a7173b889102668843b8f14a4430bda837b67cee84b8508b7bd10cb81\"" Feb 9 18:42:19.723908 env[1219]: time="2024-02-09T18:42:19.723884721Z" level=info msg="StartContainer for \"a6d8b77a7173b889102668843b8f14a4430bda837b67cee84b8508b7bd10cb81\"" Feb 9 18:42:19.726347 env[1219]: time="2024-02-09T18:42:19.726301014Z" level=info msg="CreateContainer within sandbox \"99d18a44e8fb20755eb43977508a928cc69e3c7ca332b620d244caf55f729d9c\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"b6318652ce6a6fb25b98bf5d4aed4faa624a092079442cf5a9fa896947f84900\"" Feb 9 18:42:19.727122 env[1219]: time="2024-02-09T18:42:19.727017019Z" level=info msg="StartContainer for \"b6318652ce6a6fb25b98bf5d4aed4faa624a092079442cf5a9fa896947f84900\"" Feb 9 18:42:19.728976 env[1219]: time="2024-02-09T18:42:19.728843790Z" level=info msg="CreateContainer within sandbox \"4019dceacde6303bc1bf0272b64685ad9270545828984346ae009f8c53952956\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"2dfdad16462867983e91f1392eed115aca2cb51f6710bdbf6d36a1e274266669\"" Feb 9 18:42:19.729457 env[1219]: time="2024-02-09T18:42:19.729433672Z" level=info msg="StartContainer for \"2dfdad16462867983e91f1392eed115aca2cb51f6710bdbf6d36a1e274266669\"" Feb 9 18:42:19.801694 env[1219]: time="2024-02-09T18:42:19.800735304Z" level=info msg="StartContainer for \"a6d8b77a7173b889102668843b8f14a4430bda837b67cee84b8508b7bd10cb81\" returns successfully" Feb 9 18:42:19.841816 env[1219]: time="2024-02-09T18:42:19.841718503Z" level=info msg="StartContainer for \"2dfdad16462867983e91f1392eed115aca2cb51f6710bdbf6d36a1e274266669\" returns successfully" Feb 9 18:42:19.843050 env[1219]: time="2024-02-09T18:42:19.843012597Z" level=info msg="StartContainer for \"b6318652ce6a6fb25b98bf5d4aed4faa624a092079442cf5a9fa896947f84900\" returns successfully" Feb 9 18:42:20.174312 kubelet[1750]: I0209 18:42:20.173732 1750 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Feb 9 18:42:20.734816 kubelet[1750]: E0209 18:42:20.734778 1750 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:42:20.736551 kubelet[1750]: E0209 18:42:20.736530 1750 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:42:20.737301 kubelet[1750]: E0209 18:42:20.737242 1750 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:42:21.739971 
kubelet[1750]: E0209 18:42:21.739935 1750 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:42:21.740543 kubelet[1750]: E0209 18:42:21.740508 1750 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:42:21.740953 kubelet[1750]: E0209 18:42:21.740929 1750 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:42:22.050959 kubelet[1750]: I0209 18:42:22.050865 1750 kubelet_node_status.go:73] "Successfully registered node" node="localhost" Feb 9 18:42:22.069419 kubelet[1750]: E0209 18:42:22.069380 1750 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 9 18:42:22.170267 kubelet[1750]: E0209 18:42:22.170208 1750 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 9 18:42:22.270725 kubelet[1750]: E0209 18:42:22.270681 1750 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 9 18:42:22.664980 kubelet[1750]: I0209 18:42:22.664932 1750 apiserver.go:52] "Watching apiserver" Feb 9 18:42:22.667579 kubelet[1750]: I0209 18:42:22.667561 1750 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Feb 9 18:42:22.699994 kubelet[1750]: I0209 18:42:22.699972 1750 reconciler.go:41] "Reconciler: start to sync state" Feb 9 18:42:22.743679 kubelet[1750]: E0209 18:42:22.743649 1750 kubelet.go:1802] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Feb 9 18:42:22.744082 kubelet[1750]: E0209 18:42:22.744066 1750 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:42:24.474622 systemd[1]: Reloading. Feb 9 18:42:24.527050 /usr/lib/systemd/system-generators/torcx-generator[2083]: time="2024-02-09T18:42:24Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 9 18:42:24.527399 /usr/lib/systemd/system-generators/torcx-generator[2083]: time="2024-02-09T18:42:24Z" level=info msg="torcx already run" Feb 9 18:42:24.582217 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 9 18:42:24.582239 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 9 18:42:24.599276 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 9 18:42:24.664916 systemd[1]: Stopping kubelet.service... Feb 9 18:42:24.684718 systemd[1]: kubelet.service: Deactivated successfully. 
Feb 9 18:42:24.685027 systemd[1]: Stopped kubelet.service. Feb 9 18:42:24.687129 systemd[1]: Started kubelet.service. Feb 9 18:42:24.744065 kubelet[2127]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI. Feb 9 18:42:24.744065 kubelet[2127]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 9 18:42:24.744065 kubelet[2127]: I0209 18:42:24.743711 2127 server.go:198] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 9 18:42:24.744906 kubelet[2127]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI. Feb 9 18:42:24.744906 kubelet[2127]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 9 18:42:24.747595 kubelet[2127]: I0209 18:42:24.747573 2127 server.go:412] "Kubelet version" kubeletVersion="v1.26.5" Feb 9 18:42:24.747680 kubelet[2127]: I0209 18:42:24.747669 2127 server.go:414] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 9 18:42:24.747909 kubelet[2127]: I0209 18:42:24.747893 2127 server.go:836] "Client rotation is on, will bootstrap in background" Feb 9 18:42:24.749111 kubelet[2127]: I0209 18:42:24.749086 2127 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Feb 9 18:42:24.750239 kubelet[2127]: I0209 18:42:24.750204 2127 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 9 18:42:24.751693 kubelet[2127]: W0209 18:42:24.751672 2127 machine.go:65] Cannot read vendor id correctly, set empty. Feb 9 18:42:24.752539 kubelet[2127]: I0209 18:42:24.752522 2127 server.go:659] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 9 18:42:24.753030 kubelet[2127]: I0209 18:42:24.753013 2127 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 9 18:42:24.753165 kubelet[2127]: I0209 18:42:24.753153 2127 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: KubeletOOMScoreAdj:-999 ContainerRuntime: CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:cgroupfs KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:nodefs.available Operator:LessThan Value:{Quantity: Percentage:0.1} GracePeriod:0s MinReclaim:} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity: Percentage:0.05} GracePeriod:0s MinReclaim:} {Signal:imagefs.available Operator:LessThan Value:{Quantity: Percentage:0.15} GracePeriod:0s MinReclaim:} {Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:}]} QOSReserved:map[] CPUManagerPolicy:none CPUManagerPolicyOptions:map[] ExperimentalTopologyManagerScope:container CPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none ExperimentalTopologyManagerPolicyOptions:map[]} Feb 9 18:42:24.753323 kubelet[2127]: I0209 18:42:24.753304 2127 topology_manager.go:134] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container" Feb 9 18:42:24.753407 kubelet[2127]: I0209 18:42:24.753396 2127 container_manager_linux.go:308] "Creating device plugin manager" Feb 9 18:42:24.753490 kubelet[2127]: I0209 18:42:24.753479 2127 state_mem.go:36] "Initialized new in-memory state store" Feb 9 18:42:24.756357 kubelet[2127]: I0209 18:42:24.756326 2127 kubelet.go:398] "Attempting to sync node with API server" Feb 9 18:42:24.756357 kubelet[2127]: I0209 18:42:24.756353 2127 kubelet.go:286] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 9 18:42:24.756440 kubelet[2127]: I0209 18:42:24.756375 2127 kubelet.go:297] "Adding apiserver pod source" Feb 9 18:42:24.756440 kubelet[2127]: I0209 18:42:24.756386 2127 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 9 18:42:24.762216 kubelet[2127]: I0209 18:42:24.762192 2127 kuberuntime_manager.go:244] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Feb 9 18:42:24.762683 kubelet[2127]: I0209 18:42:24.762656 2127 server.go:1186] "Started kubelet" Feb 9 18:42:24.762865 kubelet[2127]: I0209 18:42:24.762842 2127 server.go:161] "Starting to listen" address="0.0.0.0" port=10250 Feb 9 18:42:24.763892 kubelet[2127]: E0209 18:42:24.763716 2127 cri_stats_provider.go:455] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Feb 9 18:42:24.764306 kubelet[2127]: I0209 18:42:24.764284 2127 server.go:451] "Adding debug handlers to kubelet server" Feb 9 18:42:24.765195 kubelet[2127]: I0209 18:42:24.765174 2127 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 9 18:42:24.766203 kubelet[2127]: E0209 18:42:24.765206 2127 kubelet.go:1386] "Image garbage collection 
failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 9 18:42:24.772339 kubelet[2127]: I0209 18:42:24.772320 2127 volume_manager.go:293] "Starting Kubelet Volume Manager" Feb 9 18:42:24.772405 kubelet[2127]: I0209 18:42:24.772374 2127 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Feb 9 18:42:24.798445 kubelet[2127]: I0209 18:42:24.798417 2127 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv4 Feb 9 18:42:24.819343 kubelet[2127]: I0209 18:42:24.819324 2127 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv6 Feb 9 18:42:24.819343 kubelet[2127]: I0209 18:42:24.819345 2127 status_manager.go:176] "Starting to sync pod status with apiserver" Feb 9 18:42:24.819450 kubelet[2127]: I0209 18:42:24.819360 2127 kubelet.go:2113] "Starting kubelet main sync loop" Feb 9 18:42:24.819450 kubelet[2127]: E0209 18:42:24.819405 2127 kubelet.go:2137] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 9 18:42:24.839778 kubelet[2127]: I0209 18:42:24.839752 2127 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 9 18:42:24.839778 kubelet[2127]: I0209 18:42:24.839770 2127 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 9 18:42:24.839907 kubelet[2127]: I0209 18:42:24.839785 2127 state_mem.go:36] "Initialized new in-memory state store" Feb 9 18:42:24.839931 kubelet[2127]: I0209 18:42:24.839912 2127 state_mem.go:88] "Updated default CPUSet" cpuSet="" Feb 9 18:42:24.839931 kubelet[2127]: I0209 18:42:24.839924 2127 state_mem.go:96] "Updated CPUSet assignments" assignments=map[] Feb 9 18:42:24.839931 kubelet[2127]: I0209 18:42:24.839930 2127 policy_none.go:49] "None policy: Start" Feb 9 18:42:24.840469 kubelet[2127]: I0209 18:42:24.840442 2127 memory_manager.go:169] "Starting memorymanager" policy="None" Feb 9 18:42:24.840469 kubelet[2127]: I0209 18:42:24.840470 2127 state_mem.go:35] "Initializing new in-memory state store" Feb 9 18:42:24.840668 kubelet[2127]: I0209 18:42:24.840637 2127 state_mem.go:75] "Updated machine memory state" Feb 9 18:42:24.841762 kubelet[2127]: I0209 18:42:24.841744 2127 manager.go:455] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 9 18:42:24.843531 kubelet[2127]: I0209 18:42:24.843160 2127 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 9 18:42:24.859322 sudo[2180]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Feb 9 18:42:24.859515 sudo[2180]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0) Feb 9 18:42:24.875336 kubelet[2127]: I0209 18:42:24.875309 2127 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Feb 9 18:42:24.882083 kubelet[2127]: I0209 18:42:24.882056 2127 kubelet_node_status.go:108] "Node was previously registered" node="localhost" Feb 9 18:42:24.882148 kubelet[2127]: I0209 18:42:24.882125 2127 kubelet_node_status.go:73] "Successfully registered node" node="localhost" Feb 9 18:42:24.920097 kubelet[2127]: I0209 18:42:24.920072 2127 topology_manager.go:210] "Topology Admit Handler" Feb 9 18:42:24.920297 kubelet[2127]: I0209 18:42:24.920283 2127 topology_manager.go:210] "Topology Admit Handler" Feb 9 18:42:24.920418 kubelet[2127]: I0209 18:42:24.920404 2127 topology_manager.go:210] "Topology Admit Handler" Feb 9 18:42:25.073844 kubelet[2127]: I0209 
18:42:25.073801 2127 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/550020dd9f101bcc23e1d3c651841c4d-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"550020dd9f101bcc23e1d3c651841c4d\") " pod="kube-system/kube-controller-manager-localhost" Feb 9 18:42:25.074027 kubelet[2127]: I0209 18:42:25.073858 2127 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/550020dd9f101bcc23e1d3c651841c4d-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"550020dd9f101bcc23e1d3c651841c4d\") " pod="kube-system/kube-controller-manager-localhost" Feb 9 18:42:25.074027 kubelet[2127]: I0209 18:42:25.073889 2127 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/de249137550db01a22d1ed135820fd0f-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"de249137550db01a22d1ed135820fd0f\") " pod="kube-system/kube-apiserver-localhost" Feb 9 18:42:25.074027 kubelet[2127]: I0209 18:42:25.073909 2127 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/de249137550db01a22d1ed135820fd0f-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"de249137550db01a22d1ed135820fd0f\") " pod="kube-system/kube-apiserver-localhost" Feb 9 18:42:25.074027 kubelet[2127]: I0209 18:42:25.073930 2127 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/550020dd9f101bcc23e1d3c651841c4d-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"550020dd9f101bcc23e1d3c651841c4d\") " pod="kube-system/kube-controller-manager-localhost" Feb 9 18:42:25.074027 kubelet[2127]: I0209 18:42:25.073949 2127 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/550020dd9f101bcc23e1d3c651841c4d-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"550020dd9f101bcc23e1d3c651841c4d\") " pod="kube-system/kube-controller-manager-localhost" Feb 9 18:42:25.074150 kubelet[2127]: I0209 18:42:25.073971 2127 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/de249137550db01a22d1ed135820fd0f-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"de249137550db01a22d1ed135820fd0f\") " pod="kube-system/kube-apiserver-localhost" Feb 9 18:42:25.074150 kubelet[2127]: I0209 18:42:25.073993 2127 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/550020dd9f101bcc23e1d3c651841c4d-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"550020dd9f101bcc23e1d3c651841c4d\") " pod="kube-system/kube-controller-manager-localhost" Feb 9 18:42:25.074150 kubelet[2127]: I0209 18:42:25.074021 2127 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/72ae17a74a2eae76daac6d298477aff0-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"72ae17a74a2eae76daac6d298477aff0\") " pod="kube-system/kube-scheduler-localhost" Feb 9 
18:42:25.226212 kubelet[2127]: E0209 18:42:25.226172 2127 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:42:25.228109 kubelet[2127]: E0209 18:42:25.228076 2127 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:42:25.265591 kubelet[2127]: E0209 18:42:25.265556 2127 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:42:25.311768 sudo[2180]: pam_unix(sudo:session): session closed for user root Feb 9 18:42:25.760634 kubelet[2127]: I0209 18:42:25.760591 2127 apiserver.go:52] "Watching apiserver" Feb 9 18:42:25.772956 kubelet[2127]: I0209 18:42:25.772926 2127 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Feb 9 18:42:25.779172 kubelet[2127]: I0209 18:42:25.779144 2127 reconciler.go:41] "Reconciler: start to sync state" Feb 9 18:42:25.965787 kubelet[2127]: E0209 18:42:25.965758 2127 kubelet.go:1802] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Feb 9 18:42:25.966216 kubelet[2127]: E0209 18:42:25.966203 2127 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:42:26.365842 kubelet[2127]: E0209 18:42:26.365811 2127 kubelet.go:1802] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Feb 9 18:42:26.366434 kubelet[2127]: E0209 18:42:26.366419 2127 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:42:26.566316 kubelet[2127]: E0209 18:42:26.566280 2127 kubelet.go:1802] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Feb 9 18:42:26.566760 kubelet[2127]: E0209 18:42:26.566748 2127 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:42:26.634336 sudo[1337]: pam_unix(sudo:session): session closed for user root Feb 9 18:42:26.635606 sshd[1333]: pam_unix(sshd:session): session closed for user core Feb 9 18:42:26.638024 systemd[1]: sshd@4-10.0.0.121:22-10.0.0.1:59434.service: Deactivated successfully. Feb 9 18:42:26.639206 systemd[1]: session-5.scope: Deactivated successfully. Feb 9 18:42:26.639803 systemd-logind[1205]: Session 5 logged out. Waiting for processes to exit. Feb 9 18:42:26.640604 systemd-logind[1205]: Removed session 5. 
Feb 9 18:42:26.827837 kubelet[2127]: E0209 18:42:26.827804 2127 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:42:26.829055 kubelet[2127]: E0209 18:42:26.828143 2127 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:42:26.829366 kubelet[2127]: E0209 18:42:26.829350 2127 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:42:27.166720 kubelet[2127]: I0209 18:42:27.166693 2127 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=3.166645636 pod.CreationTimestamp="2024-02-09 18:42:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 18:42:26.769058831 +0000 UTC m=+2.078747758" watchObservedRunningTime="2024-02-09 18:42:27.166645636 +0000 UTC m=+2.476334563" Feb 9 18:42:27.566046 kubelet[2127]: I0209 18:42:27.566016 2127 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=3.56598034 pod.CreationTimestamp="2024-02-09 18:42:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 18:42:27.166933816 +0000 UTC m=+2.476622743" watchObservedRunningTime="2024-02-09 18:42:27.56598034 +0000 UTC m=+2.875669267" Feb 9 18:42:27.566198 kubelet[2127]: I0209 18:42:27.566134 2127 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=3.56611971 pod.CreationTimestamp="2024-02-09 18:42:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 18:42:27.565509401 +0000 UTC m=+2.875198328" watchObservedRunningTime="2024-02-09 18:42:27.56611971 +0000 UTC m=+2.875808637" Feb 9 18:42:27.829361 kubelet[2127]: E0209 18:42:27.829274 2127 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:42:27.837399 kubelet[2127]: E0209 18:42:27.837371 2127 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:42:32.212726 kubelet[2127]: E0209 18:42:32.212411 2127 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:42:32.835232 kubelet[2127]: E0209 18:42:32.835206 2127 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:42:36.100391 kubelet[2127]: E0209 18:42:36.097401 2127 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:42:36.840816 kubelet[2127]: E0209 18:42:36.840782 2127 dns.go:156] "Nameserver 
limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:42:37.847667 kubelet[2127]: E0209 18:42:37.847644 2127 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:42:39.167595 kubelet[2127]: I0209 18:42:39.167568 2127 kuberuntime_manager.go:1114] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Feb 9 18:42:39.168296 env[1219]: time="2024-02-09T18:42:39.168259782Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Feb 9 18:42:39.168738 kubelet[2127]: I0209 18:42:39.168709 2127 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Feb 9 18:42:39.670241 kubelet[2127]: I0209 18:42:39.670205 2127 topology_manager.go:210] "Topology Admit Handler" Feb 9 18:42:39.705471 kubelet[2127]: I0209 18:42:39.705430 2127 topology_manager.go:210] "Topology Admit Handler" Feb 9 18:42:39.773415 kubelet[2127]: I0209 18:42:39.773375 2127 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4c744df9-6651-4fb4-947a-a910338090e6-lib-modules\") pod \"cilium-rxwww\" (UID: \"4c744df9-6651-4fb4-947a-a910338090e6\") " pod="kube-system/cilium-rxwww" Feb 9 18:42:39.773415 kubelet[2127]: I0209 18:42:39.773423 2127 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/42bfeb82-96b9-49d7-b0cb-46dff5b90b37-xtables-lock\") pod \"kube-proxy-n2l28\" (UID: \"42bfeb82-96b9-49d7-b0cb-46dff5b90b37\") " pod="kube-system/kube-proxy-n2l28" Feb 9 18:42:39.773637 kubelet[2127]: I0209 18:42:39.773446 2127 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p446n\" (UniqueName: \"kubernetes.io/projected/42bfeb82-96b9-49d7-b0cb-46dff5b90b37-kube-api-access-p446n\") pod \"kube-proxy-n2l28\" (UID: \"42bfeb82-96b9-49d7-b0cb-46dff5b90b37\") " pod="kube-system/kube-proxy-n2l28" Feb 9 18:42:39.773637 kubelet[2127]: I0209 18:42:39.773467 2127 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/4c744df9-6651-4fb4-947a-a910338090e6-hostproc\") pod \"cilium-rxwww\" (UID: \"4c744df9-6651-4fb4-947a-a910338090e6\") " pod="kube-system/cilium-rxwww" Feb 9 18:42:39.773637 kubelet[2127]: I0209 18:42:39.773488 2127 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/4c744df9-6651-4fb4-947a-a910338090e6-cilium-cgroup\") pod \"cilium-rxwww\" (UID: \"4c744df9-6651-4fb4-947a-a910338090e6\") " pod="kube-system/cilium-rxwww" Feb 9 18:42:39.773637 kubelet[2127]: I0209 18:42:39.773508 2127 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4c744df9-6651-4fb4-947a-a910338090e6-etc-cni-netd\") pod \"cilium-rxwww\" (UID: \"4c744df9-6651-4fb4-947a-a910338090e6\") " pod="kube-system/cilium-rxwww" Feb 9 18:42:39.773637 kubelet[2127]: I0209 18:42:39.773530 2127 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: 
\"kubernetes.io/configmap/4c744df9-6651-4fb4-947a-a910338090e6-cilium-config-path\") pod \"cilium-rxwww\" (UID: \"4c744df9-6651-4fb4-947a-a910338090e6\") " pod="kube-system/cilium-rxwww" Feb 9 18:42:39.773637 kubelet[2127]: I0209 18:42:39.773576 2127 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/4c744df9-6651-4fb4-947a-a910338090e6-hubble-tls\") pod \"cilium-rxwww\" (UID: \"4c744df9-6651-4fb4-947a-a910338090e6\") " pod="kube-system/cilium-rxwww" Feb 9 18:42:39.773788 kubelet[2127]: I0209 18:42:39.773612 2127 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/4c744df9-6651-4fb4-947a-a910338090e6-cilium-run\") pod \"cilium-rxwww\" (UID: \"4c744df9-6651-4fb4-947a-a910338090e6\") " pod="kube-system/cilium-rxwww" Feb 9 18:42:39.773788 kubelet[2127]: I0209 18:42:39.773633 2127 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4c744df9-6651-4fb4-947a-a910338090e6-xtables-lock\") pod \"cilium-rxwww\" (UID: \"4c744df9-6651-4fb4-947a-a910338090e6\") " pod="kube-system/cilium-rxwww" Feb 9 18:42:39.773788 kubelet[2127]: I0209 18:42:39.773653 2127 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/4c744df9-6651-4fb4-947a-a910338090e6-clustermesh-secrets\") pod \"cilium-rxwww\" (UID: \"4c744df9-6651-4fb4-947a-a910338090e6\") " pod="kube-system/cilium-rxwww" Feb 9 18:42:39.773788 kubelet[2127]: I0209 18:42:39.773675 2127 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/4c744df9-6651-4fb4-947a-a910338090e6-host-proc-sys-net\") pod \"cilium-rxwww\" (UID: \"4c744df9-6651-4fb4-947a-a910338090e6\") " pod="kube-system/cilium-rxwww" Feb 9 18:42:39.773788 kubelet[2127]: I0209 18:42:39.773696 2127 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/42bfeb82-96b9-49d7-b0cb-46dff5b90b37-lib-modules\") pod \"kube-proxy-n2l28\" (UID: \"42bfeb82-96b9-49d7-b0cb-46dff5b90b37\") " pod="kube-system/kube-proxy-n2l28" Feb 9 18:42:39.773788 kubelet[2127]: I0209 18:42:39.773715 2127 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/4c744df9-6651-4fb4-947a-a910338090e6-host-proc-sys-kernel\") pod \"cilium-rxwww\" (UID: \"4c744df9-6651-4fb4-947a-a910338090e6\") " pod="kube-system/cilium-rxwww" Feb 9 18:42:39.773933 kubelet[2127]: I0209 18:42:39.773735 2127 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/42bfeb82-96b9-49d7-b0cb-46dff5b90b37-kube-proxy\") pod \"kube-proxy-n2l28\" (UID: \"42bfeb82-96b9-49d7-b0cb-46dff5b90b37\") " pod="kube-system/kube-proxy-n2l28" Feb 9 18:42:39.773933 kubelet[2127]: I0209 18:42:39.773755 2127 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/4c744df9-6651-4fb4-947a-a910338090e6-bpf-maps\") pod \"cilium-rxwww\" (UID: \"4c744df9-6651-4fb4-947a-a910338090e6\") " pod="kube-system/cilium-rxwww" Feb 9 
18:42:39.773933 kubelet[2127]: I0209 18:42:39.773774 2127 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lqf6z\" (UniqueName: \"kubernetes.io/projected/4c744df9-6651-4fb4-947a-a910338090e6-kube-api-access-lqf6z\") pod \"cilium-rxwww\" (UID: \"4c744df9-6651-4fb4-947a-a910338090e6\") " pod="kube-system/cilium-rxwww" Feb 9 18:42:39.773933 kubelet[2127]: I0209 18:42:39.773792 2127 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/4c744df9-6651-4fb4-947a-a910338090e6-cni-path\") pod \"cilium-rxwww\" (UID: \"4c744df9-6651-4fb4-947a-a910338090e6\") " pod="kube-system/cilium-rxwww" Feb 9 18:42:39.973784 kubelet[2127]: E0209 18:42:39.973668 2127 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:42:39.974367 env[1219]: time="2024-02-09T18:42:39.974304317Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-n2l28,Uid:42bfeb82-96b9-49d7-b0cb-46dff5b90b37,Namespace:kube-system,Attempt:0,}" Feb 9 18:42:39.995695 env[1219]: time="2024-02-09T18:42:39.995180975Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 18:42:39.995695 env[1219]: time="2024-02-09T18:42:39.995222133Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 18:42:39.995695 env[1219]: time="2024-02-09T18:42:39.995233253Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 18:42:39.995695 env[1219]: time="2024-02-09T18:42:39.995542123Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/ab4169a9d4c21166f0a812a6ef90573b095e2af7106bf61823be10401d2ee680 pid=2241 runtime=io.containerd.runc.v2 Feb 9 18:42:40.007907 kubelet[2127]: E0209 18:42:40.007881 2127 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:42:40.009020 env[1219]: time="2024-02-09T18:42:40.008967229Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-rxwww,Uid:4c744df9-6651-4fb4-947a-a910338090e6,Namespace:kube-system,Attempt:0,}" Feb 9 18:42:40.022953 env[1219]: time="2024-02-09T18:42:40.022764973Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 18:42:40.022953 env[1219]: time="2024-02-09T18:42:40.022802052Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 18:42:40.022953 env[1219]: time="2024-02-09T18:42:40.022812412Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 18:42:40.023565 env[1219]: time="2024-02-09T18:42:40.023516990Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/1dbb982ce309afb1e4a6abc0ade9b27742a4c00841d373a0d1c93cbfd80d756f pid=2267 runtime=io.containerd.runc.v2 Feb 9 18:42:40.080084 env[1219]: time="2024-02-09T18:42:40.080038846Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-n2l28,Uid:42bfeb82-96b9-49d7-b0cb-46dff5b90b37,Namespace:kube-system,Attempt:0,} returns sandbox id \"ab4169a9d4c21166f0a812a6ef90573b095e2af7106bf61823be10401d2ee680\"" Feb 9 18:42:40.081243 kubelet[2127]: E0209 18:42:40.080851 2127 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:42:40.084420 env[1219]: time="2024-02-09T18:42:40.084028046Z" level=info msg="CreateContainer within sandbox \"ab4169a9d4c21166f0a812a6ef90573b095e2af7106bf61823be10401d2ee680\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Feb 9 18:42:40.091938 env[1219]: time="2024-02-09T18:42:40.091761092Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-rxwww,Uid:4c744df9-6651-4fb4-947a-a910338090e6,Namespace:kube-system,Attempt:0,} returns sandbox id \"1dbb982ce309afb1e4a6abc0ade9b27742a4c00841d373a0d1c93cbfd80d756f\"" Feb 9 18:42:40.093170 kubelet[2127]: E0209 18:42:40.092536 2127 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:42:40.097063 env[1219]: time="2024-02-09T18:42:40.097021374Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Feb 9 18:42:40.099018 env[1219]: time="2024-02-09T18:42:40.098764161Z" level=info msg="CreateContainer within sandbox \"ab4169a9d4c21166f0a812a6ef90573b095e2af7106bf61823be10401d2ee680\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"d738d760deb90d9acd5b1e1e0761c1a4b2c65fa4f386e5e456e397989f131f2d\"" Feb 9 18:42:40.099877 env[1219]: time="2024-02-09T18:42:40.099747491Z" level=info msg="StartContainer for \"d738d760deb90d9acd5b1e1e0761c1a4b2c65fa4f386e5e456e397989f131f2d\"" Feb 9 18:42:40.167858 kubelet[2127]: I0209 18:42:40.166583 2127 topology_manager.go:210] "Topology Admit Handler" Feb 9 18:42:40.175633 kubelet[2127]: I0209 18:42:40.175602 2127 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ssmsg\" (UniqueName: \"kubernetes.io/projected/4afe2958-5dd4-4ab6-aa85-c5496a74ea92-kube-api-access-ssmsg\") pod \"cilium-operator-f59cbd8c6-bqfsk\" (UID: \"4afe2958-5dd4-4ab6-aa85-c5496a74ea92\") " pod="kube-system/cilium-operator-f59cbd8c6-bqfsk" Feb 9 18:42:40.175764 kubelet[2127]: I0209 18:42:40.175653 2127 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4afe2958-5dd4-4ab6-aa85-c5496a74ea92-cilium-config-path\") pod \"cilium-operator-f59cbd8c6-bqfsk\" (UID: \"4afe2958-5dd4-4ab6-aa85-c5496a74ea92\") " pod="kube-system/cilium-operator-f59cbd8c6-bqfsk" Feb 9 18:42:40.195390 env[1219]: time="2024-02-09T18:42:40.195343128Z" level=info msg="StartContainer for \"d738d760deb90d9acd5b1e1e0761c1a4b2c65fa4f386e5e456e397989f131f2d\" returns successfully" Feb 
9 18:42:40.770854 kubelet[2127]: E0209 18:42:40.770827 2127 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:42:40.771462 env[1219]: time="2024-02-09T18:42:40.771413435Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-f59cbd8c6-bqfsk,Uid:4afe2958-5dd4-4ab6-aa85-c5496a74ea92,Namespace:kube-system,Attempt:0,}" Feb 9 18:42:40.783731 env[1219]: time="2024-02-09T18:42:40.783666225Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 18:42:40.783846 env[1219]: time="2024-02-09T18:42:40.783707784Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 18:42:40.783846 env[1219]: time="2024-02-09T18:42:40.783717904Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 18:42:40.784031 env[1219]: time="2024-02-09T18:42:40.783995695Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/be8c1ce547b8a3a96da05c5f2b1618618d4a2377e98654f9bea08dbbd362c35d pid=2463 runtime=io.containerd.runc.v2 Feb 9 18:42:40.834722 env[1219]: time="2024-02-09T18:42:40.834674567Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-f59cbd8c6-bqfsk,Uid:4afe2958-5dd4-4ab6-aa85-c5496a74ea92,Namespace:kube-system,Attempt:0,} returns sandbox id \"be8c1ce547b8a3a96da05c5f2b1618618d4a2377e98654f9bea08dbbd362c35d\"" Feb 9 18:42:40.835196 kubelet[2127]: E0209 18:42:40.835178 2127 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:42:40.848929 kubelet[2127]: E0209 18:42:40.848905 2127 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:42:41.024406 update_engine[1207]: I0209 18:42:41.024299 1207 update_attempter.cc:509] Updating boot flags... Feb 9 18:42:41.852600 kubelet[2127]: E0209 18:42:41.851428 2127 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:42:43.418745 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1336490266.mount: Deactivated successfully. 
Feb 9 18:42:45.643645 env[1219]: time="2024-02-09T18:42:45.643601348Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:42:45.644984 env[1219]: time="2024-02-09T18:42:45.644955676Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:42:45.646854 env[1219]: time="2024-02-09T18:42:45.646829272Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:42:45.647451 env[1219]: time="2024-02-09T18:42:45.647410338Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Feb 9 18:42:45.648540 env[1219]: time="2024-02-09T18:42:45.648509832Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Feb 9 18:42:45.649757 env[1219]: time="2024-02-09T18:42:45.649662445Z" level=info msg="CreateContainer within sandbox \"1dbb982ce309afb1e4a6abc0ade9b27742a4c00841d373a0d1c93cbfd80d756f\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 9 18:42:45.667006 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1274507274.mount: Deactivated successfully. Feb 9 18:42:45.669648 env[1219]: time="2024-02-09T18:42:45.669612534Z" level=info msg="CreateContainer within sandbox \"1dbb982ce309afb1e4a6abc0ade9b27742a4c00841d373a0d1c93cbfd80d756f\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"784845ad526d4b704c169d641770db275dd0ce30f889f612f895903c5b11ab7f\"" Feb 9 18:42:45.671315 env[1219]: time="2024-02-09T18:42:45.671288974Z" level=info msg="StartContainer for \"784845ad526d4b704c169d641770db275dd0ce30f889f612f895903c5b11ab7f\"" Feb 9 18:42:45.740014 env[1219]: time="2024-02-09T18:42:45.739963353Z" level=info msg="StartContainer for \"784845ad526d4b704c169d641770db275dd0ce30f889f612f895903c5b11ab7f\" returns successfully" Feb 9 18:42:45.857574 kubelet[2127]: E0209 18:42:45.857103 2127 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:42:45.892882 kubelet[2127]: I0209 18:42:45.892839 2127 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-n2l28" podStartSLOduration=6.892807583 pod.CreationTimestamp="2024-02-09 18:42:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 18:42:40.877462996 +0000 UTC m=+16.187151963" watchObservedRunningTime="2024-02-09 18:42:45.892807583 +0000 UTC m=+21.202496470" Feb 9 18:42:45.922045 env[1219]: time="2024-02-09T18:42:45.921912936Z" level=info msg="shim disconnected" id=784845ad526d4b704c169d641770db275dd0ce30f889f612f895903c5b11ab7f Feb 9 18:42:45.922045 env[1219]: time="2024-02-09T18:42:45.921991814Z" level=warning msg="cleaning up after shim disconnected" 
id=784845ad526d4b704c169d641770db275dd0ce30f889f612f895903c5b11ab7f namespace=k8s.io Feb 9 18:42:45.922045 env[1219]: time="2024-02-09T18:42:45.922003854Z" level=info msg="cleaning up dead shim" Feb 9 18:42:45.929352 env[1219]: time="2024-02-09T18:42:45.929314681Z" level=warning msg="cleanup warnings time=\"2024-02-09T18:42:45Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2558 runtime=io.containerd.runc.v2\n" Feb 9 18:42:46.661490 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-784845ad526d4b704c169d641770db275dd0ce30f889f612f895903c5b11ab7f-rootfs.mount: Deactivated successfully. Feb 9 18:42:46.861597 kubelet[2127]: E0209 18:42:46.861569 2127 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:42:46.865781 env[1219]: time="2024-02-09T18:42:46.865712175Z" level=info msg="CreateContainer within sandbox \"1dbb982ce309afb1e4a6abc0ade9b27742a4c00841d373a0d1c93cbfd80d756f\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Feb 9 18:42:46.889127 env[1219]: time="2024-02-09T18:42:46.889077489Z" level=info msg="CreateContainer within sandbox \"1dbb982ce309afb1e4a6abc0ade9b27742a4c00841d373a0d1c93cbfd80d756f\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"69a4a8b5c377259ceaa59dc040fa852350b5321a63352106452928c5fadcb3a4\"" Feb 9 18:42:46.889740 env[1219]: time="2024-02-09T18:42:46.889713114Z" level=info msg="StartContainer for \"69a4a8b5c377259ceaa59dc040fa852350b5321a63352106452928c5fadcb3a4\"" Feb 9 18:42:46.970395 env[1219]: time="2024-02-09T18:42:46.970286178Z" level=info msg="StartContainer for \"69a4a8b5c377259ceaa59dc040fa852350b5321a63352106452928c5fadcb3a4\" returns successfully" Feb 9 18:42:46.987458 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 9 18:42:46.987701 systemd[1]: Stopped systemd-sysctl.service. Feb 9 18:42:46.987876 systemd[1]: Stopping systemd-sysctl.service... Feb 9 18:42:46.989564 systemd[1]: Starting systemd-sysctl.service... Feb 9 18:42:46.999811 systemd[1]: Finished systemd-sysctl.service. 
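(Note: the "shim disconnected" / "cleaning up after shim disconnected" / "cleanup warnings" triplet, followed by the run-containerd-...-rootfs.mount deactivation, is the normal exit path for a run-to-completion container under containerd's runc v2 shim. mount-cgroup and apply-sysctl-overwrites are Cilium init containers, so each exits as soon as its work is done and containerd reaps the per-container shim. A small Go sketch for splitting one of these containerd entries into its key=value fields; the regular expression is an assumption fitted to the format visible here, not a containerd API.

    package main

    import (
        "fmt"
        "regexp"
    )

    // Matches key=value pairs where the value is either a quoted string with
    // escapes (msg="...") or a bare token (level=info, id=784845ad...).
    var field = regexp.MustCompile(`(\w+)=(?:"((?:[^"\\]|\\.)*)"|(\S+))`)

    func main() {
        entry := `time="2024-02-09T18:42:45.921912936Z" level=info msg="shim disconnected" id=784845ad526d4b704c169d641770db275dd0ce30f889f612f895903c5b11ab7f`
        for _, m := range field.FindAllStringSubmatch(entry, -1) {
            value := m[2]
            if m[3] != "" {
                value = m[3]
            }
            fmt.Printf("%s: %s\n", m[1], value)
        }
    }

The same triplet repeats below for every init container in the Cilium pod.)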
Feb 9 18:42:47.043705 env[1219]: time="2024-02-09T18:42:47.043661367Z" level=info msg="shim disconnected" id=69a4a8b5c377259ceaa59dc040fa852350b5321a63352106452928c5fadcb3a4 Feb 9 18:42:47.043925 env[1219]: time="2024-02-09T18:42:47.043906002Z" level=warning msg="cleaning up after shim disconnected" id=69a4a8b5c377259ceaa59dc040fa852350b5321a63352106452928c5fadcb3a4 namespace=k8s.io Feb 9 18:42:47.043995 env[1219]: time="2024-02-09T18:42:47.043971200Z" level=info msg="cleaning up dead shim" Feb 9 18:42:47.052365 env[1219]: time="2024-02-09T18:42:47.052324860Z" level=warning msg="cleanup warnings time=\"2024-02-09T18:42:47Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2624 runtime=io.containerd.runc.v2\n" Feb 9 18:42:47.072388 env[1219]: time="2024-02-09T18:42:47.072336349Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:42:47.073861 env[1219]: time="2024-02-09T18:42:47.073832797Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:42:47.075230 env[1219]: time="2024-02-09T18:42:47.075185648Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:42:47.075637 env[1219]: time="2024-02-09T18:42:47.075608639Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Feb 9 18:42:47.079577 env[1219]: time="2024-02-09T18:42:47.079483075Z" level=info msg="CreateContainer within sandbox \"be8c1ce547b8a3a96da05c5f2b1618618d4a2377e98654f9bea08dbbd362c35d\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Feb 9 18:42:47.086894 env[1219]: time="2024-02-09T18:42:47.086835437Z" level=info msg="CreateContainer within sandbox \"be8c1ce547b8a3a96da05c5f2b1618618d4a2377e98654f9bea08dbbd362c35d\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"b9ea565599063f3f79af5945c086e57f922b66ba36650106dc658b621830195f\"" Feb 9 18:42:47.089202 env[1219]: time="2024-02-09T18:42:47.089169187Z" level=info msg="StartContainer for \"b9ea565599063f3f79af5945c086e57f922b66ba36650106dc658b621830195f\"" Feb 9 18:42:47.148154 env[1219]: time="2024-02-09T18:42:47.145407176Z" level=info msg="StartContainer for \"b9ea565599063f3f79af5945c086e57f922b66ba36650106dc658b621830195f\" returns successfully" Feb 9 18:42:47.662004 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-69a4a8b5c377259ceaa59dc040fa852350b5321a63352106452928c5fadcb3a4-rootfs.mount: Deactivated successfully. 
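(Note on the PullImage entries: the reference carries both a tag and a digest (operator-generic:v1.12.5@sha256:b296...), in which case the pull is resolved by digest and the tag is informational. The "returns image reference sha256:59357949..." value matches the ImageCreate event for that blob, i.e. it is the locally assigned image ID, which is why it differs from the manifest digest in the request. A sketch of splitting such a pinned reference; the naive colon split is fine for these references but would misfire on a registry host that includes a port.

    package main

    import (
        "fmt"
        "strings"
    )

    func main() {
        ref := "quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e"
        nameAndTag, digest, _ := strings.Cut(ref, "@")
        name, tag, _ := strings.Cut(nameAndTag, ":") // assumes no port in the registry host
        fmt.Println("name:  ", name)
        fmt.Println("tag:   ", tag)
        fmt.Println("digest:", digest)
    })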
Feb 9 18:42:47.863953 kubelet[2127]: E0209 18:42:47.863929 2127 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:42:47.865299 kubelet[2127]: E0209 18:42:47.865271 2127 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:42:47.866806 env[1219]: time="2024-02-09T18:42:47.866756522Z" level=info msg="CreateContainer within sandbox \"1dbb982ce309afb1e4a6abc0ade9b27742a4c00841d373a0d1c93cbfd80d756f\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Feb 9 18:42:47.877858 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount894688770.mount: Deactivated successfully. Feb 9 18:42:47.884416 env[1219]: time="2024-02-09T18:42:47.884358023Z" level=info msg="CreateContainer within sandbox \"1dbb982ce309afb1e4a6abc0ade9b27742a4c00841d373a0d1c93cbfd80d756f\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"8cb05e2a2387005e34a3bc2ff593023df7e16fa709ce56ce080ea2f8fcc2d647\"" Feb 9 18:42:47.884862 env[1219]: time="2024-02-09T18:42:47.884834332Z" level=info msg="StartContainer for \"8cb05e2a2387005e34a3bc2ff593023df7e16fa709ce56ce080ea2f8fcc2d647\"" Feb 9 18:42:47.920784 kubelet[2127]: I0209 18:42:47.920544 2127 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-f59cbd8c6-bqfsk" podStartSLOduration=-9.223372028934269e+09 pod.CreationTimestamp="2024-02-09 18:42:40 +0000 UTC" firstStartedPulling="2024-02-09 18:42:40.836794383 +0000 UTC m=+16.146483310" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 18:42:47.890266095 +0000 UTC m=+23.199954982" watchObservedRunningTime="2024-02-09 18:42:47.920507124 +0000 UTC m=+23.230196051" Feb 9 18:42:47.979111 env[1219]: time="2024-02-09T18:42:47.976608796Z" level=info msg="StartContainer for \"8cb05e2a2387005e34a3bc2ff593023df7e16fa709ce56ce080ea2f8fcc2d647\" returns successfully" Feb 9 18:42:48.019073 env[1219]: time="2024-02-09T18:42:48.019011340Z" level=info msg="shim disconnected" id=8cb05e2a2387005e34a3bc2ff593023df7e16fa709ce56ce080ea2f8fcc2d647 Feb 9 18:42:48.019073 env[1219]: time="2024-02-09T18:42:48.019065939Z" level=warning msg="cleaning up after shim disconnected" id=8cb05e2a2387005e34a3bc2ff593023df7e16fa709ce56ce080ea2f8fcc2d647 namespace=k8s.io Feb 9 18:42:48.019073 env[1219]: time="2024-02-09T18:42:48.019076658Z" level=info msg="cleaning up dead shim" Feb 9 18:42:48.026794 env[1219]: time="2024-02-09T18:42:48.026740501Z" level=warning msg="cleanup warnings time=\"2024-02-09T18:42:48Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2722 runtime=io.containerd.runc.v2\n" Feb 9 18:42:48.664770 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8cb05e2a2387005e34a3bc2ff593023df7e16fa709ce56ce080ea2f8fcc2d647-rootfs.mount: Deactivated successfully. 
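(Note: the podStartSLOduration=-9.223372028934269e+09 above is a cosmetic kubelet artifact, not a real measurement. lastFinishedPulling is still the zero time.Time ("0001-01-01 00:00:00 +0000 UTC"), and the gap between a 2024 timestamp and year 1 exceeds what time.Duration, a signed 64-bit nanosecond count, can hold, so the subtraction clamps near the minimum and prints as roughly -9.22 billion seconds; the low digits differ because kubelet folds in the other timestamps around the clamped value. A sketch reproducing the underflow:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        var lastFinishedPulling time.Time // zero value: 0001-01-01 00:00:00 UTC
        observed := time.Date(2024, 2, 9, 18, 42, 47, 0, time.UTC)

        // The gap is ~2023 years; time.Time.Sub clamps an out-of-range result
        // to the minimum Duration instead of wrapping.
        d := lastFinishedPulling.Sub(observed)
        fmt.Println(d.Seconds()) // ≈ -9.223372036854776e+09, matching the log's magnitude
    }

The same negative figure shows up again for cilium-rxwww and other pods further down, always with a zero-valued pull timestamp alongside it.)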
Feb 9 18:42:48.870820 kubelet[2127]: E0209 18:42:48.869724 2127 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:42:48.870820 kubelet[2127]: E0209 18:42:48.870442 2127 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:42:48.879098 env[1219]: time="2024-02-09T18:42:48.877667939Z" level=info msg="CreateContainer within sandbox \"1dbb982ce309afb1e4a6abc0ade9b27742a4c00841d373a0d1c93cbfd80d756f\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Feb 9 18:42:48.903081 env[1219]: time="2024-02-09T18:42:48.902095156Z" level=info msg="CreateContainer within sandbox \"1dbb982ce309afb1e4a6abc0ade9b27742a4c00841d373a0d1c93cbfd80d756f\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"f4b2c8b266e0704d44c91b047d0eabc548708b00235de475e0da6c8c77a1a4c0\"" Feb 9 18:42:48.903417 env[1219]: time="2024-02-09T18:42:48.903390770Z" level=info msg="StartContainer for \"f4b2c8b266e0704d44c91b047d0eabc548708b00235de475e0da6c8c77a1a4c0\"" Feb 9 18:42:48.981585 env[1219]: time="2024-02-09T18:42:48.981490961Z" level=info msg="StartContainer for \"f4b2c8b266e0704d44c91b047d0eabc548708b00235de475e0da6c8c77a1a4c0\" returns successfully" Feb 9 18:42:48.999394 env[1219]: time="2024-02-09T18:42:48.999348594Z" level=info msg="shim disconnected" id=f4b2c8b266e0704d44c91b047d0eabc548708b00235de475e0da6c8c77a1a4c0 Feb 9 18:42:48.999626 env[1219]: time="2024-02-09T18:42:48.999607748Z" level=warning msg="cleaning up after shim disconnected" id=f4b2c8b266e0704d44c91b047d0eabc548708b00235de475e0da6c8c77a1a4c0 namespace=k8s.io Feb 9 18:42:48.999703 env[1219]: time="2024-02-09T18:42:48.999690187Z" level=info msg="cleaning up dead shim" Feb 9 18:42:49.006393 env[1219]: time="2024-02-09T18:42:49.006357734Z" level=warning msg="cleanup warnings time=\"2024-02-09T18:42:49Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2777 runtime=io.containerd.runc.v2\n" Feb 9 18:42:49.661463 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f4b2c8b266e0704d44c91b047d0eabc548708b00235de475e0da6c8c77a1a4c0-rootfs.mount: Deactivated successfully. Feb 9 18:42:49.873057 kubelet[2127]: E0209 18:42:49.873028 2127 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:42:49.875260 env[1219]: time="2024-02-09T18:42:49.875206372Z" level=info msg="CreateContainer within sandbox \"1dbb982ce309afb1e4a6abc0ade9b27742a4c00841d373a0d1c93cbfd80d756f\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Feb 9 18:42:49.885989 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2467935087.mount: Deactivated successfully. 
Feb 9 18:42:49.889278 env[1219]: time="2024-02-09T18:42:49.889224176Z" level=info msg="CreateContainer within sandbox \"1dbb982ce309afb1e4a6abc0ade9b27742a4c00841d373a0d1c93cbfd80d756f\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"e29eff114c5665115421ae4cf70251a78a9c63c4d34ab1c1cf880dc349fe0809\"" Feb 9 18:42:49.890165 env[1219]: time="2024-02-09T18:42:49.890134478Z" level=info msg="StartContainer for \"e29eff114c5665115421ae4cf70251a78a9c63c4d34ab1c1cf880dc349fe0809\"" Feb 9 18:42:49.944525 env[1219]: time="2024-02-09T18:42:49.944413849Z" level=info msg="StartContainer for \"e29eff114c5665115421ae4cf70251a78a9c63c4d34ab1c1cf880dc349fe0809\" returns successfully" Feb 9 18:42:50.098834 kubelet[2127]: I0209 18:42:50.098805 2127 kubelet_node_status.go:493] "Fast updating node status as it just became ready" Feb 9 18:42:50.113523 kubelet[2127]: I0209 18:42:50.113483 2127 topology_manager.go:210] "Topology Admit Handler" Feb 9 18:42:50.117223 kubelet[2127]: I0209 18:42:50.117194 2127 topology_manager.go:210] "Topology Admit Handler" Feb 9 18:42:50.147429 kubelet[2127]: I0209 18:42:50.147401 2127 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/66578035-61ab-45a1-a52f-f1499f3c5cb0-config-volume\") pod \"coredns-787d4945fb-kr6n9\" (UID: \"66578035-61ab-45a1-a52f-f1499f3c5cb0\") " pod="kube-system/coredns-787d4945fb-kr6n9" Feb 9 18:42:50.147672 kubelet[2127]: I0209 18:42:50.147638 2127 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e9eb6d98-7422-4fa6-9f57-e56a75643c47-config-volume\") pod \"coredns-787d4945fb-p4cg7\" (UID: \"e9eb6d98-7422-4fa6-9f57-e56a75643c47\") " pod="kube-system/coredns-787d4945fb-p4cg7" Feb 9 18:42:50.147740 kubelet[2127]: I0209 18:42:50.147714 2127 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vvkjp\" (UniqueName: \"kubernetes.io/projected/66578035-61ab-45a1-a52f-f1499f3c5cb0-kube-api-access-vvkjp\") pod \"coredns-787d4945fb-kr6n9\" (UID: \"66578035-61ab-45a1-a52f-f1499f3c5cb0\") " pod="kube-system/coredns-787d4945fb-kr6n9" Feb 9 18:42:50.147788 kubelet[2127]: I0209 18:42:50.147754 2127 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4nm6l\" (UniqueName: \"kubernetes.io/projected/e9eb6d98-7422-4fa6-9f57-e56a75643c47-kube-api-access-4nm6l\") pod \"coredns-787d4945fb-p4cg7\" (UID: \"e9eb6d98-7422-4fa6-9f57-e56a75643c47\") " pod="kube-system/coredns-787d4945fb-p4cg7" Feb 9 18:42:50.209279 kernel: WARNING: Unprivileged eBPF is enabled, data leaks possible via Spectre v2 BHB attacks! Feb 9 18:42:50.406275 kernel: WARNING: Unprivileged eBPF is enabled, data leaks possible via Spectre v2 BHB attacks! 
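(Note on the two kernel warnings above, "Unprivileged eBPF is enabled, data leaks possible via Spectre v2 BHB attacks!": the kernel prints this on Spectre-BHB-affected cores whenever BPF programs are loaded while kernel.unprivileged_bpf_disabled is 0, and here the trigger is plausibly the cilium-agent attaching its datapath; the agent itself runs privileged, so the warning concerns the system-wide setting rather than Cilium. One way to inspect the knob from Go; setting it to 1 or 2 silences the warning.

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    func main() {
        // 0 = unprivileged BPF allowed (the state the kernel is warning about)
        // 1 = disabled, and the setting cannot be relaxed again until reboot
        // 2 = disabled, but an administrator may re-enable it later
        raw, err := os.ReadFile("/proc/sys/kernel/unprivileged_bpf_disabled")
        if err != nil {
            fmt.Fprintln(os.Stderr, "read sysctl:", err)
            os.Exit(1)
        }
        fmt.Println("kernel.unprivileged_bpf_disabled =", strings.TrimSpace(string(raw)))
    })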
Feb 9 18:42:50.416958 kubelet[2127]: E0209 18:42:50.416936 2127 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:42:50.417660 env[1219]: time="2024-02-09T18:42:50.417616670Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-kr6n9,Uid:66578035-61ab-45a1-a52f-f1499f3c5cb0,Namespace:kube-system,Attempt:0,}" Feb 9 18:42:50.428713 kubelet[2127]: E0209 18:42:50.428692 2127 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:42:50.429477 env[1219]: time="2024-02-09T18:42:50.429428527Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-p4cg7,Uid:e9eb6d98-7422-4fa6-9f57-e56a75643c47,Namespace:kube-system,Attempt:0,}" Feb 9 18:42:50.877056 kubelet[2127]: E0209 18:42:50.877032 2127 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:42:50.891057 kubelet[2127]: I0209 18:42:50.891022 2127 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-rxwww" podStartSLOduration=-9.223372024963793e+09 pod.CreationTimestamp="2024-02-09 18:42:39 +0000 UTC" firstStartedPulling="2024-02-09 18:42:40.096231677 +0000 UTC m=+15.405920604" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 18:42:50.890365546 +0000 UTC m=+26.200054473" watchObservedRunningTime="2024-02-09 18:42:50.890983174 +0000 UTC m=+26.200672101" Feb 9 18:42:51.878500 kubelet[2127]: E0209 18:42:51.878475 2127 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:42:52.031403 systemd-networkd[1101]: cilium_host: Link UP Feb 9 18:42:52.031560 systemd-networkd[1101]: cilium_net: Link UP Feb 9 18:42:52.034141 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready Feb 9 18:42:52.034210 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready Feb 9 18:42:52.032854 systemd-networkd[1101]: cilium_net: Gained carrier Feb 9 18:42:52.033722 systemd-networkd[1101]: cilium_host: Gained carrier Feb 9 18:42:52.033817 systemd-networkd[1101]: cilium_net: Gained IPv6LL Feb 9 18:42:52.033961 systemd-networkd[1101]: cilium_host: Gained IPv6LL Feb 9 18:42:52.117236 systemd-networkd[1101]: cilium_vxlan: Link UP Feb 9 18:42:52.117243 systemd-networkd[1101]: cilium_vxlan: Gained carrier Feb 9 18:42:52.419283 kernel: NET: Registered PF_ALG protocol family Feb 9 18:42:52.880447 kubelet[2127]: E0209 18:42:52.880418 2127 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:42:52.986920 systemd-networkd[1101]: lxc_health: Link UP Feb 9 18:42:52.995627 systemd-networkd[1101]: lxc_health: Gained carrier Feb 9 18:42:52.996341 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Feb 9 18:42:53.477911 systemd-networkd[1101]: lxc479a44a1edc1: Link UP Feb 9 18:42:53.499290 kernel: eth0: renamed from tmp18c28 Feb 9 18:42:53.505875 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Feb 9 18:42:53.505958 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc479a44a1edc1: link becomes 
ready Feb 9 18:42:53.506719 systemd-networkd[1101]: lxc479a44a1edc1: Gained carrier Feb 9 18:42:53.506862 systemd-networkd[1101]: lxc7b35d5af829b: Link UP Feb 9 18:42:53.514274 kernel: eth0: renamed from tmp38b76 Feb 9 18:42:53.524304 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc7b35d5af829b: link becomes ready Feb 9 18:42:53.524106 systemd-networkd[1101]: lxc7b35d5af829b: Gained carrier Feb 9 18:42:53.561990 systemd-networkd[1101]: cilium_vxlan: Gained IPv6LL Feb 9 18:42:54.010440 kubelet[2127]: E0209 18:42:54.010399 2127 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:42:54.265427 systemd-networkd[1101]: lxc_health: Gained IPv6LL Feb 9 18:42:54.714382 systemd-networkd[1101]: lxc7b35d5af829b: Gained IPv6LL Feb 9 18:42:54.882489 kubelet[2127]: E0209 18:42:54.882448 2127 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:42:55.162379 systemd-networkd[1101]: lxc479a44a1edc1: Gained IPv6LL Feb 9 18:42:55.883864 kubelet[2127]: E0209 18:42:55.883831 2127 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:42:57.028174 env[1219]: time="2024-02-09T18:42:57.027426795Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 18:42:57.028174 env[1219]: time="2024-02-09T18:42:57.027467714Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 18:42:57.028174 env[1219]: time="2024-02-09T18:42:57.027480474Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 18:42:57.028174 env[1219]: time="2024-02-09T18:42:57.027656752Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/18c280c91987f7f3f6fc2cad0fd3c96091ac3e5c2d0f55b36c2d4419cf09f8e8 pid=3346 runtime=io.containerd.runc.v2 Feb 9 18:42:57.028929 env[1219]: time="2024-02-09T18:42:57.028779616Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 18:42:57.028929 env[1219]: time="2024-02-09T18:42:57.028810335Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 18:42:57.028929 env[1219]: time="2024-02-09T18:42:57.028820175Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 18:42:57.029031 env[1219]: time="2024-02-09T18:42:57.028994893Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/38b76671d05e138c7a0261e01f6864301288deb43f9644ecb092755ec929e4ac pid=3355 runtime=io.containerd.runc.v2 Feb 9 18:42:57.041005 systemd[1]: run-containerd-runc-k8s.io-18c280c91987f7f3f6fc2cad0fd3c96091ac3e5c2d0f55b36c2d4419cf09f8e8-runc.rlXPfT.mount: Deactivated successfully. 
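(Note: the interface churn above is Cilium wiring the two CoreDNS pods. For each sandbox the CNI plugin creates a veth pair; the kernel log shows the container side being renamed from a temporary name to eth0, and the host side comes up as an lxc* device. At least in these entries, the temporary names are "tmp" plus the leading hex of the sandbox ID, which the RunPodSandbox results just below confirm. A sketch of that correlation, using the IDs visible in this log:

    package main

    import (
        "fmt"
        "strings"
    )

    func main() {
        // Sandbox IDs from the "starting signal loop" / RunPodSandbox entries.
        sandboxes := map[string]string{
            "18c280c91987f7f3f6fc2cad0fd3c96091ac3e5c2d0f55b36c2d4419cf09f8e8": "coredns-787d4945fb-p4cg7",
            "38b76671d05e138c7a0261e01f6864301288deb43f9644ecb092755ec929e4ac": "coredns-787d4945fb-kr6n9",
        }
        // Temporary names from the kernel's "eth0: renamed from tmpXXXXX" lines.
        for _, tmp := range []string{"tmp18c28", "tmp38b76"} {
            prefix := strings.TrimPrefix(tmp, "tmp")
            for id, pod := range sandboxes {
                if strings.HasPrefix(id, prefix) {
                    fmt.Printf("%s -> sandbox %s... (%s)\n", tmp, id[:12], pod)
                }
            }
        }
    })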
Feb 9 18:42:57.098388 systemd-resolved[1156]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 9 18:42:57.098735 systemd-resolved[1156]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 9 18:42:57.118078 env[1219]: time="2024-02-09T18:42:57.118041814Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-kr6n9,Uid:66578035-61ab-45a1-a52f-f1499f3c5cb0,Namespace:kube-system,Attempt:0,} returns sandbox id \"38b76671d05e138c7a0261e01f6864301288deb43f9644ecb092755ec929e4ac\"" Feb 9 18:42:57.118300 env[1219]: time="2024-02-09T18:42:57.118154373Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-p4cg7,Uid:e9eb6d98-7422-4fa6-9f57-e56a75643c47,Namespace:kube-system,Attempt:0,} returns sandbox id \"18c280c91987f7f3f6fc2cad0fd3c96091ac3e5c2d0f55b36c2d4419cf09f8e8\"" Feb 9 18:42:57.119573 kubelet[2127]: E0209 18:42:57.119552 2127 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:42:57.119831 kubelet[2127]: E0209 18:42:57.119735 2127 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:42:57.123259 env[1219]: time="2024-02-09T18:42:57.123218020Z" level=info msg="CreateContainer within sandbox \"38b76671d05e138c7a0261e01f6864301288deb43f9644ecb092755ec929e4ac\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 9 18:42:57.125560 env[1219]: time="2024-02-09T18:42:57.125528987Z" level=info msg="CreateContainer within sandbox \"18c280c91987f7f3f6fc2cad0fd3c96091ac3e5c2d0f55b36c2d4419cf09f8e8\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 9 18:42:57.137632 env[1219]: time="2024-02-09T18:42:57.137589334Z" level=info msg="CreateContainer within sandbox \"18c280c91987f7f3f6fc2cad0fd3c96091ac3e5c2d0f55b36c2d4419cf09f8e8\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"3ce861087c670754fa0c6b64f03fc3a778908cfad50479d6a7035291236aef84\"" Feb 9 18:42:57.138024 env[1219]: time="2024-02-09T18:42:57.137998048Z" level=info msg="StartContainer for \"3ce861087c670754fa0c6b64f03fc3a778908cfad50479d6a7035291236aef84\"" Feb 9 18:42:57.138742 env[1219]: time="2024-02-09T18:42:57.138703638Z" level=info msg="CreateContainer within sandbox \"38b76671d05e138c7a0261e01f6864301288deb43f9644ecb092755ec929e4ac\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"dce8b2cbdd509d8eb1b7f76328a60553d3758f7851fee4bb39498c2b3cbe1af7\"" Feb 9 18:42:57.139079 env[1219]: time="2024-02-09T18:42:57.139047273Z" level=info msg="StartContainer for \"dce8b2cbdd509d8eb1b7f76328a60553d3758f7851fee4bb39498c2b3cbe1af7\"" Feb 9 18:42:57.204959 env[1219]: time="2024-02-09T18:42:57.204921567Z" level=info msg="StartContainer for \"dce8b2cbdd509d8eb1b7f76328a60553d3758f7851fee4bb39498c2b3cbe1af7\" returns successfully" Feb 9 18:42:57.220506 env[1219]: time="2024-02-09T18:42:57.220405185Z" level=info msg="StartContainer for \"3ce861087c670754fa0c6b64f03fc3a778908cfad50479d6a7035291236aef84\" returns successfully" Feb 9 18:42:57.891401 kubelet[2127]: E0209 18:42:57.891264 2127 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:42:57.897145 
kubelet[2127]: E0209 18:42:57.897117 2127 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:42:57.914291 kubelet[2127]: I0209 18:42:57.911053 2127 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-787d4945fb-p4cg7" podStartSLOduration=17.911016552 pod.CreationTimestamp="2024-02-09 18:42:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 18:42:57.901166934 +0000 UTC m=+33.210855861" watchObservedRunningTime="2024-02-09 18:42:57.911016552 +0000 UTC m=+33.220705479" Feb 9 18:42:57.924013 kubelet[2127]: I0209 18:42:57.923431 2127 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-787d4945fb-kr6n9" podStartSLOduration=17.923347095 pod.CreationTimestamp="2024-02-09 18:42:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 18:42:57.920528336 +0000 UTC m=+33.230217303" watchObservedRunningTime="2024-02-09 18:42:57.923347095 +0000 UTC m=+33.233036022" Feb 9 18:42:58.898756 kubelet[2127]: E0209 18:42:58.898715 2127 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:42:58.899083 kubelet[2127]: E0209 18:42:58.898818 2127 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:42:59.900317 kubelet[2127]: E0209 18:42:59.900294 2127 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:42:59.900672 kubelet[2127]: E0209 18:42:59.900324 2127 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:43:02.666566 systemd[1]: Started sshd@5-10.0.0.121:22-10.0.0.1:56136.service. Feb 9 18:43:02.714448 sshd[3547]: Accepted publickey for core from 10.0.0.1 port 56136 ssh2: RSA SHA256:+mjqzBx0jAExxLmXQAd9A2jZt2TW46e6NSOiRFb14I8 Feb 9 18:43:02.716088 sshd[3547]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 18:43:02.722294 systemd-logind[1205]: New session 6 of user core. Feb 9 18:43:02.722325 systemd[1]: Started session-6.scope. Feb 9 18:43:02.880323 sshd[3547]: pam_unix(sshd:session): session closed for user core Feb 9 18:43:02.882719 systemd[1]: sshd@5-10.0.0.121:22-10.0.0.1:56136.service: Deactivated successfully. Feb 9 18:43:02.884129 systemd[1]: session-6.scope: Deactivated successfully. Feb 9 18:43:02.884700 systemd-logind[1205]: Session 6 logged out. Waiting for processes to exit. Feb 9 18:43:02.886367 systemd-logind[1205]: Removed session 6. Feb 9 18:43:07.883761 systemd[1]: Started sshd@6-10.0.0.121:22-10.0.0.1:56152.service. Feb 9 18:43:07.923421 sshd[3564]: Accepted publickey for core from 10.0.0.1 port 56152 ssh2: RSA SHA256:+mjqzBx0jAExxLmXQAd9A2jZt2TW46e6NSOiRFb14I8 Feb 9 18:43:07.924629 sshd[3564]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 18:43:07.928384 systemd-logind[1205]: New session 7 of user core. 
Feb 9 18:43:07.929163 systemd[1]: Started session-7.scope. Feb 9 18:43:08.038582 sshd[3564]: pam_unix(sshd:session): session closed for user core Feb 9 18:43:08.040897 systemd[1]: sshd@6-10.0.0.121:22-10.0.0.1:56152.service: Deactivated successfully. Feb 9 18:43:08.041855 systemd-logind[1205]: Session 7 logged out. Waiting for processes to exit. Feb 9 18:43:08.041915 systemd[1]: session-7.scope: Deactivated successfully. Feb 9 18:43:08.042609 systemd-logind[1205]: Removed session 7. Feb 9 18:43:13.041931 systemd[1]: Started sshd@7-10.0.0.121:22-10.0.0.1:53626.service. Feb 9 18:43:13.083111 sshd[3582]: Accepted publickey for core from 10.0.0.1 port 53626 ssh2: RSA SHA256:+mjqzBx0jAExxLmXQAd9A2jZt2TW46e6NSOiRFb14I8 Feb 9 18:43:13.084662 sshd[3582]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 18:43:13.088316 systemd-logind[1205]: New session 8 of user core. Feb 9 18:43:13.088836 systemd[1]: Started session-8.scope. Feb 9 18:43:13.194955 sshd[3582]: pam_unix(sshd:session): session closed for user core Feb 9 18:43:13.197829 systemd[1]: sshd@7-10.0.0.121:22-10.0.0.1:53626.service: Deactivated successfully. Feb 9 18:43:13.198769 systemd-logind[1205]: Session 8 logged out. Waiting for processes to exit. Feb 9 18:43:13.198811 systemd[1]: session-8.scope: Deactivated successfully. Feb 9 18:43:13.199620 systemd-logind[1205]: Removed session 8. Feb 9 18:43:18.197923 systemd[1]: Started sshd@8-10.0.0.121:22-10.0.0.1:53638.service. Feb 9 18:43:18.237077 sshd[3597]: Accepted publickey for core from 10.0.0.1 port 53638 ssh2: RSA SHA256:+mjqzBx0jAExxLmXQAd9A2jZt2TW46e6NSOiRFb14I8 Feb 9 18:43:18.238633 sshd[3597]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 18:43:18.242282 systemd-logind[1205]: New session 9 of user core. Feb 9 18:43:18.242761 systemd[1]: Started session-9.scope. Feb 9 18:43:18.347951 sshd[3597]: pam_unix(sshd:session): session closed for user core Feb 9 18:43:18.350319 systemd[1]: Started sshd@9-10.0.0.121:22-10.0.0.1:53640.service. Feb 9 18:43:18.351189 systemd-logind[1205]: Session 9 logged out. Waiting for processes to exit. Feb 9 18:43:18.351424 systemd[1]: sshd@8-10.0.0.121:22-10.0.0.1:53638.service: Deactivated successfully. Feb 9 18:43:18.352222 systemd[1]: session-9.scope: Deactivated successfully. Feb 9 18:43:18.352670 systemd-logind[1205]: Removed session 9. Feb 9 18:43:18.389154 sshd[3610]: Accepted publickey for core from 10.0.0.1 port 53640 ssh2: RSA SHA256:+mjqzBx0jAExxLmXQAd9A2jZt2TW46e6NSOiRFb14I8 Feb 9 18:43:18.390245 sshd[3610]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 18:43:18.393427 systemd-logind[1205]: New session 10 of user core. Feb 9 18:43:18.394295 systemd[1]: Started session-10.scope. Feb 9 18:43:19.230224 sshd[3610]: pam_unix(sshd:session): session closed for user core Feb 9 18:43:19.235155 systemd[1]: Started sshd@10-10.0.0.121:22-10.0.0.1:53656.service. Feb 9 18:43:19.238930 systemd[1]: sshd@9-10.0.0.121:22-10.0.0.1:53640.service: Deactivated successfully. Feb 9 18:43:19.242396 systemd[1]: session-10.scope: Deactivated successfully. Feb 9 18:43:19.243307 systemd-logind[1205]: Session 10 logged out. Waiting for processes to exit. Feb 9 18:43:19.247984 systemd-logind[1205]: Removed session 10. 
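(Note: the SSH entries above and the similar blocks that follow all show systemd socket activation at work: each inbound connection spawns a transient unit named sshd@<n>-<local addr>:<port>-<peer addr>:<port>.service, pam_unix records the session open and close, and logind tracks the login as session-<n>.scope. Decoding the unit name makes the connection tuple explicit; the regular expression below is fitted to the names in this log.

    package main

    import (
        "fmt"
        "regexp"
    )

    var unit = regexp.MustCompile(`^sshd@(\d+)-(.+):(\d+)-(.+):(\d+)\.service$`)

    func main() {
        m := unit.FindStringSubmatch("sshd@5-10.0.0.121:22-10.0.0.1:56136.service")
        if m == nil {
            return
        }
        fmt.Printf("connection #%s: %s:%s (local) <- %s:%s (peer)\n", m[1], m[2], m[3], m[4], m[5])
    }

The steadily increasing instance numbers (sshd@5 through sshd@21 below) are just the accept counter on the listening socket.)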
Feb 9 18:43:19.284991 sshd[3622]: Accepted publickey for core from 10.0.0.1 port 53656 ssh2: RSA SHA256:+mjqzBx0jAExxLmXQAd9A2jZt2TW46e6NSOiRFb14I8 Feb 9 18:43:19.286279 sshd[3622]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 18:43:19.290451 systemd-logind[1205]: New session 11 of user core. Feb 9 18:43:19.290724 systemd[1]: Started session-11.scope. Feb 9 18:43:19.402452 sshd[3622]: pam_unix(sshd:session): session closed for user core Feb 9 18:43:19.405018 systemd-logind[1205]: Session 11 logged out. Waiting for processes to exit. Feb 9 18:43:19.405222 systemd[1]: sshd@10-10.0.0.121:22-10.0.0.1:53656.service: Deactivated successfully. Feb 9 18:43:19.406033 systemd[1]: session-11.scope: Deactivated successfully. Feb 9 18:43:19.406468 systemd-logind[1205]: Removed session 11. Feb 9 18:43:24.405960 systemd[1]: Started sshd@11-10.0.0.121:22-10.0.0.1:40634.service. Feb 9 18:43:24.444964 sshd[3638]: Accepted publickey for core from 10.0.0.1 port 40634 ssh2: RSA SHA256:+mjqzBx0jAExxLmXQAd9A2jZt2TW46e6NSOiRFb14I8 Feb 9 18:43:24.446176 sshd[3638]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 18:43:24.449541 systemd-logind[1205]: New session 12 of user core. Feb 9 18:43:24.450424 systemd[1]: Started session-12.scope. Feb 9 18:43:24.559483 sshd[3638]: pam_unix(sshd:session): session closed for user core Feb 9 18:43:24.561866 systemd[1]: sshd@11-10.0.0.121:22-10.0.0.1:40634.service: Deactivated successfully. Feb 9 18:43:24.562857 systemd-logind[1205]: Session 12 logged out. Waiting for processes to exit. Feb 9 18:43:24.562926 systemd[1]: session-12.scope: Deactivated successfully. Feb 9 18:43:24.563672 systemd-logind[1205]: Removed session 12. Feb 9 18:43:29.564698 systemd[1]: Started sshd@12-10.0.0.121:22-10.0.0.1:40638.service. Feb 9 18:43:29.604678 sshd[3654]: Accepted publickey for core from 10.0.0.1 port 40638 ssh2: RSA SHA256:+mjqzBx0jAExxLmXQAd9A2jZt2TW46e6NSOiRFb14I8 Feb 9 18:43:29.606016 sshd[3654]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 18:43:29.611581 systemd-logind[1205]: New session 13 of user core. Feb 9 18:43:29.611923 systemd[1]: Started session-13.scope. Feb 9 18:43:29.732513 sshd[3654]: pam_unix(sshd:session): session closed for user core Feb 9 18:43:29.734989 systemd[1]: Started sshd@13-10.0.0.121:22-10.0.0.1:40654.service. Feb 9 18:43:29.735561 systemd[1]: sshd@12-10.0.0.121:22-10.0.0.1:40638.service: Deactivated successfully. Feb 9 18:43:29.736507 systemd-logind[1205]: Session 13 logged out. Waiting for processes to exit. Feb 9 18:43:29.736552 systemd[1]: session-13.scope: Deactivated successfully. Feb 9 18:43:29.737314 systemd-logind[1205]: Removed session 13. Feb 9 18:43:29.776402 sshd[3666]: Accepted publickey for core from 10.0.0.1 port 40654 ssh2: RSA SHA256:+mjqzBx0jAExxLmXQAd9A2jZt2TW46e6NSOiRFb14I8 Feb 9 18:43:29.778646 sshd[3666]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 18:43:29.783692 systemd-logind[1205]: New session 14 of user core. Feb 9 18:43:29.784590 systemd[1]: Started session-14.scope. Feb 9 18:43:29.987646 sshd[3666]: pam_unix(sshd:session): session closed for user core Feb 9 18:43:29.988871 systemd[1]: Started sshd@14-10.0.0.121:22-10.0.0.1:40656.service. Feb 9 18:43:29.992334 systemd-logind[1205]: Session 14 logged out. Waiting for processes to exit. Feb 9 18:43:29.992523 systemd[1]: sshd@13-10.0.0.121:22-10.0.0.1:40654.service: Deactivated successfully. 
Feb 9 18:43:29.993347 systemd[1]: session-14.scope: Deactivated successfully. Feb 9 18:43:29.993800 systemd-logind[1205]: Removed session 14. Feb 9 18:43:30.030546 sshd[3679]: Accepted publickey for core from 10.0.0.1 port 40656 ssh2: RSA SHA256:+mjqzBx0jAExxLmXQAd9A2jZt2TW46e6NSOiRFb14I8 Feb 9 18:43:30.031574 sshd[3679]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 18:43:30.037594 systemd[1]: Started session-15.scope. Feb 9 18:43:30.037802 systemd-logind[1205]: New session 15 of user core. Feb 9 18:43:30.807295 sshd[3679]: pam_unix(sshd:session): session closed for user core Feb 9 18:43:30.809497 systemd[1]: Started sshd@15-10.0.0.121:22-10.0.0.1:40672.service. Feb 9 18:43:30.810266 systemd[1]: sshd@14-10.0.0.121:22-10.0.0.1:40656.service: Deactivated successfully. Feb 9 18:43:30.812504 systemd-logind[1205]: Session 15 logged out. Waiting for processes to exit. Feb 9 18:43:30.812555 systemd[1]: session-15.scope: Deactivated successfully. Feb 9 18:43:30.813353 systemd-logind[1205]: Removed session 15. Feb 9 18:43:30.854043 sshd[3705]: Accepted publickey for core from 10.0.0.1 port 40672 ssh2: RSA SHA256:+mjqzBx0jAExxLmXQAd9A2jZt2TW46e6NSOiRFb14I8 Feb 9 18:43:30.855237 sshd[3705]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 18:43:30.858584 systemd-logind[1205]: New session 16 of user core. Feb 9 18:43:30.859437 systemd[1]: Started session-16.scope. Feb 9 18:43:31.070801 systemd[1]: Started sshd@16-10.0.0.121:22-10.0.0.1:40684.service. Feb 9 18:43:31.069090 sshd[3705]: pam_unix(sshd:session): session closed for user core Feb 9 18:43:31.074239 systemd[1]: sshd@15-10.0.0.121:22-10.0.0.1:40672.service: Deactivated successfully. Feb 9 18:43:31.075523 systemd[1]: session-16.scope: Deactivated successfully. Feb 9 18:43:31.075991 systemd-logind[1205]: Session 16 logged out. Waiting for processes to exit. Feb 9 18:43:31.077388 systemd-logind[1205]: Removed session 16. Feb 9 18:43:31.115472 sshd[3760]: Accepted publickey for core from 10.0.0.1 port 40684 ssh2: RSA SHA256:+mjqzBx0jAExxLmXQAd9A2jZt2TW46e6NSOiRFb14I8 Feb 9 18:43:31.116607 sshd[3760]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 18:43:31.120708 systemd-logind[1205]: New session 17 of user core. Feb 9 18:43:31.121244 systemd[1]: Started session-17.scope. Feb 9 18:43:31.229576 sshd[3760]: pam_unix(sshd:session): session closed for user core Feb 9 18:43:31.233502 systemd[1]: sshd@16-10.0.0.121:22-10.0.0.1:40684.service: Deactivated successfully. Feb 9 18:43:31.234445 systemd-logind[1205]: Session 17 logged out. Waiting for processes to exit. Feb 9 18:43:31.234492 systemd[1]: session-17.scope: Deactivated successfully. Feb 9 18:43:31.235198 systemd-logind[1205]: Removed session 17. Feb 9 18:43:36.233191 systemd[1]: Started sshd@17-10.0.0.121:22-10.0.0.1:45384.service. Feb 9 18:43:36.272691 sshd[3803]: Accepted publickey for core from 10.0.0.1 port 45384 ssh2: RSA SHA256:+mjqzBx0jAExxLmXQAd9A2jZt2TW46e6NSOiRFb14I8 Feb 9 18:43:36.273992 sshd[3803]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 18:43:36.277399 systemd-logind[1205]: New session 18 of user core. Feb 9 18:43:36.278244 systemd[1]: Started session-18.scope. Feb 9 18:43:36.383216 sshd[3803]: pam_unix(sshd:session): session closed for user core Feb 9 18:43:36.385890 systemd[1]: sshd@17-10.0.0.121:22-10.0.0.1:45384.service: Deactivated successfully. Feb 9 18:43:36.386841 systemd-logind[1205]: Session 18 logged out. Waiting for processes to exit. 
Feb 9 18:43:36.386892 systemd[1]: session-18.scope: Deactivated successfully. Feb 9 18:43:36.387601 systemd-logind[1205]: Removed session 18. Feb 9 18:43:41.386425 systemd[1]: Started sshd@18-10.0.0.121:22-10.0.0.1:45396.service. Feb 9 18:43:41.425731 sshd[3819]: Accepted publickey for core from 10.0.0.1 port 45396 ssh2: RSA SHA256:+mjqzBx0jAExxLmXQAd9A2jZt2TW46e6NSOiRFb14I8 Feb 9 18:43:41.427397 sshd[3819]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 18:43:41.430679 systemd-logind[1205]: New session 19 of user core. Feb 9 18:43:41.431538 systemd[1]: Started session-19.scope. Feb 9 18:43:41.537450 sshd[3819]: pam_unix(sshd:session): session closed for user core Feb 9 18:43:41.539755 systemd-logind[1205]: Session 19 logged out. Waiting for processes to exit. Feb 9 18:43:41.539955 systemd[1]: sshd@18-10.0.0.121:22-10.0.0.1:45396.service: Deactivated successfully. Feb 9 18:43:41.540783 systemd[1]: session-19.scope: Deactivated successfully. Feb 9 18:43:41.541190 systemd-logind[1205]: Removed session 19. Feb 9 18:43:44.821106 kubelet[2127]: E0209 18:43:44.821069 2127 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:43:45.820894 kubelet[2127]: E0209 18:43:45.820857 2127 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:43:46.540374 systemd[1]: Started sshd@19-10.0.0.121:22-10.0.0.1:53492.service. Feb 9 18:43:46.579462 sshd[3833]: Accepted publickey for core from 10.0.0.1 port 53492 ssh2: RSA SHA256:+mjqzBx0jAExxLmXQAd9A2jZt2TW46e6NSOiRFb14I8 Feb 9 18:43:46.581106 sshd[3833]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 18:43:46.584796 systemd-logind[1205]: New session 20 of user core. Feb 9 18:43:46.585125 systemd[1]: Started session-20.scope. Feb 9 18:43:46.686472 sshd[3833]: pam_unix(sshd:session): session closed for user core Feb 9 18:43:46.688822 systemd-logind[1205]: Session 20 logged out. Waiting for processes to exit. Feb 9 18:43:46.688970 systemd[1]: sshd@19-10.0.0.121:22-10.0.0.1:53492.service: Deactivated successfully. Feb 9 18:43:46.689812 systemd[1]: session-20.scope: Deactivated successfully. Feb 9 18:43:46.690343 systemd-logind[1205]: Removed session 20. Feb 9 18:43:51.689898 systemd[1]: Started sshd@20-10.0.0.121:22-10.0.0.1:53500.service. Feb 9 18:43:51.728940 sshd[3847]: Accepted publickey for core from 10.0.0.1 port 53500 ssh2: RSA SHA256:+mjqzBx0jAExxLmXQAd9A2jZt2TW46e6NSOiRFb14I8 Feb 9 18:43:51.730153 sshd[3847]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 18:43:51.733917 systemd-logind[1205]: New session 21 of user core. Feb 9 18:43:51.734405 systemd[1]: Started session-21.scope. Feb 9 18:43:51.837620 sshd[3847]: pam_unix(sshd:session): session closed for user core Feb 9 18:43:51.840030 systemd[1]: Started sshd@21-10.0.0.121:22-10.0.0.1:53504.service. Feb 9 18:43:51.840483 systemd[1]: sshd@20-10.0.0.121:22-10.0.0.1:53500.service: Deactivated successfully. Feb 9 18:43:51.841518 systemd-logind[1205]: Session 21 logged out. Waiting for processes to exit. Feb 9 18:43:51.841562 systemd[1]: session-21.scope: Deactivated successfully. Feb 9 18:43:51.842566 systemd-logind[1205]: Removed session 21. 
Feb 9 18:43:51.879661 sshd[3859]: Accepted publickey for core from 10.0.0.1 port 53504 ssh2: RSA SHA256:+mjqzBx0jAExxLmXQAd9A2jZt2TW46e6NSOiRFb14I8 Feb 9 18:43:51.880875 sshd[3859]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 18:43:51.884055 systemd-logind[1205]: New session 22 of user core. Feb 9 18:43:51.884813 systemd[1]: Started session-22.scope. Feb 9 18:43:53.416052 env[1219]: time="2024-02-09T18:43:53.416002838Z" level=info msg="StopContainer for \"b9ea565599063f3f79af5945c086e57f922b66ba36650106dc658b621830195f\" with timeout 30 (s)" Feb 9 18:43:53.418243 env[1219]: time="2024-02-09T18:43:53.418204019Z" level=info msg="Stop container \"b9ea565599063f3f79af5945c086e57f922b66ba36650106dc658b621830195f\" with signal terminated" Feb 9 18:43:53.428976 systemd[1]: run-containerd-runc-k8s.io-e29eff114c5665115421ae4cf70251a78a9c63c4d34ab1c1cf880dc349fe0809-runc.wtns9L.mount: Deactivated successfully. Feb 9 18:43:53.448326 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b9ea565599063f3f79af5945c086e57f922b66ba36650106dc658b621830195f-rootfs.mount: Deactivated successfully. Feb 9 18:43:53.458017 env[1219]: time="2024-02-09T18:43:53.457945001Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 9 18:43:53.461621 env[1219]: time="2024-02-09T18:43:53.461570796Z" level=info msg="shim disconnected" id=b9ea565599063f3f79af5945c086e57f922b66ba36650106dc658b621830195f Feb 9 18:43:53.461621 env[1219]: time="2024-02-09T18:43:53.461616596Z" level=warning msg="cleaning up after shim disconnected" id=b9ea565599063f3f79af5945c086e57f922b66ba36650106dc658b621830195f namespace=k8s.io Feb 9 18:43:53.461621 env[1219]: time="2024-02-09T18:43:53.461626597Z" level=info msg="cleaning up dead shim" Feb 9 18:43:53.463416 env[1219]: time="2024-02-09T18:43:53.463380853Z" level=info msg="StopContainer for \"e29eff114c5665115421ae4cf70251a78a9c63c4d34ab1c1cf880dc349fe0809\" with timeout 1 (s)" Feb 9 18:43:53.463826 env[1219]: time="2024-02-09T18:43:53.463798337Z" level=info msg="Stop container \"e29eff114c5665115421ae4cf70251a78a9c63c4d34ab1c1cf880dc349fe0809\" with signal terminated" Feb 9 18:43:53.469630 env[1219]: time="2024-02-09T18:43:53.469585713Z" level=warning msg="cleanup warnings time=\"2024-02-09T18:43:53Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3910 runtime=io.containerd.runc.v2\n" Feb 9 18:43:53.471649 systemd-networkd[1101]: lxc_health: Link DOWN Feb 9 18:43:53.471675 systemd-networkd[1101]: lxc_health: Lost carrier Feb 9 18:43:53.472277 env[1219]: time="2024-02-09T18:43:53.472223658Z" level=info msg="StopContainer for \"b9ea565599063f3f79af5945c086e57f922b66ba36650106dc658b621830195f\" returns successfully" Feb 9 18:43:53.473073 env[1219]: time="2024-02-09T18:43:53.473030706Z" level=info msg="StopPodSandbox for \"be8c1ce547b8a3a96da05c5f2b1618618d4a2377e98654f9bea08dbbd362c35d\"" Feb 9 18:43:53.473166 env[1219]: time="2024-02-09T18:43:53.473105867Z" level=info msg="Container to stop \"b9ea565599063f3f79af5945c086e57f922b66ba36650106dc658b621830195f\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 9 18:43:53.476141 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-be8c1ce547b8a3a96da05c5f2b1618618d4a2377e98654f9bea08dbbd362c35d-shm.mount: Deactivated successfully. 
Feb 9 18:43:53.505121 env[1219]: time="2024-02-09T18:43:53.505070494Z" level=info msg="shim disconnected" id=be8c1ce547b8a3a96da05c5f2b1618618d4a2377e98654f9bea08dbbd362c35d Feb 9 18:43:53.505121 env[1219]: time="2024-02-09T18:43:53.505116334Z" level=warning msg="cleaning up after shim disconnected" id=be8c1ce547b8a3a96da05c5f2b1618618d4a2377e98654f9bea08dbbd362c35d namespace=k8s.io Feb 9 18:43:53.505121 env[1219]: time="2024-02-09T18:43:53.505125814Z" level=info msg="cleaning up dead shim" Feb 9 18:43:53.512853 env[1219]: time="2024-02-09T18:43:53.512810688Z" level=warning msg="cleanup warnings time=\"2024-02-09T18:43:53Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3952 runtime=io.containerd.runc.v2\n" Feb 9 18:43:53.513325 env[1219]: time="2024-02-09T18:43:53.513293853Z" level=info msg="TearDown network for sandbox \"be8c1ce547b8a3a96da05c5f2b1618618d4a2377e98654f9bea08dbbd362c35d\" successfully" Feb 9 18:43:53.513430 env[1219]: time="2024-02-09T18:43:53.513410814Z" level=info msg="StopPodSandbox for \"be8c1ce547b8a3a96da05c5f2b1618618d4a2377e98654f9bea08dbbd362c35d\" returns successfully" Feb 9 18:43:53.524444 env[1219]: time="2024-02-09T18:43:53.524404360Z" level=info msg="shim disconnected" id=e29eff114c5665115421ae4cf70251a78a9c63c4d34ab1c1cf880dc349fe0809 Feb 9 18:43:53.524444 env[1219]: time="2024-02-09T18:43:53.524443880Z" level=warning msg="cleaning up after shim disconnected" id=e29eff114c5665115421ae4cf70251a78a9c63c4d34ab1c1cf880dc349fe0809 namespace=k8s.io Feb 9 18:43:53.524633 env[1219]: time="2024-02-09T18:43:53.524453040Z" level=info msg="cleaning up dead shim" Feb 9 18:43:53.532051 env[1219]: time="2024-02-09T18:43:53.532017633Z" level=warning msg="cleanup warnings time=\"2024-02-09T18:43:53Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3977 runtime=io.containerd.runc.v2\n" Feb 9 18:43:53.533817 env[1219]: time="2024-02-09T18:43:53.533775010Z" level=info msg="StopContainer for \"e29eff114c5665115421ae4cf70251a78a9c63c4d34ab1c1cf880dc349fe0809\" returns successfully" Feb 9 18:43:53.534224 env[1219]: time="2024-02-09T18:43:53.534195494Z" level=info msg="StopPodSandbox for \"1dbb982ce309afb1e4a6abc0ade9b27742a4c00841d373a0d1c93cbfd80d756f\"" Feb 9 18:43:53.534392 env[1219]: time="2024-02-09T18:43:53.534367975Z" level=info msg="Container to stop \"784845ad526d4b704c169d641770db275dd0ce30f889f612f895903c5b11ab7f\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 9 18:43:53.534468 env[1219]: time="2024-02-09T18:43:53.534451736Z" level=info msg="Container to stop \"69a4a8b5c377259ceaa59dc040fa852350b5321a63352106452928c5fadcb3a4\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 9 18:43:53.534530 env[1219]: time="2024-02-09T18:43:53.534514137Z" level=info msg="Container to stop \"8cb05e2a2387005e34a3bc2ff593023df7e16fa709ce56ce080ea2f8fcc2d647\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 9 18:43:53.534605 env[1219]: time="2024-02-09T18:43:53.534577417Z" level=info msg="Container to stop \"f4b2c8b266e0704d44c91b047d0eabc548708b00235de475e0da6c8c77a1a4c0\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 9 18:43:53.534677 env[1219]: time="2024-02-09T18:43:53.534660978Z" level=info msg="Container to stop \"e29eff114c5665115421ae4cf70251a78a9c63c4d34ab1c1cf880dc349fe0809\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 9 18:43:53.555299 env[1219]: time="2024-02-09T18:43:53.555237256Z" level=info 
msg="shim disconnected" id=1dbb982ce309afb1e4a6abc0ade9b27742a4c00841d373a0d1c93cbfd80d756f Feb 9 18:43:53.555299 env[1219]: time="2024-02-09T18:43:53.555296176Z" level=warning msg="cleaning up after shim disconnected" id=1dbb982ce309afb1e4a6abc0ade9b27742a4c00841d373a0d1c93cbfd80d756f namespace=k8s.io Feb 9 18:43:53.555299 env[1219]: time="2024-02-09T18:43:53.555305776Z" level=info msg="cleaning up dead shim" Feb 9 18:43:53.562585 env[1219]: time="2024-02-09T18:43:53.562429165Z" level=warning msg="cleanup warnings time=\"2024-02-09T18:43:53Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4009 runtime=io.containerd.runc.v2\n" Feb 9 18:43:53.562941 env[1219]: time="2024-02-09T18:43:53.562734288Z" level=info msg="TearDown network for sandbox \"1dbb982ce309afb1e4a6abc0ade9b27742a4c00841d373a0d1c93cbfd80d756f\" successfully" Feb 9 18:43:53.562941 env[1219]: time="2024-02-09T18:43:53.562757448Z" level=info msg="StopPodSandbox for \"1dbb982ce309afb1e4a6abc0ade9b27742a4c00841d373a0d1c93cbfd80d756f\" returns successfully" Feb 9 18:43:53.664034 kubelet[2127]: I0209 18:43:53.663984 2127 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ssmsg\" (UniqueName: \"kubernetes.io/projected/4afe2958-5dd4-4ab6-aa85-c5496a74ea92-kube-api-access-ssmsg\") pod \"4afe2958-5dd4-4ab6-aa85-c5496a74ea92\" (UID: \"4afe2958-5dd4-4ab6-aa85-c5496a74ea92\") " Feb 9 18:43:53.664034 kubelet[2127]: I0209 18:43:53.664032 2127 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/4c744df9-6651-4fb4-947a-a910338090e6-bpf-maps\") pod \"4c744df9-6651-4fb4-947a-a910338090e6\" (UID: \"4c744df9-6651-4fb4-947a-a910338090e6\") " Feb 9 18:43:53.664989 kubelet[2127]: I0209 18:43:53.664052 2127 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4c744df9-6651-4fb4-947a-a910338090e6-etc-cni-netd\") pod \"4c744df9-6651-4fb4-947a-a910338090e6\" (UID: \"4c744df9-6651-4fb4-947a-a910338090e6\") " Feb 9 18:43:53.664989 kubelet[2127]: I0209 18:43:53.664087 2127 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4c744df9-6651-4fb4-947a-a910338090e6-cilium-config-path\") pod \"4c744df9-6651-4fb4-947a-a910338090e6\" (UID: \"4c744df9-6651-4fb4-947a-a910338090e6\") " Feb 9 18:43:53.664989 kubelet[2127]: I0209 18:43:53.664106 2127 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/4c744df9-6651-4fb4-947a-a910338090e6-hubble-tls\") pod \"4c744df9-6651-4fb4-947a-a910338090e6\" (UID: \"4c744df9-6651-4fb4-947a-a910338090e6\") " Feb 9 18:43:53.664989 kubelet[2127]: I0209 18:43:53.664122 2127 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4c744df9-6651-4fb4-947a-a910338090e6-xtables-lock\") pod \"4c744df9-6651-4fb4-947a-a910338090e6\" (UID: \"4c744df9-6651-4fb4-947a-a910338090e6\") " Feb 9 18:43:53.664989 kubelet[2127]: I0209 18:43:53.664142 2127 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lqf6z\" (UniqueName: \"kubernetes.io/projected/4c744df9-6651-4fb4-947a-a910338090e6-kube-api-access-lqf6z\") pod \"4c744df9-6651-4fb4-947a-a910338090e6\" (UID: \"4c744df9-6651-4fb4-947a-a910338090e6\") " Feb 9 18:43:53.664989 kubelet[2127]: I0209 
18:43:53.664161 2127 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4afe2958-5dd4-4ab6-aa85-c5496a74ea92-cilium-config-path\") pod \"4afe2958-5dd4-4ab6-aa85-c5496a74ea92\" (UID: \"4afe2958-5dd4-4ab6-aa85-c5496a74ea92\") "
Feb 9 18:43:53.665307 kubelet[2127]: I0209 18:43:53.664179 2127 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/4c744df9-6651-4fb4-947a-a910338090e6-cni-path\") pod \"4c744df9-6651-4fb4-947a-a910338090e6\" (UID: \"4c744df9-6651-4fb4-947a-a910338090e6\") "
Feb 9 18:43:53.665307 kubelet[2127]: I0209 18:43:53.664199 2127 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/4c744df9-6651-4fb4-947a-a910338090e6-cilium-cgroup\") pod \"4c744df9-6651-4fb4-947a-a910338090e6\" (UID: \"4c744df9-6651-4fb4-947a-a910338090e6\") "
Feb 9 18:43:53.665307 kubelet[2127]: I0209 18:43:53.664636 2127 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4c744df9-6651-4fb4-947a-a910338090e6-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "4c744df9-6651-4fb4-947a-a910338090e6" (UID: "4c744df9-6651-4fb4-947a-a910338090e6"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 9 18:43:53.665307 kubelet[2127]: I0209 18:43:53.664974 2127 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4c744df9-6651-4fb4-947a-a910338090e6-cni-path" (OuterVolumeSpecName: "cni-path") pod "4c744df9-6651-4fb4-947a-a910338090e6" (UID: "4c744df9-6651-4fb4-947a-a910338090e6"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 9 18:43:53.665307 kubelet[2127]: I0209 18:43:53.665010 2127 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4c744df9-6651-4fb4-947a-a910338090e6-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "4c744df9-6651-4fb4-947a-a910338090e6" (UID: "4c744df9-6651-4fb4-947a-a910338090e6"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 9 18:43:53.665513 kubelet[2127]: I0209 18:43:53.665098 2127 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4c744df9-6651-4fb4-947a-a910338090e6-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "4c744df9-6651-4fb4-947a-a910338090e6" (UID: "4c744df9-6651-4fb4-947a-a910338090e6"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 9 18:43:53.665513 kubelet[2127]: I0209 18:43:53.665243 2127 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4c744df9-6651-4fb4-947a-a910338090e6-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "4c744df9-6651-4fb4-947a-a910338090e6" (UID: "4c744df9-6651-4fb4-947a-a910338090e6"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 9 18:43:53.665513 kubelet[2127]: W0209 18:43:53.665097 2127 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/4afe2958-5dd4-4ab6-aa85-c5496a74ea92/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled
Feb 9 18:43:53.665513 kubelet[2127]: W0209 18:43:53.665283 2127 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/4c744df9-6651-4fb4-947a-a910338090e6/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled
Feb 9 18:43:53.668479 kubelet[2127]: I0209 18:43:53.667410 2127 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4c744df9-6651-4fb4-947a-a910338090e6-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "4c744df9-6651-4fb4-947a-a910338090e6" (UID: "4c744df9-6651-4fb4-947a-a910338090e6"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 9 18:43:53.668479 kubelet[2127]: I0209 18:43:53.667451 2127 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4afe2958-5dd4-4ab6-aa85-c5496a74ea92-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "4afe2958-5dd4-4ab6-aa85-c5496a74ea92" (UID: "4afe2958-5dd4-4ab6-aa85-c5496a74ea92"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 9 18:43:53.668708 kubelet[2127]: I0209 18:43:53.668656 2127 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4c744df9-6651-4fb4-947a-a910338090e6-kube-api-access-lqf6z" (OuterVolumeSpecName: "kube-api-access-lqf6z") pod "4c744df9-6651-4fb4-947a-a910338090e6" (UID: "4c744df9-6651-4fb4-947a-a910338090e6"). InnerVolumeSpecName "kube-api-access-lqf6z". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 9 18:43:53.669212 kubelet[2127]: I0209 18:43:53.669186 2127 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4afe2958-5dd4-4ab6-aa85-c5496a74ea92-kube-api-access-ssmsg" (OuterVolumeSpecName: "kube-api-access-ssmsg") pod "4afe2958-5dd4-4ab6-aa85-c5496a74ea92" (UID: "4afe2958-5dd4-4ab6-aa85-c5496a74ea92"). InnerVolumeSpecName "kube-api-access-ssmsg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 9 18:43:53.670045 kubelet[2127]: I0209 18:43:53.670018 2127 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4c744df9-6651-4fb4-947a-a910338090e6-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "4c744df9-6651-4fb4-947a-a910338090e6" (UID: "4c744df9-6651-4fb4-947a-a910338090e6"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 9 18:43:53.765138 kubelet[2127]: I0209 18:43:53.764957 2127 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/4c744df9-6651-4fb4-947a-a910338090e6-hostproc\") pod \"4c744df9-6651-4fb4-947a-a910338090e6\" (UID: \"4c744df9-6651-4fb4-947a-a910338090e6\") "
Feb 9 18:43:53.765138 kubelet[2127]: I0209 18:43:53.764990 2127 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/4c744df9-6651-4fb4-947a-a910338090e6-cilium-run\") pod \"4c744df9-6651-4fb4-947a-a910338090e6\" (UID: \"4c744df9-6651-4fb4-947a-a910338090e6\") "
Feb 9 18:43:53.765138 kubelet[2127]: I0209 18:43:53.765010 2127 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/4c744df9-6651-4fb4-947a-a910338090e6-host-proc-sys-kernel\") pod \"4c744df9-6651-4fb4-947a-a910338090e6\" (UID: \"4c744df9-6651-4fb4-947a-a910338090e6\") "
Feb 9 18:43:53.765138 kubelet[2127]: I0209 18:43:53.765031 2127 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4c744df9-6651-4fb4-947a-a910338090e6-lib-modules\") pod \"4c744df9-6651-4fb4-947a-a910338090e6\" (UID: \"4c744df9-6651-4fb4-947a-a910338090e6\") "
Feb 9 18:43:53.765138 kubelet[2127]: I0209 18:43:53.765055 2127 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/4c744df9-6651-4fb4-947a-a910338090e6-clustermesh-secrets\") pod \"4c744df9-6651-4fb4-947a-a910338090e6\" (UID: \"4c744df9-6651-4fb4-947a-a910338090e6\") "
Feb 9 18:43:53.765138 kubelet[2127]: I0209 18:43:53.765073 2127 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/4c744df9-6651-4fb4-947a-a910338090e6-host-proc-sys-net\") pod \"4c744df9-6651-4fb4-947a-a910338090e6\" (UID: \"4c744df9-6651-4fb4-947a-a910338090e6\") "
Feb 9 18:43:53.765385 kubelet[2127]: I0209 18:43:53.765098 2127 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4c744df9-6651-4fb4-947a-a910338090e6-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "4c744df9-6651-4fb4-947a-a910338090e6" (UID: "4c744df9-6651-4fb4-947a-a910338090e6"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 9 18:43:53.765385 kubelet[2127]: I0209 18:43:53.765128 2127 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4c744df9-6651-4fb4-947a-a910338090e6-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "4c744df9-6651-4fb4-947a-a910338090e6" (UID: "4c744df9-6651-4fb4-947a-a910338090e6"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 9 18:43:53.765385 kubelet[2127]: I0209 18:43:53.765143 2127 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4c744df9-6651-4fb4-947a-a910338090e6-hostproc" (OuterVolumeSpecName: "hostproc") pod "4c744df9-6651-4fb4-947a-a910338090e6" (UID: "4c744df9-6651-4fb4-947a-a910338090e6"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 9 18:43:53.765385 kubelet[2127]: I0209 18:43:53.765144 2127 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4c744df9-6651-4fb4-947a-a910338090e6-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "4c744df9-6651-4fb4-947a-a910338090e6" (UID: "4c744df9-6651-4fb4-947a-a910338090e6"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 9 18:43:53.765385 kubelet[2127]: I0209 18:43:53.765158 2127 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4c744df9-6651-4fb4-947a-a910338090e6-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "4c744df9-6651-4fb4-947a-a910338090e6" (UID: "4c744df9-6651-4fb4-947a-a910338090e6"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 9 18:43:53.765503 kubelet[2127]: I0209 18:43:53.765204 2127 reconciler_common.go:295] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/4c744df9-6651-4fb4-947a-a910338090e6-cni-path\") on node \"localhost\" DevicePath \"\""
Feb 9 18:43:53.765503 kubelet[2127]: I0209 18:43:53.765236 2127 reconciler_common.go:295] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/4c744df9-6651-4fb4-947a-a910338090e6-cilium-cgroup\") on node \"localhost\" DevicePath \"\""
Feb 9 18:43:53.765503 kubelet[2127]: I0209 18:43:53.765269 2127 reconciler_common.go:295] "Volume detached for volume \"kube-api-access-ssmsg\" (UniqueName: \"kubernetes.io/projected/4afe2958-5dd4-4ab6-aa85-c5496a74ea92-kube-api-access-ssmsg\") on node \"localhost\" DevicePath \"\""
Feb 9 18:43:53.765503 kubelet[2127]: I0209 18:43:53.765281 2127 reconciler_common.go:295] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/4c744df9-6651-4fb4-947a-a910338090e6-bpf-maps\") on node \"localhost\" DevicePath \"\""
Feb 9 18:43:53.765503 kubelet[2127]: I0209 18:43:53.765290 2127 reconciler_common.go:295] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4c744df9-6651-4fb4-947a-a910338090e6-etc-cni-netd\") on node \"localhost\" DevicePath \"\""
Feb 9 18:43:53.765503 kubelet[2127]: I0209 18:43:53.765299 2127 reconciler_common.go:295] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4c744df9-6651-4fb4-947a-a910338090e6-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Feb 9 18:43:53.765503 kubelet[2127]: I0209 18:43:53.765308 2127 reconciler_common.go:295] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/4c744df9-6651-4fb4-947a-a910338090e6-hubble-tls\") on node \"localhost\" DevicePath \"\""
Feb 9 18:43:53.765503 kubelet[2127]: I0209 18:43:53.765317 2127 reconciler_common.go:295] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4c744df9-6651-4fb4-947a-a910338090e6-xtables-lock\") on node \"localhost\" DevicePath \"\""
Feb 9 18:43:53.765683 kubelet[2127]: I0209 18:43:53.765326 2127 reconciler_common.go:295] "Volume detached for volume \"kube-api-access-lqf6z\" (UniqueName: \"kubernetes.io/projected/4c744df9-6651-4fb4-947a-a910338090e6-kube-api-access-lqf6z\") on node \"localhost\" DevicePath \"\""
Feb 9 18:43:53.765683 kubelet[2127]: I0209 18:43:53.765346 2127 reconciler_common.go:295] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4afe2958-5dd4-4ab6-aa85-c5496a74ea92-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Feb 9 18:43:53.768656 kubelet[2127]: I0209 18:43:53.768612 2127 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4c744df9-6651-4fb4-947a-a910338090e6-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "4c744df9-6651-4fb4-947a-a910338090e6" (UID: "4c744df9-6651-4fb4-947a-a910338090e6"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 9 18:43:53.866142 kubelet[2127]: I0209 18:43:53.866093 2127 reconciler_common.go:295] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/4c744df9-6651-4fb4-947a-a910338090e6-hostproc\") on node \"localhost\" DevicePath \"\""
Feb 9 18:43:53.866142 kubelet[2127]: I0209 18:43:53.866143 2127 reconciler_common.go:295] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/4c744df9-6651-4fb4-947a-a910338090e6-cilium-run\") on node \"localhost\" DevicePath \"\""
Feb 9 18:43:53.866339 kubelet[2127]: I0209 18:43:53.866166 2127 reconciler_common.go:295] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/4c744df9-6651-4fb4-947a-a910338090e6-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\""
Feb 9 18:43:53.866339 kubelet[2127]: I0209 18:43:53.866182 2127 reconciler_common.go:295] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4c744df9-6651-4fb4-947a-a910338090e6-lib-modules\") on node \"localhost\" DevicePath \"\""
Feb 9 18:43:53.866339 kubelet[2127]: I0209 18:43:53.866201 2127 reconciler_common.go:295] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/4c744df9-6651-4fb4-947a-a910338090e6-clustermesh-secrets\") on node \"localhost\" DevicePath \"\""
Feb 9 18:43:53.866339 kubelet[2127]: I0209 18:43:53.866219 2127 reconciler_common.go:295] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/4c744df9-6651-4fb4-947a-a910338090e6-host-proc-sys-net\") on node \"localhost\" DevicePath \"\""
Feb 9 18:43:53.992501 kubelet[2127]: I0209 18:43:53.991678 2127 scope.go:115] "RemoveContainer" containerID="b9ea565599063f3f79af5945c086e57f922b66ba36650106dc658b621830195f"
Feb 9 18:43:53.995864 env[1219]: time="2024-02-09T18:43:53.995821567Z" level=info msg="RemoveContainer for \"b9ea565599063f3f79af5945c086e57f922b66ba36650106dc658b621830195f\""
Feb 9 18:43:53.999205 env[1219]: time="2024-02-09T18:43:53.999172640Z" level=info msg="RemoveContainer for \"b9ea565599063f3f79af5945c086e57f922b66ba36650106dc658b621830195f\" returns successfully"
Feb 9 18:43:53.999506 kubelet[2127]: I0209 18:43:53.999486 2127 scope.go:115] "RemoveContainer" containerID="b9ea565599063f3f79af5945c086e57f922b66ba36650106dc658b621830195f"
Feb 9 18:43:54.000006 env[1219]: time="2024-02-09T18:43:53.999931167Z" level=error msg="ContainerStatus for \"b9ea565599063f3f79af5945c086e57f922b66ba36650106dc658b621830195f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b9ea565599063f3f79af5945c086e57f922b66ba36650106dc658b621830195f\": not found"
Feb 9 18:43:54.001670 kubelet[2127]: E0209 18:43:54.001637 2127 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b9ea565599063f3f79af5945c086e57f922b66ba36650106dc658b621830195f\": not found" containerID="b9ea565599063f3f79af5945c086e57f922b66ba36650106dc658b621830195f"
Feb 9 18:43:54.001747 kubelet[2127]: I0209 18:43:54.001683 2127 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:b9ea565599063f3f79af5945c086e57f922b66ba36650106dc658b621830195f} err="failed to get container status \"b9ea565599063f3f79af5945c086e57f922b66ba36650106dc658b621830195f\": rpc error: code = NotFound desc = an error occurred when try to find container \"b9ea565599063f3f79af5945c086e57f922b66ba36650106dc658b621830195f\": not found"
Feb 9 18:43:54.001747 kubelet[2127]: I0209 18:43:54.001697 2127 scope.go:115] "RemoveContainer" containerID="e29eff114c5665115421ae4cf70251a78a9c63c4d34ab1c1cf880dc349fe0809"
Feb 9 18:43:54.002722 env[1219]: time="2024-02-09T18:43:54.002692472Z" level=info msg="RemoveContainer for \"e29eff114c5665115421ae4cf70251a78a9c63c4d34ab1c1cf880dc349fe0809\""
Feb 9 18:43:54.011451 env[1219]: time="2024-02-09T18:43:54.011370672Z" level=info msg="RemoveContainer for \"e29eff114c5665115421ae4cf70251a78a9c63c4d34ab1c1cf880dc349fe0809\" returns successfully"
Feb 9 18:43:54.011869 kubelet[2127]: I0209 18:43:54.011849 2127 scope.go:115] "RemoveContainer" containerID="f4b2c8b266e0704d44c91b047d0eabc548708b00235de475e0da6c8c77a1a4c0"
Feb 9 18:43:54.013704 env[1219]: time="2024-02-09T18:43:54.013574133Z" level=info msg="RemoveContainer for \"f4b2c8b266e0704d44c91b047d0eabc548708b00235de475e0da6c8c77a1a4c0\""
Feb 9 18:43:54.017658 env[1219]: time="2024-02-09T18:43:54.017603730Z" level=info msg="RemoveContainer for \"f4b2c8b266e0704d44c91b047d0eabc548708b00235de475e0da6c8c77a1a4c0\" returns successfully"
Feb 9 18:43:54.019394 kubelet[2127]: I0209 18:43:54.019369 2127 scope.go:115] "RemoveContainer" containerID="8cb05e2a2387005e34a3bc2ff593023df7e16fa709ce56ce080ea2f8fcc2d647"
Feb 9 18:43:54.020424 env[1219]: time="2024-02-09T18:43:54.020397235Z" level=info msg="RemoveContainer for \"8cb05e2a2387005e34a3bc2ff593023df7e16fa709ce56ce080ea2f8fcc2d647\""
Feb 9 18:43:54.022715 env[1219]: time="2024-02-09T18:43:54.022679056Z" level=info msg="RemoveContainer for \"8cb05e2a2387005e34a3bc2ff593023df7e16fa709ce56ce080ea2f8fcc2d647\" returns successfully"
Feb 9 18:43:54.022845 kubelet[2127]: I0209 18:43:54.022823 2127 scope.go:115] "RemoveContainer" containerID="69a4a8b5c377259ceaa59dc040fa852350b5321a63352106452928c5fadcb3a4"
Feb 9 18:43:54.027021 env[1219]: time="2024-02-09T18:43:54.026991136Z" level=info msg="RemoveContainer for \"69a4a8b5c377259ceaa59dc040fa852350b5321a63352106452928c5fadcb3a4\""
Feb 9 18:43:54.033714 env[1219]: time="2024-02-09T18:43:54.033683438Z" level=info msg="RemoveContainer for \"69a4a8b5c377259ceaa59dc040fa852350b5321a63352106452928c5fadcb3a4\" returns successfully"
Feb 9 18:43:54.034034 kubelet[2127]: I0209 18:43:54.033965 2127 scope.go:115] "RemoveContainer" containerID="784845ad526d4b704c169d641770db275dd0ce30f889f612f895903c5b11ab7f"
Feb 9 18:43:54.035303 env[1219]: time="2024-02-09T18:43:54.035278652Z" level=info msg="RemoveContainer for \"784845ad526d4b704c169d641770db275dd0ce30f889f612f895903c5b11ab7f\""
Feb 9 18:43:54.037554 env[1219]: time="2024-02-09T18:43:54.037526793Z" level=info msg="RemoveContainer for \"784845ad526d4b704c169d641770db275dd0ce30f889f612f895903c5b11ab7f\" returns successfully"
Feb 9 18:43:54.037789 kubelet[2127]: I0209 18:43:54.037771 2127 scope.go:115] "RemoveContainer" containerID="e29eff114c5665115421ae4cf70251a78a9c63c4d34ab1c1cf880dc349fe0809"
Feb 9 18:43:54.037993 env[1219]: time="2024-02-09T18:43:54.037935637Z" level=error msg="ContainerStatus for \"e29eff114c5665115421ae4cf70251a78a9c63c4d34ab1c1cf880dc349fe0809\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e29eff114c5665115421ae4cf70251a78a9c63c4d34ab1c1cf880dc349fe0809\": not found"
Feb 9 18:43:54.038106 kubelet[2127]: E0209 18:43:54.038088 2127 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e29eff114c5665115421ae4cf70251a78a9c63c4d34ab1c1cf880dc349fe0809\": not found" containerID="e29eff114c5665115421ae4cf70251a78a9c63c4d34ab1c1cf880dc349fe0809"
Feb 9 18:43:54.038151 kubelet[2127]: I0209 18:43:54.038123 2127 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:e29eff114c5665115421ae4cf70251a78a9c63c4d34ab1c1cf880dc349fe0809} err="failed to get container status \"e29eff114c5665115421ae4cf70251a78a9c63c4d34ab1c1cf880dc349fe0809\": rpc error: code = NotFound desc = an error occurred when try to find container \"e29eff114c5665115421ae4cf70251a78a9c63c4d34ab1c1cf880dc349fe0809\": not found"
Feb 9 18:43:54.038151 kubelet[2127]: I0209 18:43:54.038133 2127 scope.go:115] "RemoveContainer" containerID="f4b2c8b266e0704d44c91b047d0eabc548708b00235de475e0da6c8c77a1a4c0"
Feb 9 18:43:54.038322 env[1219]: time="2024-02-09T18:43:54.038276200Z" level=error msg="ContainerStatus for \"f4b2c8b266e0704d44c91b047d0eabc548708b00235de475e0da6c8c77a1a4c0\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f4b2c8b266e0704d44c91b047d0eabc548708b00235de475e0da6c8c77a1a4c0\": not found"
Feb 9 18:43:54.038421 kubelet[2127]: E0209 18:43:54.038406 2127 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f4b2c8b266e0704d44c91b047d0eabc548708b00235de475e0da6c8c77a1a4c0\": not found" containerID="f4b2c8b266e0704d44c91b047d0eabc548708b00235de475e0da6c8c77a1a4c0"
Feb 9 18:43:54.038465 kubelet[2127]: I0209 18:43:54.038436 2127 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:f4b2c8b266e0704d44c91b047d0eabc548708b00235de475e0da6c8c77a1a4c0} err="failed to get container status \"f4b2c8b266e0704d44c91b047d0eabc548708b00235de475e0da6c8c77a1a4c0\": rpc error: code = NotFound desc = an error occurred when try to find container \"f4b2c8b266e0704d44c91b047d0eabc548708b00235de475e0da6c8c77a1a4c0\": not found"
Feb 9 18:43:54.038465 kubelet[2127]: I0209 18:43:54.038446 2127 scope.go:115] "RemoveContainer" containerID="8cb05e2a2387005e34a3bc2ff593023df7e16fa709ce56ce080ea2f8fcc2d647"
Feb 9 18:43:54.038633 env[1219]: time="2024-02-09T18:43:54.038578963Z" level=error msg="ContainerStatus for \"8cb05e2a2387005e34a3bc2ff593023df7e16fa709ce56ce080ea2f8fcc2d647\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"8cb05e2a2387005e34a3bc2ff593023df7e16fa709ce56ce080ea2f8fcc2d647\": not found"
Feb 9 18:43:54.038735 kubelet[2127]: E0209 18:43:54.038720 2127 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"8cb05e2a2387005e34a3bc2ff593023df7e16fa709ce56ce080ea2f8fcc2d647\": not found" containerID="8cb05e2a2387005e34a3bc2ff593023df7e16fa709ce56ce080ea2f8fcc2d647"
Feb 9 18:43:54.038777 kubelet[2127]: I0209 18:43:54.038747 2127 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:8cb05e2a2387005e34a3bc2ff593023df7e16fa709ce56ce080ea2f8fcc2d647} err="failed to get container status \"8cb05e2a2387005e34a3bc2ff593023df7e16fa709ce56ce080ea2f8fcc2d647\": rpc error: code = NotFound desc = an error occurred when try to find container \"8cb05e2a2387005e34a3bc2ff593023df7e16fa709ce56ce080ea2f8fcc2d647\": not found"
Feb 9 18:43:54.038777 kubelet[2127]: I0209 18:43:54.038757 2127 scope.go:115] "RemoveContainer" containerID="69a4a8b5c377259ceaa59dc040fa852350b5321a63352106452928c5fadcb3a4"
Feb 9 18:43:54.038947 env[1219]: time="2024-02-09T18:43:54.038890885Z" level=error msg="ContainerStatus for \"69a4a8b5c377259ceaa59dc040fa852350b5321a63352106452928c5fadcb3a4\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"69a4a8b5c377259ceaa59dc040fa852350b5321a63352106452928c5fadcb3a4\": not found"
Feb 9 18:43:54.039046 kubelet[2127]: E0209 18:43:54.039031 2127 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"69a4a8b5c377259ceaa59dc040fa852350b5321a63352106452928c5fadcb3a4\": not found" containerID="69a4a8b5c377259ceaa59dc040fa852350b5321a63352106452928c5fadcb3a4"
Feb 9 18:43:54.039092 kubelet[2127]: I0209 18:43:54.039057 2127 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:69a4a8b5c377259ceaa59dc040fa852350b5321a63352106452928c5fadcb3a4} err="failed to get container status \"69a4a8b5c377259ceaa59dc040fa852350b5321a63352106452928c5fadcb3a4\": rpc error: code = NotFound desc = an error occurred when try to find container \"69a4a8b5c377259ceaa59dc040fa852350b5321a63352106452928c5fadcb3a4\": not found"
Feb 9 18:43:54.039092 kubelet[2127]: I0209 18:43:54.039066 2127 scope.go:115] "RemoveContainer" containerID="784845ad526d4b704c169d641770db275dd0ce30f889f612f895903c5b11ab7f"
Feb 9 18:43:54.039341 env[1219]: time="2024-02-09T18:43:54.039283689Z" level=error msg="ContainerStatus for \"784845ad526d4b704c169d641770db275dd0ce30f889f612f895903c5b11ab7f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"784845ad526d4b704c169d641770db275dd0ce30f889f612f895903c5b11ab7f\": not found"
Feb 9 18:43:54.039517 kubelet[2127]: E0209 18:43:54.039504 2127 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"784845ad526d4b704c169d641770db275dd0ce30f889f612f895903c5b11ab7f\": not found" containerID="784845ad526d4b704c169d641770db275dd0ce30f889f612f895903c5b11ab7f"
Feb 9 18:43:54.039571 kubelet[2127]: I0209 18:43:54.039528 2127 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:784845ad526d4b704c169d641770db275dd0ce30f889f612f895903c5b11ab7f} err="failed to get container status \"784845ad526d4b704c169d641770db275dd0ce30f889f612f895903c5b11ab7f\": rpc error: code = NotFound desc = an error occurred when try to find container \"784845ad526d4b704c169d641770db275dd0ce30f889f612f895903c5b11ab7f\": not found"
Feb 9 18:43:54.422047 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e29eff114c5665115421ae4cf70251a78a9c63c4d34ab1c1cf880dc349fe0809-rootfs.mount: Deactivated successfully.
Feb 9 18:43:54.422189 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-be8c1ce547b8a3a96da05c5f2b1618618d4a2377e98654f9bea08dbbd362c35d-rootfs.mount: Deactivated successfully.
Feb 9 18:43:54.422284 systemd[1]: var-lib-kubelet-pods-4afe2958\x2d5dd4\x2d4ab6\x2daa85\x2dc5496a74ea92-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dssmsg.mount: Deactivated successfully.
Feb 9 18:43:54.422370 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1dbb982ce309afb1e4a6abc0ade9b27742a4c00841d373a0d1c93cbfd80d756f-rootfs.mount: Deactivated successfully.
Feb 9 18:43:54.422445 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-1dbb982ce309afb1e4a6abc0ade9b27742a4c00841d373a0d1c93cbfd80d756f-shm.mount: Deactivated successfully.
Feb 9 18:43:54.422527 systemd[1]: var-lib-kubelet-pods-4c744df9\x2d6651\x2d4fb4\x2d947a\x2da910338090e6-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dlqf6z.mount: Deactivated successfully.
Feb 9 18:43:54.422613 systemd[1]: var-lib-kubelet-pods-4c744df9\x2d6651\x2d4fb4\x2d947a\x2da910338090e6-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Feb 9 18:43:54.422693 systemd[1]: var-lib-kubelet-pods-4c744df9\x2d6651\x2d4fb4\x2d947a\x2da910338090e6-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Feb 9 18:43:54.822450 kubelet[2127]: I0209 18:43:54.822421 2127 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=4afe2958-5dd4-4ab6-aa85-c5496a74ea92 path="/var/lib/kubelet/pods/4afe2958-5dd4-4ab6-aa85-c5496a74ea92/volumes"
Feb 9 18:43:54.822879 kubelet[2127]: I0209 18:43:54.822856 2127 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=4c744df9-6651-4fb4-947a-a910338090e6 path="/var/lib/kubelet/pods/4c744df9-6651-4fb4-947a-a910338090e6/volumes"
Feb 9 18:43:54.859870 kubelet[2127]: E0209 18:43:54.859845 2127 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 9 18:43:55.370644 sshd[3859]: pam_unix(sshd:session): session closed for user core
Feb 9 18:43:55.373287 systemd[1]: Started sshd@22-10.0.0.121:22-10.0.0.1:60682.service.
Feb 9 18:43:55.373949 systemd[1]: sshd@21-10.0.0.121:22-10.0.0.1:53504.service: Deactivated successfully.
Feb 9 18:43:55.375095 systemd[1]: session-22.scope: Deactivated successfully.
Feb 9 18:43:55.375151 systemd-logind[1205]: Session 22 logged out. Waiting for processes to exit.
Feb 9 18:43:55.376516 systemd-logind[1205]: Removed session 22.
Feb 9 18:43:55.414990 sshd[4027]: Accepted publickey for core from 10.0.0.1 port 60682 ssh2: RSA SHA256:+mjqzBx0jAExxLmXQAd9A2jZt2TW46e6NSOiRFb14I8
Feb 9 18:43:55.416445 sshd[4027]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 18:43:55.419934 systemd-logind[1205]: New session 23 of user core.
Feb 9 18:43:55.420741 systemd[1]: Started session-23.scope.
Feb 9 18:43:56.484736 kubelet[2127]: I0209 18:43:56.484709 2127 setters.go:548] "Node became not ready" node="localhost" condition={Type:Ready Status:False LastHeartbeatTime:2024-02-09 18:43:56.484626028 +0000 UTC m=+91.794314955 LastTransitionTime:2024-02-09 18:43:56.484626028 +0000 UTC m=+91.794314955 Reason:KubeletNotReady Message:container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized}
Feb 9 18:43:56.731505 sshd[4027]: pam_unix(sshd:session): session closed for user core
Feb 9 18:43:56.732153 systemd[1]: Started sshd@23-10.0.0.121:22-10.0.0.1:60688.service.
Feb 9 18:43:56.738109 kubelet[2127]: I0209 18:43:56.738067 2127 topology_manager.go:210] "Topology Admit Handler"
Feb 9 18:43:56.738295 kubelet[2127]: E0209 18:43:56.738241 2127 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="4c744df9-6651-4fb4-947a-a910338090e6" containerName="clean-cilium-state"
Feb 9 18:43:56.738371 kubelet[2127]: E0209 18:43:56.738361 2127 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="4c744df9-6651-4fb4-947a-a910338090e6" containerName="mount-cgroup"
Feb 9 18:43:56.738440 kubelet[2127]: E0209 18:43:56.738431 2127 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="4c744df9-6651-4fb4-947a-a910338090e6" containerName="apply-sysctl-overwrites"
Feb 9 18:43:56.738508 kubelet[2127]: E0209 18:43:56.738499 2127 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="4afe2958-5dd4-4ab6-aa85-c5496a74ea92" containerName="cilium-operator"
Feb 9 18:43:56.738564 kubelet[2127]: E0209 18:43:56.738556 2127 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="4c744df9-6651-4fb4-947a-a910338090e6" containerName="mount-bpf-fs"
Feb 9 18:43:56.738631 kubelet[2127]: E0209 18:43:56.738614 2127 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="4c744df9-6651-4fb4-947a-a910338090e6" containerName="cilium-agent"
Feb 9 18:43:56.738684 systemd[1]: sshd@22-10.0.0.121:22-10.0.0.1:60682.service: Deactivated successfully.
Feb 9 18:43:56.738795 kubelet[2127]: I0209 18:43:56.738785 2127 memory_manager.go:346] "RemoveStaleState removing state" podUID="4afe2958-5dd4-4ab6-aa85-c5496a74ea92" containerName="cilium-operator"
Feb 9 18:43:56.738854 kubelet[2127]: I0209 18:43:56.738839 2127 memory_manager.go:346] "RemoveStaleState removing state" podUID="4c744df9-6651-4fb4-947a-a910338090e6" containerName="cilium-agent"
Feb 9 18:43:56.739697 systemd-logind[1205]: Session 23 logged out. Waiting for processes to exit.
Feb 9 18:43:56.739725 systemd[1]: session-23.scope: Deactivated successfully.
Feb 9 18:43:56.750312 systemd-logind[1205]: Removed session 23.
Feb 9 18:43:56.777316 sshd[4040]: Accepted publickey for core from 10.0.0.1 port 60688 ssh2: RSA SHA256:+mjqzBx0jAExxLmXQAd9A2jZt2TW46e6NSOiRFb14I8
Feb 9 18:43:56.779232 sshd[4040]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 18:43:56.783294 systemd-logind[1205]: New session 24 of user core.
Feb 9 18:43:56.783576 systemd[1]: Started session-24.scope.
Feb 9 18:43:56.878389 kubelet[2127]: I0209 18:43:56.878347 2127 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/44d5b875-740f-4a31-8095-afbe3f25d1a8-cilium-run\") pod \"cilium-wcbvg\" (UID: \"44d5b875-740f-4a31-8095-afbe3f25d1a8\") " pod="kube-system/cilium-wcbvg"
Feb 9 18:43:56.878389 kubelet[2127]: I0209 18:43:56.878396 2127 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/44d5b875-740f-4a31-8095-afbe3f25d1a8-clustermesh-secrets\") pod \"cilium-wcbvg\" (UID: \"44d5b875-740f-4a31-8095-afbe3f25d1a8\") " pod="kube-system/cilium-wcbvg"
Feb 9 18:43:56.878566 kubelet[2127]: I0209 18:43:56.878489 2127 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/44d5b875-740f-4a31-8095-afbe3f25d1a8-cilium-config-path\") pod \"cilium-wcbvg\" (UID: \"44d5b875-740f-4a31-8095-afbe3f25d1a8\") " pod="kube-system/cilium-wcbvg"
Feb 9 18:43:56.878566 kubelet[2127]: I0209 18:43:56.878550 2127 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/44d5b875-740f-4a31-8095-afbe3f25d1a8-lib-modules\") pod \"cilium-wcbvg\" (UID: \"44d5b875-740f-4a31-8095-afbe3f25d1a8\") " pod="kube-system/cilium-wcbvg"
Feb 9 18:43:56.878632 kubelet[2127]: I0209 18:43:56.878579 2127 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/44d5b875-740f-4a31-8095-afbe3f25d1a8-bpf-maps\") pod \"cilium-wcbvg\" (UID: \"44d5b875-740f-4a31-8095-afbe3f25d1a8\") " pod="kube-system/cilium-wcbvg"
Feb 9 18:43:56.878632 kubelet[2127]: I0209 18:43:56.878601 2127 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/44d5b875-740f-4a31-8095-afbe3f25d1a8-hostproc\") pod \"cilium-wcbvg\" (UID: \"44d5b875-740f-4a31-8095-afbe3f25d1a8\") " pod="kube-system/cilium-wcbvg"
Feb 9 18:43:56.878689 kubelet[2127]: I0209 18:43:56.878644 2127 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/44d5b875-740f-4a31-8095-afbe3f25d1a8-cilium-cgroup\") pod \"cilium-wcbvg\" (UID: \"44d5b875-740f-4a31-8095-afbe3f25d1a8\") " pod="kube-system/cilium-wcbvg"
Feb 9 18:43:56.878689 kubelet[2127]: I0209 18:43:56.878672 2127 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rlwl7\" (UniqueName: \"kubernetes.io/projected/44d5b875-740f-4a31-8095-afbe3f25d1a8-kube-api-access-rlwl7\") pod \"cilium-wcbvg\" (UID: \"44d5b875-740f-4a31-8095-afbe3f25d1a8\") " pod="kube-system/cilium-wcbvg"
Feb 9 18:43:56.878789 kubelet[2127]: I0209 18:43:56.878764 2127 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/44d5b875-740f-4a31-8095-afbe3f25d1a8-cni-path\") pod \"cilium-wcbvg\" (UID: \"44d5b875-740f-4a31-8095-afbe3f25d1a8\") " pod="kube-system/cilium-wcbvg"
Feb 9 18:43:56.878824 kubelet[2127]: I0209 18:43:56.878799 2127 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/44d5b875-740f-4a31-8095-afbe3f25d1a8-hubble-tls\") pod \"cilium-wcbvg\" (UID: \"44d5b875-740f-4a31-8095-afbe3f25d1a8\") " pod="kube-system/cilium-wcbvg"
Feb 9 18:43:56.878824 kubelet[2127]: I0209 18:43:56.878821 2127 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/44d5b875-740f-4a31-8095-afbe3f25d1a8-etc-cni-netd\") pod \"cilium-wcbvg\" (UID: \"44d5b875-740f-4a31-8095-afbe3f25d1a8\") " pod="kube-system/cilium-wcbvg"
Feb 9 18:43:56.878878 kubelet[2127]: I0209 18:43:56.878842 2127 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/44d5b875-740f-4a31-8095-afbe3f25d1a8-xtables-lock\") pod \"cilium-wcbvg\" (UID: \"44d5b875-740f-4a31-8095-afbe3f25d1a8\") " pod="kube-system/cilium-wcbvg"
Feb 9 18:43:56.878878 kubelet[2127]: I0209 18:43:56.878865 2127 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/44d5b875-740f-4a31-8095-afbe3f25d1a8-host-proc-sys-net\") pod \"cilium-wcbvg\" (UID: \"44d5b875-740f-4a31-8095-afbe3f25d1a8\") " pod="kube-system/cilium-wcbvg"
Feb 9 18:43:56.878921 kubelet[2127]: I0209 18:43:56.878902 2127 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/44d5b875-740f-4a31-8095-afbe3f25d1a8-host-proc-sys-kernel\") pod \"cilium-wcbvg\" (UID: \"44d5b875-740f-4a31-8095-afbe3f25d1a8\") " pod="kube-system/cilium-wcbvg"
Feb 9 18:43:56.878957 kubelet[2127]: I0209 18:43:56.878945 2127 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/44d5b875-740f-4a31-8095-afbe3f25d1a8-cilium-ipsec-secrets\") pod \"cilium-wcbvg\" (UID: \"44d5b875-740f-4a31-8095-afbe3f25d1a8\") " pod="kube-system/cilium-wcbvg"
Feb 9 18:43:56.905969 sshd[4040]: pam_unix(sshd:session): session closed for user core
Feb 9 18:43:56.908469 systemd[1]: Started sshd@24-10.0.0.121:22-10.0.0.1:60694.service.
Feb 9 18:43:56.916798 systemd[1]: sshd@23-10.0.0.121:22-10.0.0.1:60688.service: Deactivated successfully.
Feb 9 18:43:56.919273 systemd[1]: session-24.scope: Deactivated successfully.
Feb 9 18:43:56.919536 systemd-logind[1205]: Session 24 logged out. Waiting for processes to exit.
Feb 9 18:43:56.922451 systemd-logind[1205]: Removed session 24.
Feb 9 18:43:56.948977 sshd[4054]: Accepted publickey for core from 10.0.0.1 port 60694 ssh2: RSA SHA256:+mjqzBx0jAExxLmXQAd9A2jZt2TW46e6NSOiRFb14I8
Feb 9 18:43:56.950146 sshd[4054]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 18:43:56.954172 systemd[1]: Started session-25.scope.
Feb 9 18:43:56.954313 systemd-logind[1205]: New session 25 of user core.
Feb 9 18:43:57.042793 kubelet[2127]: E0209 18:43:57.042760 2127 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 18:43:57.043474 env[1219]: time="2024-02-09T18:43:57.043436444Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-wcbvg,Uid:44d5b875-740f-4a31-8095-afbe3f25d1a8,Namespace:kube-system,Attempt:0,}"
Feb 9 18:43:57.056817 env[1219]: time="2024-02-09T18:43:57.056205187Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 9 18:43:57.056817 env[1219]: time="2024-02-09T18:43:57.056297508Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 9 18:43:57.056817 env[1219]: time="2024-02-09T18:43:57.056308668Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 9 18:43:57.057466 env[1219]: time="2024-02-09T18:43:57.057115715Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/a9e95bc468a3f32f192ca45b53f821fd650820795465a5c6bd5bf4778c0ce68e pid=4079 runtime=io.containerd.runc.v2
Feb 9 18:43:57.132053 env[1219]: time="2024-02-09T18:43:57.132011079Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-wcbvg,Uid:44d5b875-740f-4a31-8095-afbe3f25d1a8,Namespace:kube-system,Attempt:0,} returns sandbox id \"a9e95bc468a3f32f192ca45b53f821fd650820795465a5c6bd5bf4778c0ce68e\""
Feb 9 18:43:57.132621 kubelet[2127]: E0209 18:43:57.132595 2127 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 18:43:57.137318 env[1219]: time="2024-02-09T18:43:57.137278921Z" level=info msg="CreateContainer within sandbox \"a9e95bc468a3f32f192ca45b53f821fd650820795465a5c6bd5bf4778c0ce68e\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Feb 9 18:43:57.151532 env[1219]: time="2024-02-09T18:43:57.151480396Z" level=info msg="CreateContainer within sandbox \"a9e95bc468a3f32f192ca45b53f821fd650820795465a5c6bd5bf4778c0ce68e\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"c694b888e4b450f2c6fead1c86f174d3199a0a2730209150dc3fcedaacf4f291\""
Feb 9 18:43:57.152536 env[1219]: time="2024-02-09T18:43:57.152490524Z" level=info msg="StartContainer for \"c694b888e4b450f2c6fead1c86f174d3199a0a2730209150dc3fcedaacf4f291\""
Feb 9 18:43:57.202429 env[1219]: time="2024-02-09T18:43:57.202380726Z" level=info msg="StartContainer for \"c694b888e4b450f2c6fead1c86f174d3199a0a2730209150dc3fcedaacf4f291\" returns successfully"
Feb 9 18:43:57.233287 env[1219]: time="2024-02-09T18:43:57.233230415Z" level=info msg="shim disconnected" id=c694b888e4b450f2c6fead1c86f174d3199a0a2730209150dc3fcedaacf4f291
Feb 9 18:43:57.233287 env[1219]: time="2024-02-09T18:43:57.233288576Z" level=warning msg="cleaning up after shim disconnected" id=c694b888e4b450f2c6fead1c86f174d3199a0a2730209150dc3fcedaacf4f291 namespace=k8s.io
Feb 9 18:43:57.233506 env[1219]: time="2024-02-09T18:43:57.233298416Z" level=info msg="cleaning up dead shim"
Feb 9 18:43:57.240122 env[1219]: time="2024-02-09T18:43:57.240078591Z" level=warning msg="cleanup warnings time=\"2024-02-09T18:43:57Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4165 runtime=io.containerd.runc.v2\n"
Feb 9 18:43:58.009669 env[1219]: time="2024-02-09T18:43:58.009629637Z" level=info msg="StopPodSandbox for \"a9e95bc468a3f32f192ca45b53f821fd650820795465a5c6bd5bf4778c0ce68e\""
Feb 9 18:43:58.009902 env[1219]: time="2024-02-09T18:43:58.009878879Z" level=info msg="Container to stop \"c694b888e4b450f2c6fead1c86f174d3199a0a2730209150dc3fcedaacf4f291\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 9 18:43:58.013096 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a9e95bc468a3f32f192ca45b53f821fd650820795465a5c6bd5bf4778c0ce68e-shm.mount: Deactivated successfully.
Feb 9 18:43:58.034467 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a9e95bc468a3f32f192ca45b53f821fd650820795465a5c6bd5bf4778c0ce68e-rootfs.mount: Deactivated successfully.
Feb 9 18:43:58.038201 env[1219]: time="2024-02-09T18:43:58.038157937Z" level=info msg="shim disconnected" id=a9e95bc468a3f32f192ca45b53f821fd650820795465a5c6bd5bf4778c0ce68e
Feb 9 18:43:58.038361 env[1219]: time="2024-02-09T18:43:58.038222937Z" level=warning msg="cleaning up after shim disconnected" id=a9e95bc468a3f32f192ca45b53f821fd650820795465a5c6bd5bf4778c0ce68e namespace=k8s.io
Feb 9 18:43:58.038361 env[1219]: time="2024-02-09T18:43:58.038232937Z" level=info msg="cleaning up dead shim"
Feb 9 18:43:58.044927 env[1219]: time="2024-02-09T18:43:58.044887109Z" level=warning msg="cleanup warnings time=\"2024-02-09T18:43:58Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4197 runtime=io.containerd.runc.v2\n"
Feb 9 18:43:58.045213 env[1219]: time="2024-02-09T18:43:58.045185351Z" level=info msg="TearDown network for sandbox \"a9e95bc468a3f32f192ca45b53f821fd650820795465a5c6bd5bf4778c0ce68e\" successfully"
Feb 9 18:43:58.045213 env[1219]: time="2024-02-09T18:43:58.045206551Z" level=info msg="StopPodSandbox for \"a9e95bc468a3f32f192ca45b53f821fd650820795465a5c6bd5bf4778c0ce68e\" returns successfully"
Feb 9 18:43:58.183441 kubelet[2127]: I0209 18:43:58.183395 2127 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/44d5b875-740f-4a31-8095-afbe3f25d1a8-cilium-cgroup\") pod \"44d5b875-740f-4a31-8095-afbe3f25d1a8\" (UID: \"44d5b875-740f-4a31-8095-afbe3f25d1a8\") "
Feb 9 18:43:58.183441 kubelet[2127]: I0209 18:43:58.183432 2127 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/44d5b875-740f-4a31-8095-afbe3f25d1a8-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "44d5b875-740f-4a31-8095-afbe3f25d1a8" (UID: "44d5b875-740f-4a31-8095-afbe3f25d1a8"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 9 18:43:58.183859 kubelet[2127]: I0209 18:43:58.183469 2127 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/44d5b875-740f-4a31-8095-afbe3f25d1a8-cilium-run\") pod \"44d5b875-740f-4a31-8095-afbe3f25d1a8\" (UID: \"44d5b875-740f-4a31-8095-afbe3f25d1a8\") "
Feb 9 18:43:58.183859 kubelet[2127]: I0209 18:43:58.183490 2127 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/44d5b875-740f-4a31-8095-afbe3f25d1a8-bpf-maps\") pod \"44d5b875-740f-4a31-8095-afbe3f25d1a8\" (UID: \"44d5b875-740f-4a31-8095-afbe3f25d1a8\") "
Feb 9 18:43:58.183859 kubelet[2127]: I0209 18:43:58.183510 2127 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/44d5b875-740f-4a31-8095-afbe3f25d1a8-etc-cni-netd\") pod \"44d5b875-740f-4a31-8095-afbe3f25d1a8\" (UID: \"44d5b875-740f-4a31-8095-afbe3f25d1a8\") "
Feb 9 18:43:58.183859 kubelet[2127]: I0209 18:43:58.183545 2127 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/44d5b875-740f-4a31-8095-afbe3f25d1a8-cilium-config-path\") pod \"44d5b875-740f-4a31-8095-afbe3f25d1a8\" (UID: \"44d5b875-740f-4a31-8095-afbe3f25d1a8\") "
Feb 9 18:43:58.183859 kubelet[2127]: I0209 18:43:58.183554 2127 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/44d5b875-740f-4a31-8095-afbe3f25d1a8-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "44d5b875-740f-4a31-8095-afbe3f25d1a8" (UID: "44d5b875-740f-4a31-8095-afbe3f25d1a8"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 9 18:43:58.183859 kubelet[2127]: I0209 18:43:58.183563 2127 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/44d5b875-740f-4a31-8095-afbe3f25d1a8-lib-modules\") pod \"44d5b875-740f-4a31-8095-afbe3f25d1a8\" (UID: \"44d5b875-740f-4a31-8095-afbe3f25d1a8\") "
Feb 9 18:43:58.184015 kubelet[2127]: I0209 18:43:58.183571 2127 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/44d5b875-740f-4a31-8095-afbe3f25d1a8-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "44d5b875-740f-4a31-8095-afbe3f25d1a8" (UID: "44d5b875-740f-4a31-8095-afbe3f25d1a8"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 9 18:43:58.184015 kubelet[2127]: I0209 18:43:58.183580 2127 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/44d5b875-740f-4a31-8095-afbe3f25d1a8-cni-path\") pod \"44d5b875-740f-4a31-8095-afbe3f25d1a8\" (UID: \"44d5b875-740f-4a31-8095-afbe3f25d1a8\") "
Feb 9 18:43:58.184015 kubelet[2127]: I0209 18:43:58.183613 2127 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/44d5b875-740f-4a31-8095-afbe3f25d1a8-cni-path" (OuterVolumeSpecName: "cni-path") pod "44d5b875-740f-4a31-8095-afbe3f25d1a8" (UID: "44d5b875-740f-4a31-8095-afbe3f25d1a8"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 9 18:43:58.184015 kubelet[2127]: I0209 18:43:58.183631 2127 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/44d5b875-740f-4a31-8095-afbe3f25d1a8-hostproc\") pod \"44d5b875-740f-4a31-8095-afbe3f25d1a8\" (UID: \"44d5b875-740f-4a31-8095-afbe3f25d1a8\") "
Feb 9 18:43:58.184015 kubelet[2127]: I0209 18:43:58.183639 2127 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/44d5b875-740f-4a31-8095-afbe3f25d1a8-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "44d5b875-740f-4a31-8095-afbe3f25d1a8" (UID: "44d5b875-740f-4a31-8095-afbe3f25d1a8"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 9 18:43:58.184128 kubelet[2127]: I0209 18:43:58.183663 2127 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/44d5b875-740f-4a31-8095-afbe3f25d1a8-cilium-ipsec-secrets\") pod \"44d5b875-740f-4a31-8095-afbe3f25d1a8\" (UID: \"44d5b875-740f-4a31-8095-afbe3f25d1a8\") "
Feb 9 18:43:58.184128 kubelet[2127]: I0209 18:43:58.183685 2127 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rlwl7\" (UniqueName: \"kubernetes.io/projected/44d5b875-740f-4a31-8095-afbe3f25d1a8-kube-api-access-rlwl7\") pod \"44d5b875-740f-4a31-8095-afbe3f25d1a8\" (UID: \"44d5b875-740f-4a31-8095-afbe3f25d1a8\") "
Feb 9 18:43:58.184128 kubelet[2127]: I0209 18:43:58.183699 2127 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/44d5b875-740f-4a31-8095-afbe3f25d1a8-hostproc" (OuterVolumeSpecName: "hostproc") pod "44d5b875-740f-4a31-8095-afbe3f25d1a8" (UID: "44d5b875-740f-4a31-8095-afbe3f25d1a8"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 9 18:43:58.184128 kubelet[2127]: I0209 18:43:58.183705 2127 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/44d5b875-740f-4a31-8095-afbe3f25d1a8-hubble-tls\") pod \"44d5b875-740f-4a31-8095-afbe3f25d1a8\" (UID: \"44d5b875-740f-4a31-8095-afbe3f25d1a8\") "
Feb 9 18:43:58.184128 kubelet[2127]: I0209 18:43:58.183731 2127 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/44d5b875-740f-4a31-8095-afbe3f25d1a8-xtables-lock\") pod \"44d5b875-740f-4a31-8095-afbe3f25d1a8\" (UID: \"44d5b875-740f-4a31-8095-afbe3f25d1a8\") "
Feb 9 18:43:58.184128 kubelet[2127]: W0209 18:43:58.183727 2127 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/44d5b875-740f-4a31-8095-afbe3f25d1a8/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled
Feb 9 18:43:58.184291 kubelet[2127]: I0209 18:43:58.183752 2127 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/44d5b875-740f-4a31-8095-afbe3f25d1a8-clustermesh-secrets\") pod \"44d5b875-740f-4a31-8095-afbe3f25d1a8\" (UID: \"44d5b875-740f-4a31-8095-afbe3f25d1a8\") "
Feb 9 18:43:58.184291 kubelet[2127]: I0209 18:43:58.183983 2127 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/44d5b875-740f-4a31-8095-afbe3f25d1a8-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "44d5b875-740f-4a31-8095-afbe3f25d1a8" (UID: "44d5b875-740f-4a31-8095-afbe3f25d1a8"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 9 18:43:58.184291 kubelet[2127]: I0209 18:43:58.184011 2127 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/44d5b875-740f-4a31-8095-afbe3f25d1a8-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "44d5b875-740f-4a31-8095-afbe3f25d1a8" (UID: "44d5b875-740f-4a31-8095-afbe3f25d1a8"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 9 18:43:58.184291 kubelet[2127]: I0209 18:43:58.184201 2127 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/44d5b875-740f-4a31-8095-afbe3f25d1a8-host-proc-sys-net\") pod \"44d5b875-740f-4a31-8095-afbe3f25d1a8\" (UID: \"44d5b875-740f-4a31-8095-afbe3f25d1a8\") "
Feb 9 18:43:58.184291 kubelet[2127]: I0209 18:43:58.184226 2127 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/44d5b875-740f-4a31-8095-afbe3f25d1a8-host-proc-sys-kernel\") pod \"44d5b875-740f-4a31-8095-afbe3f25d1a8\" (UID: \"44d5b875-740f-4a31-8095-afbe3f25d1a8\") "
Feb 9 18:43:58.184412 kubelet[2127]: I0209 18:43:58.184298 2127 reconciler_common.go:295] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/44d5b875-740f-4a31-8095-afbe3f25d1a8-cilium-cgroup\") on node \"localhost\" DevicePath \"\""
Feb 9 18:43:58.184412 kubelet[2127]: I0209 18:43:58.184309 2127 reconciler_common.go:295] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/44d5b875-740f-4a31-8095-afbe3f25d1a8-cilium-run\") on node \"localhost\" DevicePath \"\""
Feb 9 18:43:58.184412 kubelet[2127]: I0209 18:43:58.184319 2127 reconciler_common.go:295] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/44d5b875-740f-4a31-8095-afbe3f25d1a8-bpf-maps\") on node \"localhost\" DevicePath \"\""
Feb 9 18:43:58.184412 kubelet[2127]: I0209 18:43:58.184328 2127 reconciler_common.go:295] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/44d5b875-740f-4a31-8095-afbe3f25d1a8-etc-cni-netd\") on node \"localhost\" DevicePath \"\""
Feb 9 18:43:58.184412 kubelet[2127]: I0209 18:43:58.184337 2127 reconciler_common.go:295] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/44d5b875-740f-4a31-8095-afbe3f25d1a8-lib-modules\") on node \"localhost\" DevicePath \"\""
Feb 9 18:43:58.184412 kubelet[2127]: I0209 18:43:58.184346 2127 reconciler_common.go:295] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/44d5b875-740f-4a31-8095-afbe3f25d1a8-cni-path\") on node \"localhost\" DevicePath \"\""
Feb 9 18:43:58.184412 kubelet[2127]: I0209 18:43:58.184363 2127 reconciler_common.go:295] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/44d5b875-740f-4a31-8095-afbe3f25d1a8-hostproc\") on node \"localhost\" DevicePath \"\""
Feb 9 18:43:58.184412 kubelet[2127]: I0209 18:43:58.184373 2127 reconciler_common.go:295] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/44d5b875-740f-4a31-8095-afbe3f25d1a8-xtables-lock\") on node \"localhost\" DevicePath \"\""
Feb 9 18:43:58.184654 kubelet[2127]: I0209 18:43:58.184391 2127 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/44d5b875-740f-4a31-8095-afbe3f25d1a8-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "44d5b875-740f-4a31-8095-afbe3f25d1a8" (UID: "44d5b875-740f-4a31-8095-afbe3f25d1a8"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 9 18:43:58.184654 kubelet[2127]: I0209 18:43:58.184410 2127 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/44d5b875-740f-4a31-8095-afbe3f25d1a8-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "44d5b875-740f-4a31-8095-afbe3f25d1a8" (UID: "44d5b875-740f-4a31-8095-afbe3f25d1a8"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 9 18:43:58.185605 kubelet[2127]: I0209 18:43:58.185547 2127 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/44d5b875-740f-4a31-8095-afbe3f25d1a8-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "44d5b875-740f-4a31-8095-afbe3f25d1a8" (UID: "44d5b875-740f-4a31-8095-afbe3f25d1a8"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 9 18:43:58.187683 systemd[1]: var-lib-kubelet-pods-44d5b875\x2d740f\x2d4a31\x2d8095\x2dafbe3f25d1a8-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2drlwl7.mount: Deactivated successfully.
Feb 9 18:43:58.187834 systemd[1]: var-lib-kubelet-pods-44d5b875\x2d740f\x2d4a31\x2d8095\x2dafbe3f25d1a8-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Feb 9 18:43:58.189538 kubelet[2127]: I0209 18:43:58.189509 2127 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/44d5b875-740f-4a31-8095-afbe3f25d1a8-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "44d5b875-740f-4a31-8095-afbe3f25d1a8" (UID: "44d5b875-740f-4a31-8095-afbe3f25d1a8"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 9 18:43:58.189632 kubelet[2127]: I0209 18:43:58.189517 2127 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44d5b875-740f-4a31-8095-afbe3f25d1a8-kube-api-access-rlwl7" (OuterVolumeSpecName: "kube-api-access-rlwl7") pod "44d5b875-740f-4a31-8095-afbe3f25d1a8" (UID: "44d5b875-740f-4a31-8095-afbe3f25d1a8"). InnerVolumeSpecName "kube-api-access-rlwl7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 9 18:43:58.189715 kubelet[2127]: I0209 18:43:58.189688 2127 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44d5b875-740f-4a31-8095-afbe3f25d1a8-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "44d5b875-740f-4a31-8095-afbe3f25d1a8" (UID: "44d5b875-740f-4a31-8095-afbe3f25d1a8"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 9 18:43:58.189763 systemd[1]: var-lib-kubelet-pods-44d5b875\x2d740f\x2d4a31\x2d8095\x2dafbe3f25d1a8-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully.
Feb 9 18:43:58.190720 kubelet[2127]: I0209 18:43:58.190687 2127 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/44d5b875-740f-4a31-8095-afbe3f25d1a8-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "44d5b875-740f-4a31-8095-afbe3f25d1a8" (UID: "44d5b875-740f-4a31-8095-afbe3f25d1a8"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 9 18:43:58.285261 kubelet[2127]: I0209 18:43:58.285175 2127 reconciler_common.go:295] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/44d5b875-740f-4a31-8095-afbe3f25d1a8-cilium-ipsec-secrets\") on node \"localhost\" DevicePath \"\""
Feb 9 18:43:58.285261 kubelet[2127]: I0209 18:43:58.285228 2127 reconciler_common.go:295] "Volume detached for volume \"kube-api-access-rlwl7\" (UniqueName: \"kubernetes.io/projected/44d5b875-740f-4a31-8095-afbe3f25d1a8-kube-api-access-rlwl7\") on node \"localhost\" DevicePath \"\""
Feb 9 18:43:58.285261 kubelet[2127]: I0209 18:43:58.285241 2127 reconciler_common.go:295] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/44d5b875-740f-4a31-8095-afbe3f25d1a8-hubble-tls\") on node \"localhost\" DevicePath \"\""
Feb 9 18:43:58.285261 kubelet[2127]: I0209 18:43:58.285263 2127 reconciler_common.go:295] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/44d5b875-740f-4a31-8095-afbe3f25d1a8-clustermesh-secrets\") on node \"localhost\" DevicePath \"\""
Feb 9 18:43:58.285538 kubelet[2127]: I0209 18:43:58.285277 2127 reconciler_common.go:295] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/44d5b875-740f-4a31-8095-afbe3f25d1a8-host-proc-sys-net\") on node \"localhost\" DevicePath \"\""
Feb 9 18:43:58.285538 kubelet[2127]: I0209 18:43:58.285287 2127 reconciler_common.go:295] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/44d5b875-740f-4a31-8095-afbe3f25d1a8-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\""
Feb 9 18:43:58.285538 kubelet[2127]: I0209 18:43:58.285296 2127 reconciler_common.go:295] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/44d5b875-740f-4a31-8095-afbe3f25d1a8-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Feb 9 18:43:58.821863 kubelet[2127]: E0209 18:43:58.821821 2127 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 18:43:58.986736 systemd[1]: var-lib-kubelet-pods-44d5b875\x2d740f\x2d4a31\x2d8095\x2dafbe3f25d1a8-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Feb 9 18:43:59.011920 kubelet[2127]: I0209 18:43:59.011894 2127 scope.go:115] "RemoveContainer" containerID="c694b888e4b450f2c6fead1c86f174d3199a0a2730209150dc3fcedaacf4f291"
Feb 9 18:43:59.014728 env[1219]: time="2024-02-09T18:43:59.014682702Z" level=info msg="RemoveContainer for \"c694b888e4b450f2c6fead1c86f174d3199a0a2730209150dc3fcedaacf4f291\""
Feb 9 18:43:59.017875 env[1219]: time="2024-02-09T18:43:59.017837646Z" level=info msg="RemoveContainer for \"c694b888e4b450f2c6fead1c86f174d3199a0a2730209150dc3fcedaacf4f291\" returns successfully"
Feb 9 18:43:59.036461 kubelet[2127]: I0209 18:43:59.033931 2127 topology_manager.go:210] "Topology Admit Handler"
Feb 9 18:43:59.036461 kubelet[2127]: E0209 18:43:59.034017 2127 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="44d5b875-740f-4a31-8095-afbe3f25d1a8" containerName="mount-cgroup"
Feb 9 18:43:59.036461 kubelet[2127]: I0209 18:43:59.034099 2127 memory_manager.go:346] "RemoveStaleState removing state" podUID="44d5b875-740f-4a31-8095-afbe3f25d1a8" containerName="mount-cgroup"
Feb 9 18:43:59.189995 kubelet[2127]: I0209 18:43:59.189889 2127 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0f21b034-016a-4743-b628-a50be328292d-etc-cni-netd\") pod \"cilium-kn5rm\" (UID: \"0f21b034-016a-4743-b628-a50be328292d\") " pod="kube-system/cilium-kn5rm"
Feb 9 18:43:59.190437 kubelet[2127]: I0209 18:43:59.190421 2127 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/0f21b034-016a-4743-b628-a50be328292d-hubble-tls\") pod \"cilium-kn5rm\" (UID: \"0f21b034-016a-4743-b628-a50be328292d\") " pod="kube-system/cilium-kn5rm"
Feb 9 18:43:59.190579 kubelet[2127]: I0209 18:43:59.190535 2127 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/0f21b034-016a-4743-b628-a50be328292d-bpf-maps\") pod \"cilium-kn5rm\" (UID: \"0f21b034-016a-4743-b628-a50be328292d\") " pod="kube-system/cilium-kn5rm"
Feb 9 18:43:59.190579 kubelet[2127]: I0209 18:43:59.190579 2127 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/0f21b034-016a-4743-b628-a50be328292d-cilium-cgroup\") pod \"cilium-kn5rm\" (UID: \"0f21b034-016a-4743-b628-a50be328292d\") " pod="kube-system/cilium-kn5rm"
Feb 9 18:43:59.190663 kubelet[2127]: I0209 18:43:59.190601 2127 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0f21b034-016a-4743-b628-a50be328292d-cilium-config-path\") pod \"cilium-kn5rm\" (UID: \"0f21b034-016a-4743-b628-a50be328292d\") " pod="kube-system/cilium-kn5rm"
Feb 9 18:43:59.190663 kubelet[2127]: I0209 18:43:59.190633 2127 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/0f21b034-016a-4743-b628-a50be328292d-cilium-ipsec-secrets\") pod \"cilium-kn5rm\" (UID: \"0f21b034-016a-4743-b628-a50be328292d\") " pod="kube-system/cilium-kn5rm"
Feb 9 18:43:59.190718 kubelet[2127]: I0209 18:43:59.190686 2127 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/0f21b034-016a-4743-b628-a50be328292d-hostproc\") pod \"cilium-kn5rm\" (UID: \"0f21b034-016a-4743-b628-a50be328292d\") " pod="kube-system/cilium-kn5rm"
Feb 9 18:43:59.190744 kubelet[2127]: I0209 18:43:59.190720 2127 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0f21b034-016a-4743-b628-a50be328292d-lib-modules\") pod \"cilium-kn5rm\" (UID: \"0f21b034-016a-4743-b628-a50be328292d\") " pod="kube-system/cilium-kn5rm"
Feb 9 18:43:59.190766 kubelet[2127]: I0209 18:43:59.190748 2127 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/0f21b034-016a-4743-b628-a50be328292d-clustermesh-secrets\") pod \"cilium-kn5rm\" (UID: \"0f21b034-016a-4743-b628-a50be328292d\") " pod="kube-system/cilium-kn5rm"
Feb 9 18:43:59.190789 kubelet[2127]: I0209 18:43:59.190771 2127 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/0f21b034-016a-4743-b628-a50be328292d-host-proc-sys-net\") pod \"cilium-kn5rm\" (UID: \"0f21b034-016a-4743-b628-a50be328292d\") " pod="kube-system/cilium-kn5rm"
Feb 9 18:43:59.190816 kubelet[2127]: I0209 18:43:59.190791 2127 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/0f21b034-016a-4743-b628-a50be328292d-cilium-run\") pod \"cilium-kn5rm\" (UID: \"0f21b034-016a-4743-b628-a50be328292d\") " pod="kube-system/cilium-kn5rm"
Feb 9 18:43:59.190816 kubelet[2127]: I0209 18:43:59.190811 2127 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/0f21b034-016a-4743-b628-a50be328292d-cni-path\") pod \"cilium-kn5rm\" (UID: \"0f21b034-016a-4743-b628-a50be328292d\") " pod="kube-system/cilium-kn5rm"
Feb 9 18:43:59.190861 kubelet[2127]: I0209 18:43:59.190834 2127 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0f21b034-016a-4743-b628-a50be328292d-xtables-lock\") pod \"cilium-kn5rm\" (UID: \"0f21b034-016a-4743-b628-a50be328292d\") " pod="kube-system/cilium-kn5rm"
Feb 9 18:43:59.190861 kubelet[2127]: I0209 18:43:59.190855 2127 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/0f21b034-016a-4743-b628-a50be328292d-host-proc-sys-kernel\") pod \"cilium-kn5rm\" (UID: \"0f21b034-016a-4743-b628-a50be328292d\") " pod="kube-system/cilium-kn5rm"
Feb 9 18:43:59.190907 kubelet[2127]: I0209 18:43:59.190874 2127 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8rnmq\" (UniqueName: \"kubernetes.io/projected/0f21b034-016a-4743-b628-a50be328292d-kube-api-access-8rnmq\") pod \"cilium-kn5rm\" (UID: \"0f21b034-016a-4743-b628-a50be328292d\") " pod="kube-system/cilium-kn5rm"
Feb 9 18:43:59.336555 kubelet[2127]: E0209 18:43:59.336514 2127 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 18:43:59.337069 env[1219]: time="2024-02-09T18:43:59.337024637Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-kn5rm,Uid:0f21b034-016a-4743-b628-a50be328292d,Namespace:kube-system,Attempt:0,}"
Feb 9 18:43:59.352397 env[1219]: time="2024-02-09T18:43:59.352322630Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 9 18:43:59.352397 env[1219]: time="2024-02-09T18:43:59.352365030Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 9 18:43:59.352397 env[1219]: time="2024-02-09T18:43:59.352376511Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 9 18:43:59.352599 env[1219]: time="2024-02-09T18:43:59.352510992Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/e4e242dea17de6b8da3a36e6a479453ddbab7ef30fefb87358271b0ee638098f pid=4226 runtime=io.containerd.runc.v2
Feb 9 18:43:59.392918 env[1219]: time="2024-02-09T18:43:59.392874489Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-kn5rm,Uid:0f21b034-016a-4743-b628-a50be328292d,Namespace:kube-system,Attempt:0,} returns sandbox id \"e4e242dea17de6b8da3a36e6a479453ddbab7ef30fefb87358271b0ee638098f\""
Feb 9 18:43:59.393463 kubelet[2127]: E0209 18:43:59.393445 2127 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 18:43:59.396405 env[1219]: time="2024-02-09T18:43:59.396367595Z" level=info msg="CreateContainer within sandbox \"e4e242dea17de6b8da3a36e6a479453ddbab7ef30fefb87358271b0ee638098f\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Feb 9 18:43:59.407520 env[1219]: time="2024-02-09T18:43:59.407441516Z" level=info msg="CreateContainer within sandbox \"e4e242dea17de6b8da3a36e6a479453ddbab7ef30fefb87358271b0ee638098f\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"671416c689ae92d03c83bc292d043c85639eda8be0304eb9dfe1e7caf734f434\""
Feb 9 18:43:59.408041 env[1219]: time="2024-02-09T18:43:59.408015720Z" level=info msg="StartContainer for \"671416c689ae92d03c83bc292d043c85639eda8be0304eb9dfe1e7caf734f434\""
Feb 9 18:43:59.462103 env[1219]: time="2024-02-09T18:43:59.458709414Z" level=info msg="StartContainer for \"671416c689ae92d03c83bc292d043c85639eda8be0304eb9dfe1e7caf734f434\" returns successfully"
Feb 9 18:43:59.485631 env[1219]: time="2024-02-09T18:43:59.485577092Z" level=info msg="shim disconnected" id=671416c689ae92d03c83bc292d043c85639eda8be0304eb9dfe1e7caf734f434
Feb 9 18:43:59.485631 env[1219]: time="2024-02-09T18:43:59.485630452Z" level=warning msg="cleaning up after shim disconnected" id=671416c689ae92d03c83bc292d043c85639eda8be0304eb9dfe1e7caf734f434 namespace=k8s.io
Feb 9 18:43:59.485839 env[1219]: time="2024-02-09T18:43:59.485640732Z" level=info msg="cleaning up dead shim"
Feb 9 18:43:59.492783 env[1219]: time="2024-02-09T18:43:59.492733425Z" level=warning msg="cleanup warnings time=\"2024-02-09T18:43:59Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4308 runtime=io.containerd.runc.v2\n"
Feb 9 18:43:59.861153 kubelet[2127]: E0209 18:43:59.861071 2127 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 9 18:44:00.019669 kubelet[2127]: E0209 18:44:00.019509 2127 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 18:44:00.023817 env[1219]: time="2024-02-09T18:44:00.023776249Z" level=info msg="CreateContainer within sandbox \"e4e242dea17de6b8da3a36e6a479453ddbab7ef30fefb87358271b0ee638098f\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Feb 9 18:44:00.040054 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3411132251.mount: Deactivated successfully.
Feb 9 18:44:00.041123 env[1219]: time="2024-02-09T18:44:00.041049531Z" level=info msg="CreateContainer within sandbox \"e4e242dea17de6b8da3a36e6a479453ddbab7ef30fefb87358271b0ee638098f\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"6afc115fcfd8966d53b899b7e8d3bd2ce6d874219ea0c41cdab7938d740fbb4f\""
Feb 9 18:44:00.041649 env[1219]: time="2024-02-09T18:44:00.041617935Z" level=info msg="StartContainer for \"6afc115fcfd8966d53b899b7e8d3bd2ce6d874219ea0c41cdab7938d740fbb4f\""
Feb 9 18:44:00.086551 env[1219]: time="2024-02-09T18:44:00.086495611Z" level=info msg="StartContainer for \"6afc115fcfd8966d53b899b7e8d3bd2ce6d874219ea0c41cdab7938d740fbb4f\" returns successfully"
Feb 9 18:44:00.112591 env[1219]: time="2024-02-09T18:44:00.112142111Z" level=info msg="shim disconnected" id=6afc115fcfd8966d53b899b7e8d3bd2ce6d874219ea0c41cdab7938d740fbb4f
Feb 9 18:44:00.112591 env[1219]: time="2024-02-09T18:44:00.112187071Z" level=warning msg="cleaning up after shim disconnected" id=6afc115fcfd8966d53b899b7e8d3bd2ce6d874219ea0c41cdab7938d740fbb4f namespace=k8s.io
Feb 9 18:44:00.112591 env[1219]: time="2024-02-09T18:44:00.112197031Z" level=info msg="cleaning up dead shim"
Feb 9 18:44:00.119242 env[1219]: time="2024-02-09T18:44:00.119191841Z" level=warning msg="cleanup warnings time=\"2024-02-09T18:44:00Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4371 runtime=io.containerd.runc.v2\n"
Feb 9 18:44:00.822534 kubelet[2127]: I0209 18:44:00.822497 2127 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=44d5b875-740f-4a31-8095-afbe3f25d1a8 path="/var/lib/kubelet/pods/44d5b875-740f-4a31-8095-afbe3f25d1a8/volumes"
Feb 9 18:44:01.022232 kubelet[2127]: E0209 18:44:01.022208 2127 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 18:44:01.025487 env[1219]: time="2024-02-09T18:44:01.025404967Z" level=info msg="CreateContainer within sandbox \"e4e242dea17de6b8da3a36e6a479453ddbab7ef30fefb87358271b0ee638098f\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Feb 9 18:44:01.037377 env[1219]: time="2024-02-09T18:44:01.036010758Z" level=info msg="CreateContainer within sandbox \"e4e242dea17de6b8da3a36e6a479453ddbab7ef30fefb87358271b0ee638098f\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"6d54bb6bb351e0bb09a627c228dae449f4d86b66ca7f074b74378398d875d471\""
Feb 9 18:44:01.037639 env[1219]: time="2024-02-09T18:44:01.037599049Z" level=info msg="StartContainer for \"6d54bb6bb351e0bb09a627c228dae449f4d86b66ca7f074b74378398d875d471\""
Feb 9 18:44:01.088062 env[1219]: time="2024-02-09T18:44:01.087317743Z" level=info msg="StartContainer for \"6d54bb6bb351e0bb09a627c228dae449f4d86b66ca7f074b74378398d875d471\" returns successfully"
Feb 9 18:44:01.107996 env[1219]: time="2024-02-09T18:44:01.107957161Z" level=info msg="shim disconnected" id=6d54bb6bb351e0bb09a627c228dae449f4d86b66ca7f074b74378398d875d471
Feb 9 18:44:01.108240 env[1219]: time="2024-02-09T18:44:01.108220003Z" level=warning msg="cleaning up after shim disconnected" id=6d54bb6bb351e0bb09a627c228dae449f4d86b66ca7f074b74378398d875d471 namespace=k8s.io
Feb 9 18:44:01.108364 env[1219]: time="2024-02-09T18:44:01.108347404Z" level=info msg="cleaning up dead shim"
Feb 9 18:44:01.115863 env[1219]: time="2024-02-09T18:44:01.115834454Z" level=warning msg="cleanup warnings time=\"2024-02-09T18:44:01Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4429 runtime=io.containerd.runc.v2\n"
Feb 9 18:44:01.819664 kubelet[2127]: E0209 18:44:01.819611 2127 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-787d4945fb-kr6n9" podUID=66578035-61ab-45a1-a52f-f1499f3c5cb0
Feb 9 18:44:01.986972 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6d54bb6bb351e0bb09a627c228dae449f4d86b66ca7f074b74378398d875d471-rootfs.mount: Deactivated successfully.
Feb 9 18:44:02.025647 kubelet[2127]: E0209 18:44:02.025597 2127 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 18:44:02.028992 env[1219]: time="2024-02-09T18:44:02.028952694Z" level=info msg="CreateContainer within sandbox \"e4e242dea17de6b8da3a36e6a479453ddbab7ef30fefb87358271b0ee638098f\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Feb 9 18:44:02.038793 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4058465047.mount: Deactivated successfully.
Feb 9 18:44:02.043332 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount493454807.mount: Deactivated successfully.
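A pattern repeats in the sandbox e4e242dea17d... above: for each of Cilium's init containers the runtime logs CreateContainer, StartContainer, "returns successfully", then "shim disconnected" and "cleaning up dead shim". The shim warnings are routine teardown of a short-lived init container, not failures. A small Go sketch of how one might recover the container order from these containerd lines; the sample strings are abbreviated stand-ins for the full journal, and the parsing approach is an assumption for illustration, not a containerd tool.

package main

import (
	"bufio"
	"fmt"
	"regexp"
	"strings"
)

// nameRe pulls the container name out of containerd's CreateContainer
// messages, e.g. "&ContainerMetadata{Name:mount-cgroup,Attempt:0,}".
var nameRe = regexp.MustCompile(`ContainerMetadata\{Name:([^,]+),`)

func main() {
	// Two abbreviated sample lines; a real run would feed the whole journal.
	sample := `time="..." level=info msg="CreateContainer within sandbox \"e4e2...\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
time="..." level=info msg="CreateContainer within sandbox \"e4e2...\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"`

	var order []string
	sc := bufio.NewScanner(strings.NewReader(sample))
	for sc.Scan() {
		line := sc.Text()
		if !strings.Contains(line, "CreateContainer within sandbox") {
			continue
		}
		if m := nameRe.FindStringSubmatch(line); m != nil {
			// Each container logs twice (the request and the "returns
			// container id" reply), so skip consecutive duplicates.
			if n := len(order); n == 0 || order[n-1] != m[1] {
				order = append(order, m[1])
			}
		}
	}
	fmt.Println(strings.Join(order, " -> "))
	// Run against the full log above, this yields:
	// mount-cgroup -> apply-sysctl-overwrites -> mount-bpf-fs -> clean-cilium-state -> cilium-agent
}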
Feb 9 18:44:02.046829 env[1219]: time="2024-02-09T18:44:02.046782528Z" level=info msg="CreateContainer within sandbox \"e4e242dea17de6b8da3a36e6a479453ddbab7ef30fefb87358271b0ee638098f\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"30526c8f85ed50c923213c8c5f7bbce0346cf71fb8c06eb3cbc7096222c9637b\""
Feb 9 18:44:02.047566 env[1219]: time="2024-02-09T18:44:02.047536173Z" level=info msg="StartContainer for \"30526c8f85ed50c923213c8c5f7bbce0346cf71fb8c06eb3cbc7096222c9637b\""
Feb 9 18:44:02.096062 env[1219]: time="2024-02-09T18:44:02.095958923Z" level=info msg="StartContainer for \"30526c8f85ed50c923213c8c5f7bbce0346cf71fb8c06eb3cbc7096222c9637b\" returns successfully"
Feb 9 18:44:02.115577 env[1219]: time="2024-02-09T18:44:02.115532488Z" level=info msg="shim disconnected" id=30526c8f85ed50c923213c8c5f7bbce0346cf71fb8c06eb3cbc7096222c9637b
Feb 9 18:44:02.115855 env[1219]: time="2024-02-09T18:44:02.115835130Z" level=warning msg="cleaning up after shim disconnected" id=30526c8f85ed50c923213c8c5f7bbce0346cf71fb8c06eb3cbc7096222c9637b namespace=k8s.io
Feb 9 18:44:02.115937 env[1219]: time="2024-02-09T18:44:02.115923730Z" level=info msg="cleaning up dead shim"
Feb 9 18:44:02.122868 env[1219]: time="2024-02-09T18:44:02.122829655Z" level=warning msg="cleanup warnings time=\"2024-02-09T18:44:02Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4484 runtime=io.containerd.runc.v2\n"
Feb 9 18:44:03.029485 kubelet[2127]: E0209 18:44:03.029461 2127 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 18:44:03.031539 env[1219]: time="2024-02-09T18:44:03.031500103Z" level=info msg="CreateContainer within sandbox \"e4e242dea17de6b8da3a36e6a479453ddbab7ef30fefb87358271b0ee638098f\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Feb 9 18:44:03.044370 env[1219]: time="2024-02-09T18:44:03.044328861Z" level=info msg="CreateContainer within sandbox \"e4e242dea17de6b8da3a36e6a479453ddbab7ef30fefb87358271b0ee638098f\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"54539d9a74a33873f18e9af446658922c05fcaba62862ee906ede77a288228aa\""
Feb 9 18:44:03.044966 env[1219]: time="2024-02-09T18:44:03.044931985Z" level=info msg="StartContainer for \"54539d9a74a33873f18e9af446658922c05fcaba62862ee906ede77a288228aa\""
Feb 9 18:44:03.096539 env[1219]: time="2024-02-09T18:44:03.096490019Z" level=info msg="StartContainer for \"54539d9a74a33873f18e9af446658922c05fcaba62862ee906ede77a288228aa\" returns successfully"
Feb 9 18:44:03.337270 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106(gcm-aes-ce)))
Feb 9 18:44:03.820123 kubelet[2127]: E0209 18:44:03.820038 2127 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-787d4945fb-kr6n9" podUID=66578035-61ab-45a1-a52f-f1499f3c5cb0
Feb 9 18:44:03.820123 kubelet[2127]: E0209 18:44:03.820091 2127 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-787d4945fb-p4cg7" podUID=e9eb6d98-7422-4fa6-9f57-e56a75643c47
Feb 9 18:44:03.820345 kubelet[2127]: E0209 18:44:03.820289 2127 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 18:44:03.987222 systemd[1]: run-containerd-runc-k8s.io-54539d9a74a33873f18e9af446658922c05fcaba62862ee906ede77a288228aa-runc.gu04cY.mount: Deactivated successfully.
Feb 9 18:44:04.034120 kubelet[2127]: E0209 18:44:04.034062 2127 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 18:44:04.048459 kubelet[2127]: I0209 18:44:04.048424 2127 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-kn5rm" podStartSLOduration=5.048387568 pod.CreationTimestamp="2024-02-09 18:43:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 18:44:04.047564203 +0000 UTC m=+99.357253130" watchObservedRunningTime="2024-02-09 18:44:04.048387568 +0000 UTC m=+99.358076495"
Feb 9 18:44:05.035484 kubelet[2127]: E0209 18:44:05.035441 2127 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 18:44:05.820589 kubelet[2127]: E0209 18:44:05.820549 2127 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 18:44:05.820770 kubelet[2127]: E0209 18:44:05.820611 2127 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 18:44:06.037307 kubelet[2127]: E0209 18:44:06.037269 2127 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 18:44:06.044684 systemd-networkd[1101]: lxc_health: Link UP
Feb 9 18:44:06.051519 systemd-networkd[1101]: lxc_health: Gained carrier
Feb 9 18:44:06.052305 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Feb 9 18:44:07.339168 kubelet[2127]: E0209 18:44:07.339139 2127 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 18:44:07.609385 systemd-networkd[1101]: lxc_health: Gained IPv6LL
Feb 9 18:44:08.040326 kubelet[2127]: E0209 18:44:08.040299 2127 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 18:44:09.042331 kubelet[2127]: E0209 18:44:09.042304 2127 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 18:44:11.668236 sshd[4054]: pam_unix(sshd:session): session closed for user core
Feb 9 18:44:11.671127 systemd[1]: sshd@24-10.0.0.121:22-10.0.0.1:60694.service: Deactivated successfully.
Feb 9 18:44:11.671904 systemd[1]: session-25.scope: Deactivated successfully.
Feb 9 18:44:11.672735 systemd-logind[1205]: Session 25 logged out. Waiting for processes to exit.
Feb 9 18:44:11.673414 systemd-logind[1205]: Removed session 25.
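The pod_startup_latency_tracker line above records podStartSLOduration=5.048387568 for cilium-kn5rm. Both image-pull timestamps are the zero value (the images were already on the node), so no pull time is excluded and the SLO duration reduces to the watch-observed running time minus the pod's CreationTimestamp. The lxc_health link coming up shortly afterwards is the veth interface Cilium creates for endpoint health checks, consistent with the agent having started. A Go sketch reproducing only the arithmetic, under the assumption just stated; it is not kubelet's tracker code.

package main

import (
	"fmt"
	"time"
)

func main() {
	// Values copied from the pod_startup_latency_tracker line above.
	created, _ := time.Parse(time.RFC3339, "2024-02-09T18:43:59Z")
	observed, _ := time.Parse(time.RFC3339Nano, "2024-02-09T18:44:04.048387568Z")

	// Both pull timestamps are the zero value, so no pull time is excluded.
	var pull time.Duration

	slo := observed.Sub(created) - pull
	fmt.Printf("podStartSLOduration=%.9f\n", slo.Seconds())
	// Prints podStartSLOduration=5.048387568, matching the log entry.
}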