Feb 9 10:08:15.734426 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Feb 9 10:08:15.734449 kernel: Linux version 5.15.148-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP PREEMPT Fri Feb 9 08:56:26 -00 2024
Feb 9 10:08:15.734456 kernel: efi: EFI v2.70 by EDK II
Feb 9 10:08:15.734462 kernel: efi: SMBIOS 3.0=0xd9260000 ACPI 2.0=0xd9240000 MEMATTR=0xda32b018 RNG=0xd9220018 MEMRESERVE=0xd9521c18
Feb 9 10:08:15.734467 kernel: random: crng init done
Feb 9 10:08:15.734472 kernel: ACPI: Early table checksum verification disabled
Feb 9 10:08:15.734478 kernel: ACPI: RSDP 0x00000000D9240000 000024 (v02 BOCHS )
Feb 9 10:08:15.734485 kernel: ACPI: XSDT 0x00000000D9230000 000064 (v01 BOCHS BXPC 00000001 01000013)
Feb 9 10:08:15.734491 kernel: ACPI: FACP 0x00000000D91E0000 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Feb 9 10:08:15.734496 kernel: ACPI: DSDT 0x00000000D91F0000 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Feb 9 10:08:15.734504 kernel: ACPI: APIC 0x00000000D91D0000 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Feb 9 10:08:15.734509 kernel: ACPI: PPTT 0x00000000D91C0000 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Feb 9 10:08:15.734515 kernel: ACPI: GTDT 0x00000000D91B0000 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Feb 9 10:08:15.734520 kernel: ACPI: MCFG 0x00000000D91A0000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 9 10:08:15.734528 kernel: ACPI: SPCR 0x00000000D9190000 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Feb 9 10:08:15.734534 kernel: ACPI: DBG2 0x00000000D9180000 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Feb 9 10:08:15.734539 kernel: ACPI: IORT 0x00000000D9170000 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Feb 9 10:08:15.734545 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Feb 9 10:08:15.734550 kernel: NUMA: Failed to initialise from firmware
Feb 9 10:08:15.734556 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Feb 9 10:08:15.734562 kernel: NUMA: NODE_DATA [mem 0xdcb0b900-0xdcb10fff]
Feb 9 10:08:15.734567 kernel: Zone ranges:
Feb 9 10:08:15.734574 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Feb 9 10:08:15.734582 kernel: DMA32 empty
Feb 9 10:08:15.734588 kernel: Normal empty
Feb 9 10:08:15.734594 kernel: Movable zone start for each node
Feb 9 10:08:15.734599 kernel: Early memory node ranges
Feb 9 10:08:15.734605 kernel: node 0: [mem 0x0000000040000000-0x00000000d924ffff]
Feb 9 10:08:15.734610 kernel: node 0: [mem 0x00000000d9250000-0x00000000d951ffff]
Feb 9 10:08:15.734616 kernel: node 0: [mem 0x00000000d9520000-0x00000000dc7fffff]
Feb 9 10:08:15.734622 kernel: node 0: [mem 0x00000000dc800000-0x00000000dc88ffff]
Feb 9 10:08:15.734627 kernel: node 0: [mem 0x00000000dc890000-0x00000000dc89ffff]
Feb 9 10:08:15.734633 kernel: node 0: [mem 0x00000000dc8a0000-0x00000000dc9bffff]
Feb 9 10:08:15.734638 kernel: node 0: [mem 0x00000000dc9c0000-0x00000000dcffffff]
Feb 9 10:08:15.734644 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Feb 9 10:08:15.734653 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Feb 9 10:08:15.734659 kernel: psci: probing for conduit method from ACPI.
Feb 9 10:08:15.734665 kernel: psci: PSCIv1.1 detected in firmware.
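The firmware tables enumerated above (RSDP, XSDT, FACP, DSDT, APIC, ...) remain inspectable after boot through the standard sysfs interface at /sys/firmware/acpi/tables. A minimal sketch, assuming a Linux host with that interface present and root privileges to read it, that lists each exported table with its size:

    import os

    TABLES_DIR = "/sys/firmware/acpi/tables"  # standard sysfs path on ACPI systems

    # Print each exported ACPI table and its size in bytes; the names mirror
    # the signatures in the boot log (DSDT, FACP, APIC, MCFG, SPCR, ...).
    for name in sorted(os.listdir(TABLES_DIR)):
        path = os.path.join(TABLES_DIR, name)
        if os.path.isfile(path):
            print(f"{name}: {os.path.getsize(path)} bytes")

The sizes printed should match the hexadecimal lengths in the log entries, e.g. the DSDT's 0x14A2 = 5282 bytes.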
Feb 9 10:08:15.734670 kernel: psci: Using standard PSCI v0.2 function IDs
Feb 9 10:08:15.734676 kernel: psci: Trusted OS migration not required
Feb 9 10:08:15.734684 kernel: psci: SMC Calling Convention v1.1
Feb 9 10:08:15.734690 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Feb 9 10:08:15.734697 kernel: ACPI: SRAT not present
Feb 9 10:08:15.734704 kernel: percpu: Embedded 29 pages/cpu s79960 r8192 d30632 u118784
Feb 9 10:08:15.734710 kernel: pcpu-alloc: s79960 r8192 d30632 u118784 alloc=29*4096
Feb 9 10:08:15.734716 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Feb 9 10:08:15.734725 kernel: Detected PIPT I-cache on CPU0
Feb 9 10:08:15.734731 kernel: CPU features: detected: GIC system register CPU interface
Feb 9 10:08:15.734737 kernel: CPU features: detected: Hardware dirty bit management
Feb 9 10:08:15.734743 kernel: CPU features: detected: Spectre-v4
Feb 9 10:08:15.734749 kernel: CPU features: detected: Spectre-BHB
Feb 9 10:08:15.734756 kernel: CPU features: kernel page table isolation forced ON by KASLR
Feb 9 10:08:15.734762 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Feb 9 10:08:15.734768 kernel: CPU features: detected: ARM erratum 1418040
Feb 9 10:08:15.734774 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024
Feb 9 10:08:15.734780 kernel: Policy zone: DMA
Feb 9 10:08:15.734787 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=14ffd9340f674a8d04c9d43eed85484d8b2b7e2bcd8b36a975c9ac66063d537d
Feb 9 10:08:15.734796 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Feb 9 10:08:15.734802 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Feb 9 10:08:15.734830 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Feb 9 10:08:15.734837 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Feb 9 10:08:15.734843 kernel: Memory: 2459152K/2572288K available (9792K kernel code, 2092K rwdata, 7556K rodata, 34688K init, 778K bss, 113136K reserved, 0K cma-reserved)
Feb 9 10:08:15.734851 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Feb 9 10:08:15.734857 kernel: trace event string verifier disabled
Feb 9 10:08:15.734865 kernel: rcu: Preemptible hierarchical RCU implementation.
Feb 9 10:08:15.734872 kernel: rcu: RCU event tracing is enabled.
Feb 9 10:08:15.734878 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Feb 9 10:08:15.734884 kernel: Trampoline variant of Tasks RCU enabled.
Feb 9 10:08:15.734890 kernel: Tracing variant of Tasks RCU enabled.
Feb 9 10:08:15.734896 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
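The "Kernel command line:" entry shows how Flatcar wires up its verity-protected /usr: mount.usr points at /dev/mapper/usr, verity.usr names the backing partition by PARTUUID, and verity.usrhash carries the expected dm-verity root hash. A minimal sketch, assuming only the simple key=value / bare-flag layout visible above (it ignores quoted values containing spaces), of parsing such a command line from /proc/cmdline:

    # Parse a kernel command line into key=value pairs and bare flags.
    def parse_cmdline(cmdline: str) -> dict:
        params = {}
        for token in cmdline.split():
            key, sep, value = token.partition("=")
            params[key] = value if sep else True  # bare tokens become boolean flags
        return params

    # /proc/cmdline re-exports the same line the kernel logged at boot.
    with open("/proc/cmdline") as f:
        params = parse_cmdline(f.read())

    print(params.get("root"))            # e.g. LABEL=ROOT
    print(params.get("verity.usrhash"))  # the dm-verity root hash for /usr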
Feb 9 10:08:15.734902 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Feb 9 10:08:15.734908 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Feb 9 10:08:15.734914 kernel: GICv3: 256 SPIs implemented
Feb 9 10:08:15.734922 kernel: GICv3: 0 Extended SPIs implemented
Feb 9 10:08:15.734928 kernel: GICv3: Distributor has no Range Selector support
Feb 9 10:08:15.734935 kernel: Root IRQ handler: gic_handle_irq
Feb 9 10:08:15.734941 kernel: GICv3: 16 PPIs implemented
Feb 9 10:08:15.734947 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Feb 9 10:08:15.734953 kernel: ACPI: SRAT not present
Feb 9 10:08:15.734959 kernel: ITS [mem 0x08080000-0x0809ffff]
Feb 9 10:08:15.734965 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400b0000 (indirect, esz 8, psz 64K, shr 1)
Feb 9 10:08:15.734971 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400c0000 (flat, esz 8, psz 64K, shr 1)
Feb 9 10:08:15.734977 kernel: GICv3: using LPI property table @0x00000000400d0000
Feb 9 10:08:15.734983 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000000400e0000
Feb 9 10:08:15.734989 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 9 10:08:15.734996 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Feb 9 10:08:15.735002 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Feb 9 10:08:15.735011 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Feb 9 10:08:15.735017 kernel: arm-pv: using stolen time PV
Feb 9 10:08:15.735024 kernel: Console: colour dummy device 80x25
Feb 9 10:08:15.735030 kernel: ACPI: Core revision 20210730
Feb 9 10:08:15.735109 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Feb 9 10:08:15.735116 kernel: pid_max: default: 32768 minimum: 301
Feb 9 10:08:15.735122 kernel: LSM: Security Framework initializing
Feb 9 10:08:15.735128 kernel: SELinux: Initializing.
Feb 9 10:08:15.735137 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 9 10:08:15.735143 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 9 10:08:15.735153 kernel: rcu: Hierarchical SRCU implementation.
Feb 9 10:08:15.735159 kernel: Platform MSI: ITS@0x8080000 domain created
Feb 9 10:08:15.735165 kernel: PCI/MSI: ITS@0x8080000 domain created
Feb 9 10:08:15.735172 kernel: Remapping and enabling EFI services.
Feb 9 10:08:15.735178 kernel: smp: Bringing up secondary CPUs ...
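The skipped-calibration entry is plain arithmetic on the 25.00 MHz architected timer: the 40 ns sched_clock resolution is 1 / 25 MHz, and with an assumed tick rate of HZ=1000 (not printed in this log), loops-per-jiffy is 25,000,000 / 1000 = 25000 and the conventional BogoMIPS figure works out to 50.00, matching the line above. A small check under that stated assumption:

    TIMER_HZ = 25_000_000  # arch_timer "running at 25.00MHz" above
    HZ = 1000              # assumption: kernel tick rate, not shown in this log

    resolution_ns = 1e9 / TIMER_HZ        # 40 ns, as logged by sched_clock
    lpj = TIMER_HZ // HZ                  # loops per jiffy when calibration is skipped
    bogomips = lpj / (500_000 / HZ)       # classic BogoMIPS convention

    assert resolution_ns == 40.0
    assert lpj == 25_000
    assert bogomips == 50.0
    print(resolution_ns, lpj, bogomips)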
Feb 9 10:08:15.735184 kernel: Detected PIPT I-cache on CPU1
Feb 9 10:08:15.735190 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Feb 9 10:08:15.735199 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000000400f0000
Feb 9 10:08:15.735205 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 9 10:08:15.735211 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Feb 9 10:08:15.735219 kernel: Detected PIPT I-cache on CPU2
Feb 9 10:08:15.735227 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Feb 9 10:08:15.735233 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040100000
Feb 9 10:08:15.735239 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 9 10:08:15.735246 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Feb 9 10:08:15.735252 kernel: Detected PIPT I-cache on CPU3
Feb 9 10:08:15.735258 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Feb 9 10:08:15.735266 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040110000
Feb 9 10:08:15.735272 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 9 10:08:15.735278 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Feb 9 10:08:15.735284 kernel: smp: Brought up 1 node, 4 CPUs
Feb 9 10:08:15.735297 kernel: SMP: Total of 4 processors activated.
Feb 9 10:08:15.735306 kernel: CPU features: detected: 32-bit EL0 Support
Feb 9 10:08:15.735312 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Feb 9 10:08:15.735319 kernel: CPU features: detected: Common not Private translations
Feb 9 10:08:15.735325 kernel: CPU features: detected: CRC32 instructions
Feb 9 10:08:15.735332 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Feb 9 10:08:15.735338 kernel: CPU features: detected: LSE atomic instructions
Feb 9 10:08:15.735345 kernel: CPU features: detected: Privileged Access Never
Feb 9 10:08:15.735352 kernel: CPU features: detected: RAS Extension Support
Feb 9 10:08:15.735359 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Feb 9 10:08:15.735369 kernel: CPU: All CPU(s) started at EL1
Feb 9 10:08:15.735375 kernel: alternatives: patching kernel code
Feb 9 10:08:15.735382 kernel: devtmpfs: initialized
Feb 9 10:08:15.735390 kernel: KASLR enabled
Feb 9 10:08:15.735397 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Feb 9 10:08:15.735404 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Feb 9 10:08:15.735410 kernel: pinctrl core: initialized pinctrl subsystem
Feb 9 10:08:15.735417 kernel: SMBIOS 3.0.0 present.
Feb 9 10:08:15.735423 kernel: DMI: QEMU KVM Virtual Machine, BIOS 0.0.0 02/06/2015
Feb 9 10:08:15.735430 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Feb 9 10:08:15.735439 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Feb 9 10:08:15.735445 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Feb 9 10:08:15.735453 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Feb 9 10:08:15.735460 kernel: audit: initializing netlink subsys (disabled)
Feb 9 10:08:15.735466 kernel: audit: type=2000 audit(0.040:1): state=initialized audit_enabled=0 res=1
Feb 9 10:08:15.735473 kernel: thermal_sys: Registered thermal governor 'step_wise'
Feb 9 10:08:15.735480 kernel: cpuidle: using governor menu
Feb 9 10:08:15.735486 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Feb 9 10:08:15.735493 kernel: ASID allocator initialised with 32768 entries
Feb 9 10:08:15.735499 kernel: ACPI: bus type PCI registered
Feb 9 10:08:15.735508 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Feb 9 10:08:15.735516 kernel: Serial: AMBA PL011 UART driver
Feb 9 10:08:15.735522 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Feb 9 10:08:15.735529 kernel: HugeTLB registered 32.0 MiB page size, pre-allocated 0 pages
Feb 9 10:08:15.735535 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Feb 9 10:08:15.735542 kernel: HugeTLB registered 64.0 KiB page size, pre-allocated 0 pages
Feb 9 10:08:15.735548 kernel: cryptd: max_cpu_qlen set to 1000
Feb 9 10:08:15.735555 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Feb 9 10:08:15.735562 kernel: ACPI: Added _OSI(Module Device)
Feb 9 10:08:15.735568 kernel: ACPI: Added _OSI(Processor Device)
Feb 9 10:08:15.735576 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Feb 9 10:08:15.735585 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Feb 9 10:08:15.735591 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Feb 9 10:08:15.735598 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Feb 9 10:08:15.735604 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Feb 9 10:08:15.735611 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Feb 9 10:08:15.735617 kernel: ACPI: Interpreter enabled
Feb 9 10:08:15.735624 kernel: ACPI: Using GIC for interrupt routing
Feb 9 10:08:15.735631 kernel: ACPI: MCFG table detected, 1 entries
Feb 9 10:08:15.735638 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Feb 9 10:08:15.735645 kernel: printk: console [ttyAMA0] enabled
Feb 9 10:08:15.735651 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Feb 9 10:08:15.735788 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Feb 9 10:08:15.735882 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Feb 9 10:08:15.735947 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Feb 9 10:08:15.736041 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Feb 9 10:08:15.736112 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Feb 9 10:08:15.736123 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Feb 9 10:08:15.736130 kernel: PCI host bridge to bus 0000:00
Feb 9 10:08:15.736201 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Feb 9 10:08:15.736261 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Feb 9 10:08:15.736320 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Feb 9 10:08:15.736380 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Feb 9 10:08:15.736464 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Feb 9 10:08:15.736537 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
Feb 9 10:08:15.736603 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
Feb 9 10:08:15.736666 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
Feb 9 10:08:15.736730 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Feb 9 10:08:15.736795 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Feb 9 10:08:15.736877 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
Feb 9 10:08:15.736945 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
Feb 9 10:08:15.737004 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Feb 9 10:08:15.737058 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Feb 9 10:08:15.737115 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Feb 9 10:08:15.737124 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Feb 9 10:08:15.737130 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Feb 9 10:08:15.737137 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Feb 9 10:08:15.737145 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Feb 9 10:08:15.737152 kernel: iommu: Default domain type: Translated
Feb 9 10:08:15.737161 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Feb 9 10:08:15.737168 kernel: vgaarb: loaded
Feb 9 10:08:15.737175 kernel: pps_core: LinuxPPS API ver. 1 registered
Feb 9 10:08:15.737182 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>
Feb 9 10:08:15.737188 kernel: PTP clock support registered
Feb 9 10:08:15.737195 kernel: Registered efivars operations
Feb 9 10:08:15.737201 kernel: clocksource: Switched to clocksource arch_sys_counter
Feb 9 10:08:15.737208 kernel: VFS: Disk quotas dquot_6.6.0
Feb 9 10:08:15.737216 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Feb 9 10:08:15.737222 kernel: pnp: PnP ACPI init
Feb 9 10:08:15.737293 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Feb 9 10:08:15.737303 kernel: pnp: PnP ACPI: found 1 devices
Feb 9 10:08:15.737310 kernel: NET: Registered PF_INET protocol family
Feb 9 10:08:15.737316 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Feb 9 10:08:15.737326 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Feb 9 10:08:15.737333 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Feb 9 10:08:15.737341 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Feb 9 10:08:15.737348 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear)
Feb 9 10:08:15.737354 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Feb 9 10:08:15.737361 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 9 10:08:15.737368 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 9 10:08:15.737374 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Feb 9 10:08:15.737381 kernel: PCI: CLS 0 bytes, default 64
Feb 9 10:08:15.737387 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
Feb 9 10:08:15.737394 kernel: kvm [1]: HYP mode not available
Feb 9 10:08:15.737403 kernel: Initialise system trusted keyrings
Feb 9 10:08:15.737411 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Feb 9 10:08:15.737417 kernel: Key type asymmetric registered
Feb 9 10:08:15.737424 kernel: Asymmetric key parser 'x509' registered
Feb 9 10:08:15.737431 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Feb 9 10:08:15.737452 kernel: io scheduler mq-deadline registered
Feb 9 10:08:15.737459 kernel: io scheduler kyber registered
Feb 9 10:08:15.737465 kernel: io scheduler bfq registered
Feb 9 10:08:15.737472 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Feb 9 10:08:15.737480 kernel: ACPI: button: Power Button [PWRB]
Feb 9 10:08:15.737490 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Feb 9 10:08:15.737556 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Feb 9 10:08:15.737564 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Feb 9 10:08:15.737573 kernel: thunder_xcv, ver 1.0
Feb 9 10:08:15.737579 kernel: thunder_bgx, ver 1.0
Feb 9 10:08:15.737586 kernel: nicpf, ver 1.0
Feb 9 10:08:15.737592 kernel: nicvf, ver 1.0
Feb 9 10:08:15.737663 kernel: rtc-efi rtc-efi.0: registered as rtc0
Feb 9 10:08:15.737723 kernel: rtc-efi rtc-efi.0: setting system clock to 2024-02-09T10:08:15 UTC (1707473295)
Feb 9 10:08:15.737731 kernel: hid: raw HID events driver (C) Jiri Kosina
Feb 9 10:08:15.737741 kernel: NET: Registered PF_INET6 protocol family
Feb 9 10:08:15.737748 kernel: Segment Routing with IPv6
Feb 9 10:08:15.737754 kernel: In-situ OAM (IOAM) with IPv6
Feb 9 10:08:15.737761 kernel: NET: Registered PF_PACKET protocol family
Feb 9 10:08:15.737767 kernel: Key type dns_resolver registered
Feb 9 10:08:15.737774 kernel: registered taskstats version 1
Feb 9 10:08:15.737782 kernel: Loading compiled-in X.509 certificates
Feb 9 10:08:15.737789 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.148-flatcar: ca91574208414224935c9cea513398977daf917d'
Feb 9 10:08:15.737795 kernel: Key type .fscrypt registered
Feb 9 10:08:15.737801 kernel: Key type fscrypt-provisioning registered
Feb 9 10:08:15.737871 kernel: ima: No TPM chip found, activating TPM-bypass!
Feb 9 10:08:15.737878 kernel: ima: Allocated hash algorithm: sha1
Feb 9 10:08:15.737885 kernel: ima: No architecture policies found
Feb 9 10:08:15.737891 kernel: Freeing unused kernel memory: 34688K
Feb 9 10:08:15.737898 kernel: Run /init as init process
Feb 9 10:08:15.737910 kernel: with arguments:
Feb 9 10:08:15.737917 kernel: /init
Feb 9 10:08:15.737923 kernel: with environment:
Feb 9 10:08:15.737930 kernel: HOME=/
Feb 9 10:08:15.737936 kernel: TERM=linux
Feb 9 10:08:15.737943 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Feb 9 10:08:15.737951 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Feb 9 10:08:15.737960 systemd[1]: Detected virtualization kvm.
Feb 9 10:08:15.737968 systemd[1]: Detected architecture arm64.
Feb 9 10:08:15.737975 systemd[1]: Running in initrd.
Feb 9 10:08:15.737982 systemd[1]: No hostname configured, using default hostname.
Feb 9 10:08:15.737991 systemd[1]: Hostname set to <localhost>.
Feb 9 10:08:15.737999 systemd[1]: Initializing machine ID from VM UUID.
Feb 9 10:08:15.738006 systemd[1]: Queued start job for default target initrd.target.
Feb 9 10:08:15.738013 systemd[1]: Started systemd-ask-password-console.path.
Feb 9 10:08:15.738021 systemd[1]: Reached target cryptsetup.target.
Feb 9 10:08:15.738029 systemd[1]: Reached target paths.target.
Feb 9 10:08:15.738036 systemd[1]: Reached target slices.target.
Feb 9 10:08:15.738043 systemd[1]: Reached target swap.target.
Feb 9 10:08:15.738050 systemd[1]: Reached target timers.target.
Feb 9 10:08:15.738057 systemd[1]: Listening on iscsid.socket.
Feb 9 10:08:15.738064 systemd[1]: Listening on iscsiuio.socket.
Feb 9 10:08:15.738073 systemd[1]: Listening on systemd-journald-audit.socket.
Feb 9 10:08:15.738082 systemd[1]: Listening on systemd-journald-dev-log.socket.
Feb 9 10:08:15.738089 systemd[1]: Listening on systemd-journald.socket.
Feb 9 10:08:15.738096 systemd[1]: Listening on systemd-networkd.socket.
Feb 9 10:08:15.738103 systemd[1]: Listening on systemd-udevd-control.socket.
Feb 9 10:08:15.738110 systemd[1]: Listening on systemd-udevd-kernel.socket.
Feb 9 10:08:15.738117 systemd[1]: Reached target sockets.target.
Feb 9 10:08:15.738123 systemd[1]: Starting kmod-static-nodes.service...
Feb 9 10:08:15.738130 systemd[1]: Finished network-cleanup.service.
Feb 9 10:08:15.738137 systemd[1]: Starting systemd-fsck-usr.service...
Feb 9 10:08:15.738145 systemd[1]: Starting systemd-journald.service...
Feb 9 10:08:15.738152 systemd[1]: Starting systemd-modules-load.service...
Feb 9 10:08:15.738162 systemd[1]: Starting systemd-resolved.service...
Feb 9 10:08:15.738169 systemd[1]: Starting systemd-vconsole-setup.service...
Feb 9 10:08:15.738176 systemd[1]: Finished kmod-static-nodes.service.
Feb 9 10:08:15.738183 systemd[1]: Finished systemd-fsck-usr.service.
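The systemd banner encodes compile-time options as +FEATURE / -FEATURE tokens (+PAM means built in, -TPM2 means omitted), with key=value entries such as default-hierarchy=unified mixed in. A minimal sketch of splitting such a banner into enabled and disabled sets; the string literal here is just a fragment of the line above:

    banner = "+PAM +AUDIT +SELINUX -APPARMOR +IMA -TPM2 -BPF_FRAMEWORK default-hierarchy=unified"

    enabled, disabled, settings = set(), set(), {}
    for token in banner.split():
        if token.startswith("+"):
            enabled.add(token[1:])
        elif token.startswith("-"):
            disabled.add(token[1:])
        elif "=" in token:
            key, _, value = token.partition("=")
            settings[key] = value  # e.g. default-hierarchy -> unified

    print("TPM2" in disabled)  # True: this initrd's systemd was built without TPM2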
Feb 9 10:08:15.738190 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Feb 9 10:08:15.738196 systemd[1]: Finished systemd-vconsole-setup.service.
Feb 9 10:08:15.738204 kernel: audit: type=1130 audit(1707473295.735:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:08:15.738212 systemd[1]: Starting dracut-cmdline-ask.service...
Feb 9 10:08:15.738222 systemd-journald[290]: Journal started
Feb 9 10:08:15.738270 systemd-journald[290]: Runtime Journal (/run/log/journal/362411a1c9bf47d2b19aa3d17147a7b0) is 6.0M, max 48.7M, 42.6M free.
Feb 9 10:08:15.735000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:08:15.725764 systemd-modules-load[291]: Inserted module 'overlay'
Feb 9 10:08:15.740245 systemd[1]: Started systemd-journald.service.
Feb 9 10:08:15.740000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:08:15.742823 kernel: audit: type=1130 audit(1707473295.740:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:08:15.743103 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Feb 9 10:08:15.743000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:08:15.746842 kernel: audit: type=1130 audit(1707473295.743:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:08:15.749846 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Feb 9 10:08:15.755856 kernel: Bridge firewalling registered
Feb 9 10:08:15.754996 systemd-modules-load[291]: Inserted module 'br_netfilter'
Feb 9 10:08:15.758842 systemd-resolved[292]: Positive Trust Anchors:
Feb 9 10:08:15.758859 systemd-resolved[292]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 9 10:08:15.758888 systemd-resolved[292]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Feb 9 10:08:15.761234 systemd[1]: Finished dracut-cmdline-ask.service.
Feb 9 10:08:15.765000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:08:15.763159 systemd-resolved[292]: Defaulting to hostname 'linux'.
Feb 9 10:08:15.770806 kernel: audit: type=1130 audit(1707473295.765:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:08:15.770838 kernel: SCSI subsystem initialized
Feb 9 10:08:15.770850 kernel: audit: type=1130 audit(1707473295.768:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:08:15.768000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:08:15.765526 systemd[1]: Started systemd-resolved.service.
Feb 9 10:08:15.768716 systemd[1]: Reached target nss-lookup.target.
Feb 9 10:08:15.772317 systemd[1]: Starting dracut-cmdline.service...
Feb 9 10:08:15.777843 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Feb 9 10:08:15.777877 kernel: device-mapper: uevent: version 1.0.3
Feb 9 10:08:15.777887 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com
Feb 9 10:08:15.780790 systemd-modules-load[291]: Inserted module 'dm_multipath'
Feb 9 10:08:15.781468 dracut-cmdline[307]: dracut-dracut-053
Feb 9 10:08:15.781698 systemd[1]: Finished systemd-modules-load.service.
Feb 9 10:08:15.782000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:08:15.785782 dracut-cmdline[307]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=14ffd9340f674a8d04c9d43eed85484d8b2b7e2bcd8b36a975c9ac66063d537d
Feb 9 10:08:15.789479 kernel: audit: type=1130 audit(1707473295.782:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:08:15.783517 systemd[1]: Starting systemd-sysctl.service...
Feb 9 10:08:15.791350 systemd[1]: Finished systemd-sysctl.service.
Feb 9 10:08:15.792000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:08:15.794843 kernel: audit: type=1130 audit(1707473295.792:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:08:15.843835 kernel: Loading iSCSI transport class v2.0-870.
Feb 9 10:08:15.851843 kernel: iscsi: registered transport (tcp)
Feb 9 10:08:15.864834 kernel: iscsi: registered transport (qla4xxx)
Feb 9 10:08:15.864866 kernel: QLogic iSCSI HBA Driver
Feb 9 10:08:15.897866 systemd[1]: Finished dracut-cmdline.service.
Feb 9 10:08:15.897000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:08:15.899401 systemd[1]: Starting dracut-pre-udev.service...
Feb 9 10:08:15.902176 kernel: audit: type=1130 audit(1707473295.897:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:08:15.942851 kernel: raid6: neonx8 gen() 11794 MB/s
Feb 9 10:08:15.959833 kernel: raid6: neonx8 xor() 9656 MB/s
Feb 9 10:08:15.976835 kernel: raid6: neonx4 gen() 12096 MB/s
Feb 9 10:08:15.993826 kernel: raid6: neonx4 xor() 11172 MB/s
Feb 9 10:08:16.010826 kernel: raid6: neonx2 gen() 12767 MB/s
Feb 9 10:08:16.027830 kernel: raid6: neonx2 xor() 9969 MB/s
Feb 9 10:08:16.044829 kernel: raid6: neonx1 gen() 10419 MB/s
Feb 9 10:08:16.061840 kernel: raid6: neonx1 xor() 8765 MB/s
Feb 9 10:08:16.078827 kernel: raid6: int64x8 gen() 5815 MB/s
Feb 9 10:08:16.095832 kernel: raid6: int64x8 xor() 3541 MB/s
Feb 9 10:08:16.112838 kernel: raid6: int64x4 gen() 7160 MB/s
Feb 9 10:08:16.129837 kernel: raid6: int64x4 xor() 3853 MB/s
Feb 9 10:08:16.146842 kernel: raid6: int64x2 gen() 6139 MB/s
Feb 9 10:08:16.163836 kernel: raid6: int64x2 xor() 3316 MB/s
Feb 9 10:08:16.180985 kernel: raid6: int64x1 gen() 5039 MB/s
Feb 9 10:08:16.198029 kernel: raid6: int64x1 xor() 2618 MB/s
Feb 9 10:08:16.198163 kernel: raid6: using algorithm neonx2 gen() 12767 MB/s
Feb 9 10:08:16.198175 kernel: raid6: .... xor() 9969 MB/s, rmw enabled
Feb 9 10:08:16.198183 kernel: raid6: using neon recovery algorithm
Feb 9 10:08:16.209179 kernel: xor: measuring software checksum speed
Feb 9 10:08:16.209215 kernel: 8regs : 17286 MB/sec
Feb 9 10:08:16.210011 kernel: 32regs : 20749 MB/sec
Feb 9 10:08:16.211170 kernel: arm64_neon : 27901 MB/sec
Feb 9 10:08:16.211182 kernel: xor: using function: arm64_neon (27901 MB/sec)
Feb 9 10:08:16.266851 kernel: Btrfs loaded, crc32c=crc32c-generic, zoned=no, fsverity=no
Feb 9 10:08:16.276265 systemd[1]: Finished dracut-pre-udev.service.
Feb 9 10:08:16.276000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:08:16.277963 systemd[1]: Starting systemd-udevd.service...
Feb 9 10:08:16.280651 kernel: audit: type=1130 audit(1707473296.276:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:08:16.276000 audit: BPF prog-id=7 op=LOAD
Feb 9 10:08:16.276000 audit: BPF prog-id=8 op=LOAD
Feb 9 10:08:16.293849 systemd-udevd[491]: Using default interface naming scheme 'v252'.
Feb 9 10:08:16.298193 systemd[1]: Started systemd-udevd.service.
Feb 9 10:08:16.297000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:08:16.299543 systemd[1]: Starting dracut-pre-trigger.service...
Feb 9 10:08:16.311027 dracut-pre-trigger[497]: rd.md=0: removing MD RAID activation
Feb 9 10:08:16.337825 systemd[1]: Finished dracut-pre-trigger.service.
Feb 9 10:08:16.337000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:08:16.339461 systemd[1]: Starting systemd-udev-trigger.service...
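The raid6 and xor benchmark lines above are the kernel timing each candidate routine and keeping the fastest; "using algorithm neonx2" and "using function: arm64_neon" are simply the argmax of the measured throughputs. The same selection in miniature, with the MB/s figures copied from the log:

    # gen() throughput per raid6 syndrome routine, MB/s (from the log above)
    raid6_gen = {
        "neonx8": 11794, "neonx4": 12096, "neonx2": 12767, "neonx1": 10419,
        "int64x8": 5815, "int64x4": 7160, "int64x2": 6139, "int64x1": 5039,
    }
    # xor checksum functions, MB/sec (from the log above)
    xor_funcs = {"8regs": 17286, "32regs": 20749, "arm64_neon": 27901}

    best_raid6 = max(raid6_gen, key=raid6_gen.get)
    best_xor = max(xor_funcs, key=xor_funcs.get)

    assert best_raid6 == "neonx2" and best_xor == "arm64_neon"
    print(best_raid6, best_xor)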
Feb 9 10:08:16.374213 systemd[1]: Finished systemd-udev-trigger.service.
Feb 9 10:08:16.374000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:08:16.403952 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Feb 9 10:08:16.406891 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Feb 9 10:08:16.406919 kernel: GPT:9289727 != 19775487
Feb 9 10:08:16.406929 kernel: GPT:Alternate GPT header not at the end of the disk.
Feb 9 10:08:16.406938 kernel: GPT:9289727 != 19775487
Feb 9 10:08:16.407831 kernel: GPT: Use GNU Parted to correct GPT errors.
Feb 9 10:08:16.407844 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 9 10:08:16.416845 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (538)
Feb 9 10:08:16.417780 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device.
Feb 9 10:08:16.423186 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device.
Feb 9 10:08:16.426722 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device.
Feb 9 10:08:16.427767 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device.
Feb 9 10:08:16.432251 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Feb 9 10:08:16.437569 systemd[1]: Starting disk-uuid.service...
Feb 9 10:08:16.472136 disk-uuid[562]: Primary Header is updated.
Feb 9 10:08:16.472136 disk-uuid[562]: Secondary Entries is updated.
Feb 9 10:08:16.472136 disk-uuid[562]: Secondary Header is updated.
Feb 9 10:08:16.475839 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 9 10:08:16.482837 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 9 10:08:17.486833 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 9 10:08:17.486889 disk-uuid[563]: The operation has completed successfully.
Feb 9 10:08:17.505754 systemd[1]: disk-uuid.service: Deactivated successfully.
Feb 9 10:08:17.505863 systemd[1]: Finished disk-uuid.service.
Feb 9 10:08:17.505000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:08:17.505000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:08:17.511860 systemd[1]: Starting verity-setup.service...
Feb 9 10:08:17.525923 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Feb 9 10:08:17.547070 systemd[1]: Found device dev-mapper-usr.device.
Feb 9 10:08:17.548614 systemd[1]: Mounting sysusr-usr.mount...
Feb 9 10:08:17.549406 systemd[1]: Finished verity-setup.service.
Feb 9 10:08:17.549000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:08:17.596638 systemd[1]: Mounted sysusr-usr.mount.
Feb 9 10:08:17.597788 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none.
Feb 9 10:08:17.597488 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met.
Feb 9 10:08:17.598140 systemd[1]: Starting ignition-setup.service...
Feb 9 10:08:17.599963 systemd[1]: Starting parse-ip-for-networkd.service...
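The GPT complaint above is arithmetic, not corruption: on a disk of 19775488 512-byte logical blocks the backup (alternate) GPT header must sit at the last LBA, 19775487, but this image carries it at LBA 9289727, the tell-tale of a disk image built for a smaller disk and then placed on a larger virtual one. A quick check of the expected position, using only the block count from the virtio_blk line:

    total_lbas = 19_775_488               # "virtio_blk ... 19775488 512-byte logical blocks"
    expected_alt_header = total_lbas - 1  # GPT keeps its backup header at the last LBA
    found_alt_header = 9_289_727          # where this image actually put it

    assert expected_alt_header == 19_775_487
    # Reproduces the "!=" pair the kernel printed:
    print(f"GPT:{found_alt_header} != {expected_alt_header}")

GNU Parted (which the kernel itself suggests) or sgdisk -e can relocate the backup structures to the true end of the disk; here disk-uuid.service performs the equivalent repair, hence the "Secondary Header is updated." lines that follow.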
Feb 9 10:08:17.605836 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Feb 9 10:08:17.605867 kernel: BTRFS info (device vda6): using free space tree
Feb 9 10:08:17.606828 kernel: BTRFS info (device vda6): has skinny extents
Feb 9 10:08:17.613673 systemd[1]: mnt-oem.mount: Deactivated successfully.
Feb 9 10:08:17.618524 systemd[1]: Finished ignition-setup.service.
Feb 9 10:08:17.618000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:08:17.619981 systemd[1]: Starting ignition-fetch-offline.service...
Feb 9 10:08:17.683659 systemd[1]: Finished parse-ip-for-networkd.service.
Feb 9 10:08:17.684000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:08:17.684000 audit: BPF prog-id=9 op=LOAD
Feb 9 10:08:17.685711 systemd[1]: Starting systemd-networkd.service...
Feb 9 10:08:17.694923 ignition[648]: Ignition 2.14.0
Feb 9 10:08:17.694934 ignition[648]: Stage: fetch-offline
Feb 9 10:08:17.694972 ignition[648]: no configs at "/usr/lib/ignition/base.d"
Feb 9 10:08:17.694982 ignition[648]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 9 10:08:17.695120 ignition[648]: parsed url from cmdline: ""
Feb 9 10:08:17.695125 ignition[648]: no config URL provided
Feb 9 10:08:17.695130 ignition[648]: reading system config file "/usr/lib/ignition/user.ign"
Feb 9 10:08:17.695137 ignition[648]: no config at "/usr/lib/ignition/user.ign"
Feb 9 10:08:17.695154 ignition[648]: op(1): [started] loading QEMU firmware config module
Feb 9 10:08:17.695159 ignition[648]: op(1): executing: "modprobe" "qemu_fw_cfg"
Feb 9 10:08:17.700157 ignition[648]: op(1): [finished] loading QEMU firmware config module
Feb 9 10:08:17.710993 systemd-networkd[740]: lo: Link UP
Feb 9 10:08:17.711006 systemd-networkd[740]: lo: Gained carrier
Feb 9 10:08:17.711352 systemd-networkd[740]: Enumeration completed
Feb 9 10:08:17.711521 systemd-networkd[740]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 9 10:08:17.716000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:08:17.712577 systemd-networkd[740]: eth0: Link UP
Feb 9 10:08:17.712580 systemd-networkd[740]: eth0: Gained carrier
Feb 9 10:08:17.715240 systemd[1]: Started systemd-networkd.service.
Feb 9 10:08:17.716363 systemd[1]: Reached target network.target.
Feb 9 10:08:17.717772 systemd[1]: Starting iscsiuio.service...
Feb 9 10:08:17.726000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:08:17.726893 systemd[1]: Started iscsiuio.service.
Feb 9 10:08:17.728418 systemd[1]: Starting iscsid.service...
Feb 9 10:08:17.731453 iscsid[747]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi
Feb 9 10:08:17.731453 iscsid[747]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier].
Feb 9 10:08:17.731453 iscsid[747]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6.
Feb 9 10:08:17.731453 iscsid[747]: If using hardware iscsi like qla4xxx this message can be ignored.
Feb 9 10:08:17.731453 iscsid[747]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi
Feb 9 10:08:17.731453 iscsid[747]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf
Feb 9 10:08:17.738000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:08:17.734869 systemd-networkd[740]: eth0: DHCPv4 address 10.0.0.132/16, gateway 10.0.0.1 acquired from 10.0.0.1
Feb 9 10:08:17.738468 systemd[1]: Started iscsid.service.
Feb 9 10:08:17.740383 systemd[1]: Starting dracut-initqueue.service...
Feb 9 10:08:17.750000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:08:17.749974 systemd[1]: Finished dracut-initqueue.service.
Feb 9 10:08:17.750938 systemd[1]: Reached target remote-fs-pre.target.
Feb 9 10:08:17.751851 systemd[1]: Reached target remote-cryptsetup.target.
Feb 9 10:08:17.752768 systemd[1]: Reached target remote-fs.target.
Feb 9 10:08:17.754214 systemd[1]: Starting dracut-pre-mount.service...
Feb 9 10:08:17.761449 systemd[1]: Finished dracut-pre-mount.service.
Feb 9 10:08:17.762000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:08:17.796449 ignition[648]: parsing config with SHA512: 4c1f5da562dc93a7afebf7f966d3108e5e6c9542047ab5855e7a0b6299a9c3e96e7848c1cd2a69da23877107fd5cd7427bea4aa908615abf4f45d534ca7df2f1
Feb 9 10:08:17.841985 unknown[648]: fetched base config from "system"
Feb 9 10:08:17.841995 unknown[648]: fetched user config from "qemu"
Feb 9 10:08:17.842561 ignition[648]: fetch-offline: fetch-offline passed
Feb 9 10:08:17.843739 systemd[1]: Finished ignition-fetch-offline.service.
Feb 9 10:08:17.844000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:08:17.842617 ignition[648]: Ignition finished successfully
Feb 9 10:08:17.845243 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Feb 9 10:08:17.845906 systemd[1]: Starting ignition-kargs.service...
Feb 9 10:08:17.854411 ignition[762]: Ignition 2.14.0
Feb 9 10:08:17.854420 ignition[762]: Stage: kargs
Feb 9 10:08:17.854506 ignition[762]: no configs at "/usr/lib/ignition/base.d"
Feb 9 10:08:17.854516 ignition[762]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 9 10:08:17.855716 ignition[762]: kargs: kargs passed
Feb 9 10:08:17.855760 ignition[762]: Ignition finished successfully
Feb 9 10:08:17.858000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:08:17.858448 systemd[1]: Finished ignition-kargs.service.
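The iscsid warning spells out its own fix: it wants /etc/iscsi/initiatorname.iscsi containing a single InitiatorName=iqn... line. A minimal sketch of writing one; the reversed domain and identifier below are illustrative placeholders in the format the message describes, not values taken from this system:

    # Hypothetical example values; substitute your own reversed domain and identifier.
    reversed_domain = "org.example"
    identifier = "node1"
    year_month = "2024-02"  # iqn date fields: yyyy-mm when the naming authority owned the domain

    iqn = f"iqn.{year_month}.{reversed_domain}:{identifier}"
    with open("/etc/iscsi/initiatorname.iscsi", "w") as f:
        f.write(f"InitiatorName={iqn}\n")

    print(iqn)  # e.g. iqn.2024-02.org.example:node1

On this QEMU boot the warning is harmless, since no iSCSI targets are involved; it is the "If using hardware iscsi ... can be ignored" case.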
Feb 9 10:08:17.860347 systemd[1]: Starting ignition-disks.service...
Feb 9 10:08:17.866415 ignition[768]: Ignition 2.14.0
Feb 9 10:08:17.866432 ignition[768]: Stage: disks
Feb 9 10:08:17.866512 ignition[768]: no configs at "/usr/lib/ignition/base.d"
Feb 9 10:08:17.866522 ignition[768]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 9 10:08:17.868000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:08:17.868422 systemd[1]: Finished ignition-disks.service.
Feb 9 10:08:17.867600 ignition[768]: disks: disks passed
Feb 9 10:08:17.869120 systemd[1]: Reached target initrd-root-device.target.
Feb 9 10:08:17.867638 ignition[768]: Ignition finished successfully
Feb 9 10:08:17.870444 systemd[1]: Reached target local-fs-pre.target.
Feb 9 10:08:17.871550 systemd[1]: Reached target local-fs.target.
Feb 9 10:08:17.872642 systemd[1]: Reached target sysinit.target.
Feb 9 10:08:17.873808 systemd[1]: Reached target basic.target.
Feb 9 10:08:17.875575 systemd[1]: Starting systemd-fsck-root.service...
Feb 9 10:08:17.886039 systemd-fsck[776]: ROOT: clean, 602/553520 files, 56013/553472 blocks
Feb 9 10:08:17.889201 systemd[1]: Finished systemd-fsck-root.service.
Feb 9 10:08:17.889000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:08:17.890649 systemd[1]: Mounting sysroot.mount...
Feb 9 10:08:17.896468 systemd[1]: Mounted sysroot.mount.
Feb 9 10:08:17.897649 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none.
Feb 9 10:08:17.897209 systemd[1]: Reached target initrd-root-fs.target.
Feb 9 10:08:17.899863 systemd[1]: Mounting sysroot-usr.mount...
Feb 9 10:08:17.900682 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met.
Feb 9 10:08:17.900724 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Feb 9 10:08:17.900747 systemd[1]: Reached target ignition-diskful.target.
Feb 9 10:08:17.902467 systemd[1]: Mounted sysroot-usr.mount.
Feb 9 10:08:17.904284 systemd[1]: Starting initrd-setup-root.service...
Feb 9 10:08:17.908316 initrd-setup-root[786]: cut: /sysroot/etc/passwd: No such file or directory
Feb 9 10:08:17.911703 initrd-setup-root[794]: cut: /sysroot/etc/group: No such file or directory
Feb 9 10:08:17.915724 initrd-setup-root[802]: cut: /sysroot/etc/shadow: No such file or directory
Feb 9 10:08:17.919431 initrd-setup-root[810]: cut: /sysroot/etc/gshadow: No such file or directory
Feb 9 10:08:17.944201 systemd[1]: Finished initrd-setup-root.service.
Feb 9 10:08:17.944000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:08:17.945632 systemd[1]: Starting ignition-mount.service...
Feb 9 10:08:17.946936 systemd[1]: Starting sysroot-boot.service...
Feb 9 10:08:17.950945 bash[827]: umount: /sysroot/usr/share/oem: not mounted.
Feb 9 10:08:17.958540 ignition[829]: INFO : Ignition 2.14.0
Feb 9 10:08:17.958540 ignition[829]: INFO : Stage: mount
Feb 9 10:08:17.960751 ignition[829]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 9 10:08:17.960751 ignition[829]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 9 10:08:17.960751 ignition[829]: INFO : mount: mount passed
Feb 9 10:08:17.960751 ignition[829]: INFO : Ignition finished successfully
Feb 9 10:08:17.961000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:08:17.960892 systemd[1]: Finished ignition-mount.service.
Feb 9 10:08:17.965902 systemd[1]: Finished sysroot-boot.service.
Feb 9 10:08:17.965000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:08:18.556460 systemd[1]: Mounting sysroot-usr-share-oem.mount...
Feb 9 10:08:18.561829 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (838)
Feb 9 10:08:18.563825 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Feb 9 10:08:18.563849 kernel: BTRFS info (device vda6): using free space tree
Feb 9 10:08:18.563859 kernel: BTRFS info (device vda6): has skinny extents
Feb 9 10:08:18.566570 systemd[1]: Mounted sysroot-usr-share-oem.mount.
Feb 9 10:08:18.568158 systemd[1]: Starting ignition-files.service...
Feb 9 10:08:18.581645 ignition[858]: INFO : Ignition 2.14.0
Feb 9 10:08:18.581645 ignition[858]: INFO : Stage: files
Feb 9 10:08:18.582782 ignition[858]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 9 10:08:18.582782 ignition[858]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 9 10:08:18.584377 ignition[858]: DEBUG : files: compiled without relabeling support, skipping
Feb 9 10:08:18.589183 ignition[858]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Feb 9 10:08:18.589183 ignition[858]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Feb 9 10:08:18.592200 ignition[858]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Feb 9 10:08:18.593176 ignition[858]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Feb 9 10:08:18.594280 unknown[858]: wrote ssh authorized keys file for user: core
Feb 9 10:08:18.595116 ignition[858]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Feb 9 10:08:18.595116 ignition[858]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Feb 9 10:08:18.595116 ignition[858]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Feb 9 10:08:18.616674 ignition[858]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Feb 9 10:08:18.686078 ignition[858]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Feb 9 10:08:18.687574 ignition[858]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/crictl-v1.27.0-linux-arm64.tar.gz"
Feb 9 10:08:18.687574 ignition[858]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.27.0/crictl-v1.27.0-linux-arm64.tar.gz: attempt #1
Feb 9 10:08:18.985847 ignition[858]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Feb 9 10:08:19.109328 ignition[858]: DEBUG : files: createFilesystemsFiles: createFiles: op(4): file matches expected sum of: db062e43351a63347871e7094115be2ae3853afcd346d47f7b51141da8c3202c2df58d2e17359322f632abcb37474fd7fdb3b7aadbc5cfd5cf6d3bad040b6251
Feb 9 10:08:19.111381 ignition[858]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/crictl-v1.27.0-linux-arm64.tar.gz"
Feb 9 10:08:19.111381 ignition[858]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/cni-plugins-linux-arm64-v1.3.0.tgz"
Feb 9 10:08:19.111381 ignition[858]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://github.com/containernetworking/plugins/releases/download/v1.3.0/cni-plugins-linux-arm64-v1.3.0.tgz: attempt #1
Feb 9 10:08:19.339676 ignition[858]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK
Feb 9 10:08:19.581324 ignition[858]: DEBUG : files: createFilesystemsFiles: createFiles: op(5): file matches expected sum of: b2b7fb74f1b3cb8928f49e5bf9d4bc686e057e837fac3caf1b366d54757921dba80d70cc010399b274d136e8dee9a25b1ad87cdfdc4ffcf42cf88f3e8f99587a
Feb 9 10:08:19.583563 ignition[858]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/cni-plugins-linux-arm64-v1.3.0.tgz"
Feb 9 10:08:19.583563 ignition[858]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/etc/docker/daemon.json"
Feb 9 10:08:19.583563 ignition[858]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/etc/docker/daemon.json"
Feb 9 10:08:19.583563 ignition[858]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/opt/bin/kubectl"
Feb 9 10:08:19.583563 ignition[858]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET https://dl.k8s.io/release/v1.28.1/bin/linux/arm64/kubectl: attempt #1
Feb 9 10:08:19.627594 ignition[858]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET result: OK
Feb 9 10:08:19.731964 systemd-networkd[740]: eth0: Gained IPv6LL
Feb 9 10:08:20.060006 ignition[858]: DEBUG : files: createFilesystemsFiles: createFiles: op(7): file matches expected sum of: 6a5c9c02a29126949f096415bb1761a0c0ad44168e2ab3d0409982701da58f96223bec354828ddf958e945ef1ce63c0ad41e77cbcbcce0756163e71b4fbae432
Feb 9 10:08:20.060006 ignition[858]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/opt/bin/kubectl"
Feb 9 10:08:20.060006 ignition[858]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/opt/bin/kubelet"
Feb 9 10:08:20.060006 ignition[858]: INFO : files: createFilesystemsFiles: createFiles: op(8): GET https://dl.k8s.io/release/v1.28.1/bin/linux/arm64/kubelet: attempt #1
Feb 9 10:08:20.082619 ignition[858]: INFO : files: createFilesystemsFiles: createFiles: op(8): GET result: OK
Feb 9 10:08:20.672262 ignition[858]: DEBUG : files: createFilesystemsFiles: createFiles: op(8): file matches expected sum of: 5a898ef543a6482895101ea58e33602e3c0a7682d322aaf08ac3dc8a5a3c8da8f09600d577024549288f8cebb1a86f9c79927796b69a3d8fe989ca8f12b147d6
Feb 9 10:08:20.672262 ignition[858]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/opt/bin/kubelet"
Feb 9 10:08:20.672262 ignition[858]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/opt/bin/kubeadm"
Feb 9 10:08:20.672262 ignition[858]: INFO : files: createFilesystemsFiles: createFiles: op(9): GET https://dl.k8s.io/release/v1.28.1/bin/linux/arm64/kubeadm: attempt #1
Feb 9 10:08:20.694076 ignition[858]: INFO : files: createFilesystemsFiles: createFiles: op(9): GET result: OK
Feb 9 10:08:21.163080 ignition[858]: DEBUG : files: createFilesystemsFiles: createFiles: op(9): file matches expected sum of: 5a08b81f9cc82d3cce21130856ca63b8dafca9149d9775dd25b376eb0f18209aa0e4a47c0a6d7e6fb1316aacd5d59dec770f26c09120c866949d70bc415518b3
Feb 9 10:08:21.163080 ignition[858]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/opt/bin/kubeadm"
Feb 9 10:08:21.167411 ignition[858]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Feb 9 10:08:21.167411 ignition[858]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1
Feb 9 10:08:21.368829 ignition[858]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Feb 9 10:08:21.466671 ignition[858]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Feb 9 10:08:21.466671 ignition[858]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/home/core/install.sh"
Feb 9 10:08:21.469468 ignition[858]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/home/core/install.sh"
Feb 9 10:08:21.469468 ignition[858]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/home/core/nginx.yaml"
Feb 9 10:08:21.469468 ignition[858]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/home/core/nginx.yaml"
Feb 9 10:08:21.469468 ignition[858]: INFO : files: createFilesystemsFiles: createFiles: op(d): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Feb 9 10:08:21.469468 ignition[858]: INFO : files: createFilesystemsFiles: createFiles: op(d): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Feb 9 10:08:21.469468 ignition[858]: INFO : files: createFilesystemsFiles: createFiles: op(e): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Feb 9 10:08:21.469468 ignition[858]: INFO : files: createFilesystemsFiles: createFiles: op(e): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Feb 9 10:08:21.469468 ignition[858]: INFO : files: createFilesystemsFiles: createFiles: op(f): [started] writing file "/sysroot/etc/flatcar/update.conf"
Feb 9 10:08:21.469468 ignition[858]: INFO : files: createFilesystemsFiles: createFiles: op(f): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Feb 9 10:08:21.469468 ignition[858]: INFO : files: op(10): [started] processing unit "prepare-cni-plugins.service"
Feb 9 10:08:21.469468 ignition[858]: INFO : files: op(10): op(11): [started] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service"
Feb 9 10:08:21.469468 ignition[858]: INFO : files: op(10): op(11): [finished] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service"
Feb 9 10:08:21.469468 ignition[858]: INFO : files: op(10): [finished] processing unit "prepare-cni-plugins.service"
Feb 9 10:08:21.469468 ignition[858]: INFO : files: op(12): [started] processing unit "prepare-critools.service"
Feb 9 10:08:21.469468 ignition[858]: INFO : files: op(12): op(13): [started] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service"
Feb 9 10:08:21.469468 ignition[858]: INFO : files: op(12): op(13): [finished] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service"
Feb 9 10:08:21.469468 ignition[858]: INFO : files: op(12): [finished] processing unit "prepare-critools.service"
Feb 9 10:08:21.492425 ignition[858]: INFO : files: op(14): [started] processing unit "prepare-helm.service"
Feb 9 10:08:21.492425 ignition[858]: INFO : files: op(14): op(15): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Feb 9 10:08:21.492425 ignition[858]: INFO : files: op(14): op(15): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Feb 9 10:08:21.492425 ignition[858]: INFO : files: op(14): [finished] processing unit "prepare-helm.service"
Feb 9 10:08:21.492425 ignition[858]: INFO : files: op(16): [started] processing unit "coreos-metadata.service"
Feb 9 10:08:21.492425 ignition[858]: INFO : files: op(16): op(17): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Feb 9 10:08:21.492425 ignition[858]: INFO : files: op(16): op(17): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Feb 9 10:08:21.492425 ignition[858]: INFO : files: op(16): [finished] processing unit "coreos-metadata.service"
Feb 9 10:08:21.492425 ignition[858]: INFO : files: op(18): [started] setting preset to enabled for "prepare-cni-plugins.service"
Feb 9 10:08:21.492425 ignition[858]: INFO : files: op(18): [finished] setting preset to enabled for "prepare-cni-plugins.service"
Feb 9 10:08:21.492425 ignition[858]: INFO : files: op(19): [started] setting preset to enabled for "prepare-critools.service"
Feb 9 10:08:21.492425 ignition[858]: INFO : files: op(19): [finished] setting preset to enabled for "prepare-critools.service"
Feb 9 10:08:21.492425 ignition[858]: INFO : files: op(1a): [started] setting preset to enabled for "prepare-helm.service"
Feb 9 10:08:21.492425 ignition[858]: INFO : files: op(1a): [finished] setting preset to enabled for "prepare-helm.service"
Feb 9 10:08:21.492425 ignition[858]: INFO : files: op(1b): [started] setting preset to disabled for "coreos-metadata.service"
Feb 9 10:08:21.492425 ignition[858]: INFO : files: op(1b): op(1c): [started] removing enablement symlink(s) for "coreos-metadata.service"
Feb 9 10:08:21.521038 ignition[858]: INFO : files: op(1b): op(1c): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Feb 9 10:08:21.523092 ignition[858]: INFO : files: op(1b): [finished] setting preset to disabled for "coreos-metadata.service"
Feb 9 10:08:21.523092 ignition[858]: INFO : files: createResultFile: createFiles: op(1d): [started] writing file "/sysroot/etc/.ignition-result.json"
Feb 9 10:08:21.523092 ignition[858]: INFO : files: createResultFile: createFiles: op(1d): [finished] writing file "/sysroot/etc/.ignition-result.json"
Feb 9 10:08:21.523092 ignition[858]: INFO : files: files passed
Feb 9 10:08:21.523092 ignition[858]: INFO : Ignition finished successfully
Feb 9 10:08:21.532553 kernel: kauditd_printk_skb: 23 callbacks suppressed
Feb 9 10:08:21.532574 kernel: audit: type=1130 audit(1707473301.524:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:08:21.524000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:08:21.523135 systemd[1]: Finished ignition-files.service.
Feb 9 10:08:21.526345 systemd[1]: Starting initrd-setup-root-after-ignition.service...
Feb 9 10:08:21.538035 kernel: audit: type=1130 audit(1707473301.534:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:08:21.538052 kernel: audit: type=1131 audit(1707473301.534:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:08:21.534000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:08:21.534000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:08:21.529838 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile).
Feb 9 10:08:21.541926 kernel: audit: type=1130 audit(1707473301.538:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:08:21.538000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:08:21.542032 initrd-setup-root-after-ignition[882]: grep: /sysroot/usr/share/oem/oem-release: No such file or directory
Feb 9 10:08:21.530453 systemd[1]: Starting ignition-quench.service...
Feb 9 10:08:21.544831 initrd-setup-root-after-ignition[886]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 9 10:08:21.533225 systemd[1]: ignition-quench.service: Deactivated successfully.
Feb 9 10:08:21.533305 systemd[1]: Finished ignition-quench.service.
Feb 9 10:08:21.534266 systemd[1]: Finished initrd-setup-root-after-ignition.service.
Feb 9 10:08:21.539056 systemd[1]: Reached target ignition-complete.target.
Feb 9 10:08:21.543328 systemd[1]: Starting initrd-parse-etc.service...
Feb 9 10:08:21.555543 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Feb 9 10:08:21.555632 systemd[1]: Finished initrd-parse-etc.service.
Feb 9 10:08:21.560843 kernel: audit: type=1130 audit(1707473301.556:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:08:21.560860 kernel: audit: type=1131 audit(1707473301.556:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=?
res=success' Feb 9 10:08:21.556000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 10:08:21.556000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 10:08:21.557140 systemd[1]: Reached target initrd-fs.target. Feb 9 10:08:21.561551 systemd[1]: Reached target initrd.target. Feb 9 10:08:21.562721 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Feb 9 10:08:21.563436 systemd[1]: Starting dracut-pre-pivot.service... Feb 9 10:08:21.573202 systemd[1]: Finished dracut-pre-pivot.service. Feb 9 10:08:21.573000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 10:08:21.574676 systemd[1]: Starting initrd-cleanup.service... Feb 9 10:08:21.577331 kernel: audit: type=1130 audit(1707473301.573:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 10:08:21.582210 systemd[1]: Stopped target nss-lookup.target. Feb 9 10:08:21.583073 systemd[1]: Stopped target remote-cryptsetup.target. Feb 9 10:08:21.584311 systemd[1]: Stopped target timers.target. Feb 9 10:08:21.585484 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Feb 9 10:08:21.585000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 10:08:21.585584 systemd[1]: Stopped dracut-pre-pivot.service. Feb 9 10:08:21.590173 kernel: audit: type=1131 audit(1707473301.585:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 10:08:21.586730 systemd[1]: Stopped target initrd.target. Feb 9 10:08:21.589738 systemd[1]: Stopped target basic.target. Feb 9 10:08:21.590910 systemd[1]: Stopped target ignition-complete.target. Feb 9 10:08:21.592098 systemd[1]: Stopped target ignition-diskful.target. Feb 9 10:08:21.593253 systemd[1]: Stopped target initrd-root-device.target. Feb 9 10:08:21.594563 systemd[1]: Stopped target remote-fs.target. Feb 9 10:08:21.595737 systemd[1]: Stopped target remote-fs-pre.target. Feb 9 10:08:21.597049 systemd[1]: Stopped target sysinit.target. Feb 9 10:08:21.598162 systemd[1]: Stopped target local-fs.target. Feb 9 10:08:21.599303 systemd[1]: Stopped target local-fs-pre.target. Feb 9 10:08:21.600442 systemd[1]: Stopped target swap.target. Feb 9 10:08:21.601000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 10:08:21.601507 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Feb 9 10:08:21.606060 kernel: audit: type=1131 audit(1707473301.601:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 10:08:21.601607 systemd[1]: Stopped dracut-pre-mount.service. 
Feb 9 10:08:21.608835 kernel: audit: type=1131 audit(1707473301.606:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:08:21.606000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:08:21.602742 systemd[1]: Stopped target cryptsetup.target.
Feb 9 10:08:21.608000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:08:21.605533 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Feb 9 10:08:21.605630 systemd[1]: Stopped dracut-initqueue.service.
Feb 9 10:08:21.606886 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Feb 9 10:08:21.606983 systemd[1]: Stopped ignition-fetch-offline.service.
Feb 9 10:08:21.609911 systemd[1]: Stopped target paths.target.
Feb 9 10:08:21.610927 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Feb 9 10:08:21.615836 systemd[1]: Stopped systemd-ask-password-console.path.
Feb 9 10:08:21.616746 systemd[1]: Stopped target slices.target.
Feb 9 10:08:21.617931 systemd[1]: Stopped target sockets.target.
Feb 9 10:08:21.619024 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Feb 9 10:08:21.620000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:08:21.619129 systemd[1]: Stopped initrd-setup-root-after-ignition.service.
Feb 9 10:08:21.620000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:08:21.620337 systemd[1]: ignition-files.service: Deactivated successfully.
Feb 9 10:08:21.620426 systemd[1]: Stopped ignition-files.service.
Feb 9 10:08:21.623987 iscsid[747]: iscsid shutting down.
Feb 9 10:08:21.622381 systemd[1]: Stopping ignition-mount.service...
Feb 9 10:08:21.624000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:08:21.623658 systemd[1]: Stopping iscsid.service...
Feb 9 10:08:21.624472 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Feb 9 10:08:21.624581 systemd[1]: Stopped kmod-static-nodes.service.
Feb 9 10:08:21.629634 ignition[899]: INFO : Ignition 2.14.0
Feb 9 10:08:21.629634 ignition[899]: INFO : Stage: umount
Feb 9 10:08:21.629634 ignition[899]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 9 10:08:21.629634 ignition[899]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 9 10:08:21.630000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:08:21.632000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:08:21.626394 systemd[1]: Stopping sysroot-boot.service...
Feb 9 10:08:21.636334 ignition[899]: INFO : umount: umount passed
Feb 9 10:08:21.636334 ignition[899]: INFO : Ignition finished successfully
Feb 9 10:08:21.634000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:08:21.636000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:08:21.630221 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Feb 9 10:08:21.630360 systemd[1]: Stopped systemd-udev-trigger.service.
Feb 9 10:08:21.631641 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Feb 9 10:08:21.641000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:08:21.631736 systemd[1]: Stopped dracut-pre-trigger.service.
Feb 9 10:08:21.642000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:08:21.634235 systemd[1]: iscsid.service: Deactivated successfully.
Feb 9 10:08:21.643000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:08:21.634316 systemd[1]: Stopped iscsid.service.
Feb 9 10:08:21.636009 systemd[1]: ignition-mount.service: Deactivated successfully.
Feb 9 10:08:21.646000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:08:21.646000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:08:21.636123 systemd[1]: Stopped ignition-mount.service.
Feb 9 10:08:21.648000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:08:21.639624 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Feb 9 10:08:21.649000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:08:21.639881 systemd[1]: iscsid.socket: Deactivated successfully.
Feb 9 10:08:21.639911 systemd[1]: Closed iscsid.socket.
Feb 9 10:08:21.640549 systemd[1]: ignition-disks.service: Deactivated successfully.
Feb 9 10:08:21.653000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:08:21.640588 systemd[1]: Stopped ignition-disks.service.
Feb 9 10:08:21.641945 systemd[1]: ignition-kargs.service: Deactivated successfully.
Feb 9 10:08:21.641986 systemd[1]: Stopped ignition-kargs.service.
Feb 9 10:08:21.643399 systemd[1]: ignition-setup.service: Deactivated successfully.
Feb 9 10:08:21.643440 systemd[1]: Stopped ignition-setup.service.
Feb 9 10:08:21.644745 systemd[1]: Stopping iscsiuio.service...
Feb 9 10:08:21.646623 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Feb 9 10:08:21.646701 systemd[1]: Finished initrd-cleanup.service.
Feb 9 10:08:21.647742 systemd[1]: iscsiuio.service: Deactivated successfully.
Feb 9 10:08:21.647842 systemd[1]: Stopped iscsiuio.service.
Feb 9 10:08:21.648967 systemd[1]: sysroot-boot.service: Deactivated successfully.
Feb 9 10:08:21.649035 systemd[1]: Stopped sysroot-boot.service.
Feb 9 10:08:21.650015 systemd[1]: Stopped target network.target.
Feb 9 10:08:21.662000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:08:21.651099 systemd[1]: iscsiuio.socket: Deactivated successfully.
Feb 9 10:08:21.651129 systemd[1]: Closed iscsiuio.socket.
Feb 9 10:08:21.652172 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Feb 9 10:08:21.652211 systemd[1]: Stopped initrd-setup-root.service.
Feb 9 10:08:21.667000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:08:21.653321 systemd[1]: Stopping systemd-networkd.service...
Feb 9 10:08:21.668000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:08:21.654700 systemd[1]: Stopping systemd-resolved.service...
Feb 9 10:08:21.670000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:08:21.661350 systemd-networkd[740]: eth0: DHCPv6 lease lost
Feb 9 10:08:21.671000 audit: BPF prog-id=9 op=UNLOAD
Feb 9 10:08:21.662383 systemd[1]: systemd-networkd.service: Deactivated successfully.
Feb 9 10:08:21.662472 systemd[1]: Stopped systemd-networkd.service.
Feb 9 10:08:21.663925 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Feb 9 10:08:21.663952 systemd[1]: Closed systemd-networkd.socket.
Feb 9 10:08:21.676000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:08:21.665348 systemd[1]: Stopping network-cleanup.service...
Feb 9 10:08:21.666385 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Feb 9 10:08:21.666440 systemd[1]: Stopped parse-ip-for-networkd.service.
Feb 9 10:08:21.667854 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Feb 9 10:08:21.680000 audit: BPF prog-id=6 op=UNLOAD
Feb 9 10:08:21.680000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:08:21.667893 systemd[1]: Stopped systemd-sysctl.service.
Feb 9 10:08:21.681000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:08:21.669587 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Feb 9 10:08:21.669624 systemd[1]: Stopped systemd-modules-load.service.
Feb 9 10:08:21.670706 systemd[1]: Stopping systemd-udevd.service...
Feb 9 10:08:21.685000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:08:21.675575 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Feb 9 10:08:21.686000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:08:21.675986 systemd[1]: systemd-resolved.service: Deactivated successfully.
Feb 9 10:08:21.688000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:08:21.676074 systemd[1]: Stopped systemd-resolved.service.
Feb 9 10:08:21.679847 systemd[1]: systemd-udevd.service: Deactivated successfully.
Feb 9 10:08:21.691000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:08:21.679963 systemd[1]: Stopped systemd-udevd.service.
Feb 9 10:08:21.681588 systemd[1]: network-cleanup.service: Deactivated successfully.
Feb 9 10:08:21.681657 systemd[1]: Stopped network-cleanup.service.
Feb 9 10:08:21.682885 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Feb 9 10:08:21.694000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:08:21.695000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:08:21.682918 systemd[1]: Closed systemd-udevd-control.socket.
Feb 9 10:08:21.683897 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Feb 9 10:08:21.683923 systemd[1]: Closed systemd-udevd-kernel.socket.
Feb 9 10:08:21.685207 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Feb 9 10:08:21.685242 systemd[1]: Stopped dracut-pre-udev.service.
Feb 9 10:08:21.686439 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Feb 9 10:08:21.686472 systemd[1]: Stopped dracut-cmdline.service.
Feb 9 10:08:21.687595 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Feb 9 10:08:21.687627 systemd[1]: Stopped dracut-cmdline-ask.service.
Feb 9 10:08:21.690010 systemd[1]: Starting initrd-udevadm-cleanup-db.service...
Feb 9 10:08:21.690680 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 9 10:08:21.690728 systemd[1]: Stopped systemd-vconsole-setup.service.
Feb 9 10:08:21.694764 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Feb 9 10:08:21.694859 systemd[1]: Finished initrd-udevadm-cleanup-db.service.
Feb 9 10:08:21.695959 systemd[1]: Reached target initrd-switch-root.target.
Feb 9 10:08:21.698040 systemd[1]: Starting initrd-switch-root.service...
Feb 9 10:08:21.703788 systemd[1]: Switching root.
Feb 9 10:08:21.722076 systemd-journald[290]: Journal stopped
Feb 9 10:08:23.767601 systemd-journald[290]: Received SIGTERM from PID 1 (systemd).
Feb 9 10:08:23.767658 kernel: SELinux: Class mctp_socket not defined in policy.
Feb 9 10:08:23.767702 kernel: SELinux: Class anon_inode not defined in policy.
Feb 9 10:08:23.767716 kernel: SELinux: the above unknown classes and permissions will be allowed
Feb 9 10:08:23.767726 kernel: SELinux: policy capability network_peer_controls=1
Feb 9 10:08:23.767738 kernel: SELinux: policy capability open_perms=1
Feb 9 10:08:23.767759 kernel: SELinux: policy capability extended_socket_class=1
Feb 9 10:08:23.767768 kernel: SELinux: policy capability always_check_network=0
Feb 9 10:08:23.767785 kernel: SELinux: policy capability cgroup_seclabel=1
Feb 9 10:08:23.767796 kernel: SELinux: policy capability nnp_nosuid_transition=1
Feb 9 10:08:23.767832 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Feb 9 10:08:23.767843 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Feb 9 10:08:23.767856 systemd[1]: Successfully loaded SELinux policy in 31.483ms.
Feb 9 10:08:23.767873 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 8.632ms.
Feb 9 10:08:23.767886 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Feb 9 10:08:23.767903 systemd[1]: Detected virtualization kvm.
Feb 9 10:08:23.767917 systemd[1]: Detected architecture arm64.
Feb 9 10:08:23.767927 systemd[1]: Detected first boot.
Feb 9 10:08:23.767938 systemd[1]: Initializing machine ID from VM UUID.
Feb 9 10:08:23.767948 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped).
Feb 9 10:08:23.767957 systemd[1]: Populated /etc with preset unit settings.
Feb 9 10:08:23.767969 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Feb 9 10:08:23.767982 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 9 10:08:23.767993 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 9 10:08:23.768005 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Feb 9 10:08:23.768016 systemd[1]: Stopped initrd-switch-root.service.
Feb 9 10:08:23.768026 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Feb 9 10:08:23.768036 systemd[1]: Created slice system-addon\x2dconfig.slice.
Feb 9 10:08:23.768048 systemd[1]: Created slice system-addon\x2drun.slice.
Feb 9 10:08:23.768058 systemd[1]: Created slice system-getty.slice.
Feb 9 10:08:23.768068 systemd[1]: Created slice system-modprobe.slice.
Feb 9 10:08:23.768079 systemd[1]: Created slice system-serial\x2dgetty.slice.
Feb 9 10:08:23.768089 systemd[1]: Created slice system-system\x2dcloudinit.slice.
Feb 9 10:08:23.768099 systemd[1]: Created slice system-systemd\x2dfsck.slice.
Feb 9 10:08:23.768109 systemd[1]: Created slice user.slice.
Feb 9 10:08:23.768120 systemd[1]: Started systemd-ask-password-console.path.
Feb 9 10:08:23.768130 systemd[1]: Started systemd-ask-password-wall.path.
Feb 9 10:08:23.768141 systemd[1]: Set up automount boot.automount.
Feb 9 10:08:23.768151 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount.
Feb 9 10:08:23.768161 systemd[1]: Stopped target initrd-switch-root.target.
Feb 9 10:08:23.768173 systemd[1]: Stopped target initrd-fs.target.
Feb 9 10:08:23.768183 systemd[1]: Stopped target initrd-root-fs.target.
Feb 9 10:08:23.768194 systemd[1]: Reached target integritysetup.target.
Feb 9 10:08:23.768204 systemd[1]: Reached target remote-cryptsetup.target.
Feb 9 10:08:23.768214 systemd[1]: Reached target remote-fs.target.
Feb 9 10:08:23.768226 systemd[1]: Reached target slices.target.
Feb 9 10:08:23.768236 systemd[1]: Reached target swap.target.
Feb 9 10:08:23.768248 systemd[1]: Reached target torcx.target.
Feb 9 10:08:23.768259 systemd[1]: Reached target veritysetup.target.
Feb 9 10:08:23.768269 systemd[1]: Listening on systemd-coredump.socket.
Feb 9 10:08:23.768279 systemd[1]: Listening on systemd-initctl.socket.
Feb 9 10:08:23.768289 systemd[1]: Listening on systemd-networkd.socket.
Feb 9 10:08:23.768300 systemd[1]: Listening on systemd-udevd-control.socket.
Feb 9 10:08:23.768310 systemd[1]: Listening on systemd-udevd-kernel.socket.
Feb 9 10:08:23.768321 systemd[1]: Listening on systemd-userdbd.socket.
Feb 9 10:08:23.768332 systemd[1]: Mounting dev-hugepages.mount...
Feb 9 10:08:23.768342 systemd[1]: Mounting dev-mqueue.mount...
Feb 9 10:08:23.768352 systemd[1]: Mounting media.mount...
Feb 9 10:08:23.768367 systemd[1]: Mounting sys-kernel-debug.mount...
Feb 9 10:08:23.768377 systemd[1]: Mounting sys-kernel-tracing.mount...
Feb 9 10:08:23.768388 systemd[1]: Mounting tmp.mount...
Feb 9 10:08:23.768398 systemd[1]: Starting flatcar-tmpfiles.service...
Feb 9 10:08:23.768408 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Feb 9 10:08:23.768418 systemd[1]: Starting kmod-static-nodes.service...
Feb 9 10:08:23.768430 systemd[1]: Starting modprobe@configfs.service...
Feb 9 10:08:23.768440 systemd[1]: Starting modprobe@dm_mod.service...
Feb 9 10:08:23.768450 systemd[1]: Starting modprobe@drm.service...
Feb 9 10:08:23.768461 systemd[1]: Starting modprobe@efi_pstore.service...
Feb 9 10:08:23.768471 systemd[1]: Starting modprobe@fuse.service...
Feb 9 10:08:23.768481 systemd[1]: Starting modprobe@loop.service...
Feb 9 10:08:23.768492 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Feb 9 10:08:23.768502 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Feb 9 10:08:23.768512 systemd[1]: Stopped systemd-fsck-root.service.
Feb 9 10:08:23.768523 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Feb 9 10:08:23.768533 systemd[1]: Stopped systemd-fsck-usr.service.
Feb 9 10:08:23.768543 systemd[1]: Stopped systemd-journald.service.
Feb 9 10:08:23.768553 kernel: loop: module loaded
Feb 9 10:08:23.768565 systemd[1]: Starting systemd-journald.service...
Feb 9 10:08:23.768577 kernel: fuse: init (API version 7.34)
Feb 9 10:08:23.768588 systemd[1]: Starting systemd-modules-load.service...
Feb 9 10:08:23.768598 systemd[1]: Starting systemd-network-generator.service...
Feb 9 10:08:23.768608 systemd[1]: Starting systemd-remount-fs.service...
Feb 9 10:08:23.768619 systemd[1]: Starting systemd-udev-trigger.service...
Feb 9 10:08:23.768629 systemd[1]: verity-setup.service: Deactivated successfully.
Feb 9 10:08:23.768641 systemd[1]: Stopped verity-setup.service.
Feb 9 10:08:23.768651 systemd[1]: Mounted dev-hugepages.mount.
Feb 9 10:08:23.768661 systemd[1]: Mounted dev-mqueue.mount.
Feb 9 10:08:23.768673 systemd-journald[1001]: Journal started
Feb 9 10:08:23.768713 systemd-journald[1001]: Runtime Journal (/run/log/journal/362411a1c9bf47d2b19aa3d17147a7b0) is 6.0M, max 48.7M, 42.6M free.
Feb 9 10:08:21.781000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1
Feb 9 10:08:21.955000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Feb 9 10:08:21.955000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Feb 9 10:08:21.955000 audit: BPF prog-id=10 op=LOAD
Feb 9 10:08:21.955000 audit: BPF prog-id=10 op=UNLOAD
Feb 9 10:08:21.955000 audit: BPF prog-id=11 op=LOAD
Feb 9 10:08:21.955000 audit: BPF prog-id=11 op=UNLOAD
Feb 9 10:08:21.995000 audit[931]: AVC avc: denied { associate } for pid=931 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023"
Feb 9 10:08:21.995000 audit[931]: SYSCALL arch=c00000b7 syscall=5 success=yes exit=0 a0=40001c58b2 a1=40000c8de0 a2=40000cf0c0 a3=32 items=0 ppid=914 pid=931 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 10:08:21.995000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61
Feb 9 10:08:21.995000 audit[931]: AVC avc: denied { associate } for pid=931 comm="torcx-generator" name="usr" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1
Feb 9 10:08:21.995000 audit[931]: SYSCALL arch=c00000b7 syscall=34 success=yes exit=0 a0=ffffffffffffff9c a1=40001c5989 a2=1ed a3=0 items=2 ppid=914 pid=931 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 10:08:21.995000 audit: CWD cwd="/"
Feb 9 10:08:21.995000 audit: PATH item=0 name=(null) inode=2 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 10:08:21.995000 audit: PATH item=1 name=(null) inode=3 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 10:08:21.995000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61
Feb 9 10:08:23.661000 audit: BPF prog-id=12 op=LOAD
Feb 9 10:08:23.661000 audit: BPF prog-id=3 op=UNLOAD
Feb 9 10:08:23.661000 audit: BPF prog-id=13 op=LOAD
Feb 9 10:08:23.661000 audit: BPF prog-id=14 op=LOAD
Feb 9 10:08:23.661000 audit: BPF prog-id=4 op=UNLOAD
Feb 9 10:08:23.661000 audit: BPF prog-id=5 op=UNLOAD
Feb 9 10:08:23.663000 audit: BPF prog-id=15 op=LOAD
Feb 9 10:08:23.663000 audit: BPF prog-id=12 op=UNLOAD
Feb 9 10:08:23.663000 audit: BPF prog-id=16 op=LOAD
Feb 9 10:08:23.663000 audit: BPF prog-id=17 op=LOAD
Feb 9 10:08:23.663000 audit: BPF prog-id=13 op=UNLOAD
Feb 9 10:08:23.663000 audit: BPF prog-id=14 op=UNLOAD
Feb 9 10:08:23.663000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:08:23.667000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:08:23.667000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:08:23.677000 audit: BPF prog-id=15 op=UNLOAD
Feb 9 10:08:23.745000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:08:23.747000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:08:23.749000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:08:23.749000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:08:23.749000 audit: BPF prog-id=18 op=LOAD
Feb 9 10:08:23.749000 audit: BPF prog-id=19 op=LOAD
Feb 9 10:08:23.749000 audit: BPF prog-id=20 op=LOAD
Feb 9 10:08:23.749000 audit: BPF prog-id=16 op=UNLOAD
Feb 9 10:08:23.749000 audit: BPF prog-id=17 op=UNLOAD
Feb 9 10:08:23.764000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:08:23.765000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1
Feb 9 10:08:23.765000 audit[1001]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=60 a0=5 a1=fffffbcd14e0 a2=4000 a3=1 items=0 ppid=1 pid=1001 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 10:08:23.765000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald"
Feb 9 10:08:23.661047 systemd[1]: Queued start job for default target multi-user.target.
Feb 9 10:08:21.993892 /usr/lib/systemd/system-generators/torcx-generator[931]: time="2024-02-09T10:08:21Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]"
Feb 9 10:08:23.661059 systemd[1]: Unnecessary job was removed for dev-vda6.device.
Feb 9 10:08:21.994388 /usr/lib/systemd/system-generators/torcx-generator[931]: time="2024-02-09T10:08:21Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json
Feb 9 10:08:23.664443 systemd[1]: systemd-journald.service: Deactivated successfully.
Feb 9 10:08:21.994409 /usr/lib/systemd/system-generators/torcx-generator[931]: time="2024-02-09T10:08:21Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json
Feb 9 10:08:21.994439 /usr/lib/systemd/system-generators/torcx-generator[931]: time="2024-02-09T10:08:21Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12"
Feb 9 10:08:21.994448 /usr/lib/systemd/system-generators/torcx-generator[931]: time="2024-02-09T10:08:21Z" level=debug msg="skipped missing lower profile" missing profile=oem
Feb 9 10:08:21.994477 /usr/lib/systemd/system-generators/torcx-generator[931]: time="2024-02-09T10:08:21Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory"
Feb 9 10:08:21.994488 /usr/lib/systemd/system-generators/torcx-generator[931]: time="2024-02-09T10:08:21Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)=
Feb 9 10:08:21.994705 /usr/lib/systemd/system-generators/torcx-generator[931]: time="2024-02-09T10:08:21Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack
Feb 9 10:08:21.994739 /usr/lib/systemd/system-generators/torcx-generator[931]: time="2024-02-09T10:08:21Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json
Feb 9 10:08:21.994751 /usr/lib/systemd/system-generators/torcx-generator[931]: time="2024-02-09T10:08:21Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json
Feb 9 10:08:23.771326 systemd[1]: Started systemd-journald.service.
Feb 9 10:08:21.995379 /usr/lib/systemd/system-generators/torcx-generator[931]: time="2024-02-09T10:08:21Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10
Feb 9 10:08:21.995417 /usr/lib/systemd/system-generators/torcx-generator[931]: time="2024-02-09T10:08:21Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl
Feb 9 10:08:21.995435 /usr/lib/systemd/system-generators/torcx-generator[931]: time="2024-02-09T10:08:21Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.2: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.2
Feb 9 10:08:23.771000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:08:21.995451 /usr/lib/systemd/system-generators/torcx-generator[931]: time="2024-02-09T10:08:21Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store
Feb 9 10:08:21.995467 /usr/lib/systemd/system-generators/torcx-generator[931]: time="2024-02-09T10:08:21Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.2: no such file or directory" path=/var/lib/torcx/store/3510.3.2
Feb 9 10:08:21.995481 /usr/lib/systemd/system-generators/torcx-generator[931]: time="2024-02-09T10:08:21Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store
Feb 9 10:08:23.418492 /usr/lib/systemd/system-generators/torcx-generator[931]: time="2024-02-09T10:08:23Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Feb 9 10:08:23.418760 /usr/lib/systemd/system-generators/torcx-generator[931]: time="2024-02-09T10:08:23Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Feb 9 10:08:23.771922 systemd[1]: Mounted media.mount.
Feb 9 10:08:23.418898 /usr/lib/systemd/system-generators/torcx-generator[931]: time="2024-02-09T10:08:23Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Feb 9 10:08:23.419056 /usr/lib/systemd/system-generators/torcx-generator[931]: time="2024-02-09T10:08:23Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Feb 9 10:08:23.419104 /usr/lib/systemd/system-generators/torcx-generator[931]: time="2024-02-09T10:08:23Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile=
Feb 9 10:08:23.419159 /usr/lib/systemd/system-generators/torcx-generator[931]: time="2024-02-09T10:08:23Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx
Feb 9 10:08:23.772868 systemd[1]: Mounted sys-kernel-debug.mount.
Feb 9 10:08:23.773724 systemd[1]: Mounted sys-kernel-tracing.mount.
Feb 9 10:08:23.774694 systemd[1]: Mounted tmp.mount.
Feb 9 10:08:23.775753 systemd[1]: Finished kmod-static-nodes.service.
Feb 9 10:08:23.776000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:08:23.777017 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Feb 9 10:08:23.777203 systemd[1]: Finished modprobe@configfs.service.
Feb 9 10:08:23.777000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:08:23.777000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:08:23.779000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:08:23.779000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:08:23.780000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:08:23.780000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:08:23.780000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:08:23.780000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:08:23.778262 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb 9 10:08:23.778404 systemd[1]: Finished modprobe@dm_mod.service.
Feb 9 10:08:23.782000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:08:23.782000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:08:23.779510 systemd[1]: modprobe@drm.service: Deactivated successfully.
Feb 9 10:08:23.779676 systemd[1]: Finished modprobe@drm.service.
Feb 9 10:08:23.780717 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 9 10:08:23.780900 systemd[1]: Finished modprobe@efi_pstore.service.
Feb 9 10:08:23.782116 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Feb 9 10:08:23.782277 systemd[1]: Finished modprobe@fuse.service.
Feb 9 10:08:23.783475 systemd[1]: Finished flatcar-tmpfiles.service.
Feb 9 10:08:23.784000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:08:23.784506 systemd[1]: modprobe@loop.service: Deactivated successfully.
Feb 9 10:08:23.784652 systemd[1]: Finished modprobe@loop.service.
Feb 9 10:08:23.785000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:08:23.785000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:08:23.785920 systemd[1]: Finished systemd-modules-load.service.
Feb 9 10:08:23.785000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:08:23.787055 systemd[1]: Finished systemd-network-generator.service.
Feb 9 10:08:23.787000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:08:23.788425 systemd[1]: Finished systemd-remount-fs.service.
Feb 9 10:08:23.789000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:08:23.789888 systemd[1]: Reached target network-pre.target.
Feb 9 10:08:23.792014 systemd[1]: Mounting sys-fs-fuse-connections.mount...
Feb 9 10:08:23.793793 systemd[1]: Mounting sys-kernel-config.mount...
Feb 9 10:08:23.794545 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Feb 9 10:08:23.797471 systemd[1]: Starting systemd-hwdb-update.service...
Feb 9 10:08:23.799217 systemd[1]: Starting systemd-journal-flush.service...
Feb 9 10:08:23.800145 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Feb 9 10:08:23.801140 systemd[1]: Starting systemd-random-seed.service...
Feb 9 10:08:23.801902 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Feb 9 10:08:23.802976 systemd[1]: Starting systemd-sysctl.service...
Feb 9 10:08:23.804879 systemd[1]: Starting systemd-sysusers.service...
Feb 9 10:08:23.808173 systemd[1]: Mounted sys-fs-fuse-connections.mount.
Feb 9 10:08:23.809190 systemd[1]: Mounted sys-kernel-config.mount.
Feb 9 10:08:23.814176 systemd[1]: Finished systemd-udev-trigger.service.
Feb 9 10:08:23.815000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:08:23.816454 systemd[1]: Starting systemd-udev-settle.service...
Feb 9 10:08:23.822248 systemd-journald[1001]: Time spent on flushing to /var/log/journal/362411a1c9bf47d2b19aa3d17147a7b0 is 14.717ms for 1034 entries.
Feb 9 10:08:23.822248 systemd-journald[1001]: System Journal (/var/log/journal/362411a1c9bf47d2b19aa3d17147a7b0) is 8.0M, max 195.6M, 187.6M free.
Feb 9 10:08:23.845460 systemd-journald[1001]: Received client request to flush runtime journal.
Feb 9 10:08:23.823000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:08:23.825000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:08:23.838000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:08:23.822943 systemd[1]: Finished systemd-random-seed.service.
Feb 9 10:08:23.845916 udevadm[1032]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Feb 9 10:08:23.824507 systemd[1]: Finished systemd-sysctl.service.
Feb 9 10:08:23.827206 systemd[1]: Reached target first-boot-complete.target.
Feb 9 10:08:23.837260 systemd[1]: Finished systemd-sysusers.service.
Feb 9 10:08:23.846294 systemd[1]: Finished systemd-journal-flush.service.
Feb 9 10:08:23.847000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:08:24.180905 systemd[1]: Finished systemd-hwdb-update.service.
Feb 9 10:08:24.180000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:08:24.181000 audit: BPF prog-id=21 op=LOAD
Feb 9 10:08:24.181000 audit: BPF prog-id=22 op=LOAD
Feb 9 10:08:24.181000 audit: BPF prog-id=7 op=UNLOAD
Feb 9 10:08:24.181000 audit: BPF prog-id=8 op=UNLOAD
Feb 9 10:08:24.183151 systemd[1]: Starting systemd-udevd.service...
Feb 9 10:08:24.201871 systemd-udevd[1035]: Using default interface naming scheme 'v252'.
Feb 9 10:08:24.214349 systemd[1]: Started systemd-udevd.service.
Feb 9 10:08:24.214000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:08:24.218000 audit: BPF prog-id=23 op=LOAD
Feb 9 10:08:24.219354 systemd[1]: Starting systemd-networkd.service...
Feb 9 10:08:24.226000 audit: BPF prog-id=24 op=LOAD
Feb 9 10:08:24.226000 audit: BPF prog-id=25 op=LOAD
Feb 9 10:08:24.226000 audit: BPF prog-id=26 op=LOAD
Feb 9 10:08:24.227978 systemd[1]: Starting systemd-userdbd.service...
Feb 9 10:08:24.244855 systemd[1]: Condition check resulted in dev-ttyAMA0.device being skipped.
Feb 9 10:08:24.259266 systemd[1]: Started systemd-userdbd.service.
Feb 9 10:08:24.259000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:08:24.302649 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Feb 9 10:08:24.322215 systemd[1]: Finished systemd-udev-settle.service.
Feb 9 10:08:24.322000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:08:24.324240 systemd[1]: Starting lvm2-activation-early.service...
Feb 9 10:08:24.328764 systemd-networkd[1052]: lo: Link UP
Feb 9 10:08:24.329127 systemd-networkd[1052]: lo: Gained carrier
Feb 9 10:08:24.329565 systemd-networkd[1052]: Enumeration completed
Feb 9 10:08:24.329771 systemd[1]: Started systemd-networkd.service.
Feb 9 10:08:24.329902 systemd-networkd[1052]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 9 10:08:24.329000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 10:08:24.338698 systemd-networkd[1052]: eth0: Link UP Feb 9 10:08:24.338829 systemd-networkd[1052]: eth0: Gained carrier Feb 9 10:08:24.340061 lvm[1068]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 9 10:08:24.360948 systemd-networkd[1052]: eth0: DHCPv4 address 10.0.0.132/16, gateway 10.0.0.1 acquired from 10.0.0.1 Feb 9 10:08:24.362601 systemd[1]: Finished lvm2-activation-early.service. Feb 9 10:08:24.362000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 10:08:24.363442 systemd[1]: Reached target cryptsetup.target. Feb 9 10:08:24.365240 systemd[1]: Starting lvm2-activation.service... Feb 9 10:08:24.368734 lvm[1069]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 9 10:08:24.392761 systemd[1]: Finished lvm2-activation.service. Feb 9 10:08:24.393000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 10:08:24.393552 systemd[1]: Reached target local-fs-pre.target. Feb 9 10:08:24.394208 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Feb 9 10:08:24.394236 systemd[1]: Reached target local-fs.target. Feb 9 10:08:24.394787 systemd[1]: Reached target machines.target. Feb 9 10:08:24.396646 systemd[1]: Starting ldconfig.service... Feb 9 10:08:24.397563 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Feb 9 10:08:24.397618 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 9 10:08:24.398698 systemd[1]: Starting systemd-boot-update.service... Feb 9 10:08:24.400677 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Feb 9 10:08:24.403243 systemd[1]: Starting systemd-machine-id-commit.service... Feb 9 10:08:24.404289 systemd[1]: systemd-sysext.service was skipped because no trigger condition checks were met. Feb 9 10:08:24.404346 systemd[1]: ensure-sysext.service was skipped because no trigger condition checks were met. Feb 9 10:08:24.405525 systemd[1]: Starting systemd-tmpfiles-setup.service... Feb 9 10:08:24.407549 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1071 (bootctl) Feb 9 10:08:24.408902 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Feb 9 10:08:24.415900 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Feb 9 10:08:24.420000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 10:08:24.489347 systemd-tmpfiles[1074]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Feb 9 10:08:24.491679 systemd-tmpfiles[1074]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Feb 9 10:08:24.495713 systemd-tmpfiles[1074]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Feb 9 10:08:24.504753 systemd-fsck[1080]: fsck.fat 4.2 (2021-01-31) Feb 9 10:08:24.504753 systemd-fsck[1080]: /dev/vda1: 236 files, 113719/258078 clusters Feb 9 10:08:24.508000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 10:08:24.507761 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Feb 9 10:08:24.510663 systemd[1]: Mounting boot.mount... Feb 9 10:08:24.522927 systemd[1]: Mounted boot.mount. Feb 9 10:08:24.564344 systemd[1]: Finished systemd-boot-update.service. Feb 9 10:08:24.564000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 10:08:24.567588 systemd[1]: Finished systemd-machine-id-commit.service. Feb 9 10:08:24.568000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 10:08:24.631474 ldconfig[1070]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Feb 9 10:08:24.635638 systemd[1]: Finished ldconfig.service. Feb 9 10:08:24.635000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 10:08:24.638135 systemd[1]: Finished systemd-tmpfiles-setup.service. Feb 9 10:08:24.637000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 10:08:24.640139 systemd[1]: Starting audit-rules.service... Feb 9 10:08:24.641933 systemd[1]: Starting clean-ca-certificates.service... Feb 9 10:08:24.643967 systemd[1]: Starting systemd-journal-catalog-update.service... Feb 9 10:08:24.647000 audit: BPF prog-id=27 op=LOAD Feb 9 10:08:24.650368 systemd[1]: Starting systemd-resolved.service... Feb 9 10:08:24.651000 audit: BPF prog-id=28 op=LOAD Feb 9 10:08:24.653468 systemd[1]: Starting systemd-timesyncd.service... Feb 9 10:08:24.655449 systemd[1]: Starting systemd-update-utmp.service... Feb 9 10:08:24.656920 systemd[1]: Finished clean-ca-certificates.service. Feb 9 10:08:24.657000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 10:08:24.658128 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). 
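systemd-tmpfiles warns above about duplicate entries for /run/lock, /root and /var/lib/systemd: when two tmpfiles.d lines target the same path, the first line read wins and later ones are ignored. A rough illustration of that first-match-wins merge, using hypothetical entries (the field layout follows tmpfiles.d(5): type path mode user group age):

#!/usr/bin/env python3
"""First-match-wins merge of tmpfiles.d-style lines (illustrative sketch)."""

# Hypothetical entries, in the order systemd-tmpfiles would read them.
ENTRIES = [
    ("/usr/lib/tmpfiles.d/systemd.conf", "d /run/lock 0755 root root -"),
    ("/usr/lib/tmpfiles.d/legacy.conf",  "d /run/lock 1777 root root -"),  # duplicate, ignored
]

def merge(entries):
    seen = {}
    for source, line in entries:
        path = line.split()[1]
        if path in seen:
            print(f'{source}: Duplicate line for path "{path}", ignoring.')
            continue
        seen[path] = (source, line)
    return seen

if __name__ == "__main__":
    for path, (source, line) in merge(ENTRIES).items():
        print(f"{path}: kept entry from {source}: {line}")
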
Feb 9 10:08:24.661000 audit[1094]: SYSTEM_BOOT pid=1094 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Feb 9 10:08:24.667043 systemd[1]: Finished systemd-journal-catalog-update.service. Feb 9 10:08:24.667000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 10:08:24.669801 systemd[1]: Starting systemd-update-done.service... Feb 9 10:08:24.671000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 10:08:24.670998 systemd[1]: Finished systemd-update-utmp.service. Feb 9 10:08:24.676483 systemd[1]: Finished systemd-update-done.service. Feb 9 10:08:24.676000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-done comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 10:08:24.706133 systemd-timesyncd[1093]: Contacted time server 10.0.0.1:123 (10.0.0.1). Feb 9 10:08:24.706197 systemd-timesyncd[1093]: Initial clock synchronization to Fri 2024-02-09 10:08:24.446547 UTC. Feb 9 10:08:24.707871 systemd[1]: Started systemd-timesyncd.service. Feb 9 10:08:24.707000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-timesyncd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 10:08:24.708856 systemd[1]: Reached target time-set.target. Feb 9 10:08:24.708000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Feb 9 10:08:24.708000 audit[1104]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffc70c07e0 a2=420 a3=0 items=0 ppid=1083 pid=1104 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 10:08:24.708000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Feb 9 10:08:24.709857 augenrules[1104]: No rules Feb 9 10:08:24.710782 systemd[1]: Finished audit-rules.service. Feb 9 10:08:24.712678 systemd-resolved[1087]: Positive Trust Anchors: Feb 9 10:08:24.712689 systemd-resolved[1087]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 9 10:08:24.712716 systemd-resolved[1087]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Feb 9 10:08:24.722513 systemd-resolved[1087]: Defaulting to hostname 'linux'. Feb 9 10:08:24.723959 systemd[1]: Started systemd-resolved.service. Feb 9 10:08:24.724623 systemd[1]: Reached target network.target. 
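The PROCTITLE record emitted when augenrules ran is the process title hex-encoded, with NUL bytes separating the arguments. Decoding the value logged above recovers the exact auditctl invocation; a one-file sketch:

#!/usr/bin/env python3
"""Decode an audit PROCTITLE hex string into its argv (value from the log above)."""

PROCTITLE = "2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573"

# NUL-separated argv, hex-encoded by the kernel audit subsystem.
argv = bytes.fromhex(PROCTITLE).split(b"\x00")
print([arg.decode() for arg in argv])
# ['/sbin/auditctl', '-R', '/etc/audit/audit.rules']
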
Feb 9 10:08:24.725200 systemd[1]: Reached target nss-lookup.target. Feb 9 10:08:24.725757 systemd[1]: Reached target sysinit.target. Feb 9 10:08:24.726408 systemd[1]: Started motdgen.path. Feb 9 10:08:24.726957 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Feb 9 10:08:24.727904 systemd[1]: Started logrotate.timer. Feb 9 10:08:24.728518 systemd[1]: Started mdadm.timer. Feb 9 10:08:24.729118 systemd[1]: Started systemd-tmpfiles-clean.timer. Feb 9 10:08:24.729723 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Feb 9 10:08:24.729749 systemd[1]: Reached target paths.target. Feb 9 10:08:24.730320 systemd[1]: Reached target timers.target. Feb 9 10:08:24.731198 systemd[1]: Listening on dbus.socket. Feb 9 10:08:24.732758 systemd[1]: Starting docker.socket... Feb 9 10:08:24.735703 systemd[1]: Listening on sshd.socket. Feb 9 10:08:24.736392 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 9 10:08:24.736822 systemd[1]: Listening on docker.socket. Feb 9 10:08:24.737650 systemd[1]: Reached target sockets.target. Feb 9 10:08:24.738466 systemd[1]: Reached target basic.target. Feb 9 10:08:24.739262 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Feb 9 10:08:24.739292 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Feb 9 10:08:24.740240 systemd[1]: Starting containerd.service... Feb 9 10:08:24.742107 systemd[1]: Starting dbus.service... Feb 9 10:08:24.743804 systemd[1]: Starting enable-oem-cloudinit.service... Feb 9 10:08:24.746037 systemd[1]: Starting extend-filesystems.service... Feb 9 10:08:24.746720 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Feb 9 10:08:24.747956 systemd[1]: Starting motdgen.service... Feb 9 10:08:24.749542 systemd[1]: Starting prepare-cni-plugins.service... Feb 9 10:08:24.753773 systemd[1]: Starting prepare-critools.service... Feb 9 10:08:24.755876 jq[1114]: false Feb 9 10:08:24.755648 systemd[1]: Starting prepare-helm.service... Feb 9 10:08:24.757353 systemd[1]: Starting ssh-key-proc-cmdline.service... Feb 9 10:08:24.759361 systemd[1]: Starting sshd-keygen.service... Feb 9 10:08:24.762126 systemd[1]: Starting systemd-logind.service... Feb 9 10:08:24.762896 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 9 10:08:24.762970 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Feb 9 10:08:24.763387 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Feb 9 10:08:24.764216 systemd[1]: Starting update-engine.service... Feb 9 10:08:24.766679 systemd[1]: Starting update-ssh-keys-after-ignition.service... Feb 9 10:08:24.769278 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Feb 9 10:08:24.770735 jq[1134]: true Feb 9 10:08:24.770973 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Feb 9 10:08:24.771154 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. 
Feb 9 10:08:24.773540 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Feb 9 10:08:24.773703 systemd[1]: Finished ssh-key-proc-cmdline.service. Feb 9 10:08:24.780805 jq[1140]: true Feb 9 10:08:24.786226 extend-filesystems[1115]: Found vda Feb 9 10:08:24.786226 extend-filesystems[1115]: Found vda1 Feb 9 10:08:24.786226 extend-filesystems[1115]: Found vda2 Feb 9 10:08:24.786226 extend-filesystems[1115]: Found vda3 Feb 9 10:08:24.786226 extend-filesystems[1115]: Found usr Feb 9 10:08:24.786226 extend-filesystems[1115]: Found vda4 Feb 9 10:08:24.786226 extend-filesystems[1115]: Found vda6 Feb 9 10:08:24.786226 extend-filesystems[1115]: Found vda7 Feb 9 10:08:24.786226 extend-filesystems[1115]: Found vda9 Feb 9 10:08:24.786226 extend-filesystems[1115]: Checking size of /dev/vda9 Feb 9 10:08:24.804300 systemd[1]: Started dbus.service. Feb 9 10:08:24.804115 dbus-daemon[1113]: [system] SELinux support is enabled Feb 9 10:08:24.807640 tar[1138]: crictl Feb 9 10:08:24.807789 tar[1137]: ./ Feb 9 10:08:24.807789 tar[1137]: ./loopback Feb 9 10:08:24.807968 tar[1139]: linux-arm64/helm Feb 9 10:08:24.807387 systemd[1]: motdgen.service: Deactivated successfully. Feb 9 10:08:24.807546 systemd[1]: Finished motdgen.service. Feb 9 10:08:24.808540 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Feb 9 10:08:24.808576 systemd[1]: Reached target system-config.target. Feb 9 10:08:24.809494 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Feb 9 10:08:24.809513 systemd[1]: Reached target user-config.target. Feb 9 10:08:24.838208 extend-filesystems[1115]: Resized partition /dev/vda9 Feb 9 10:08:24.841964 extend-filesystems[1168]: resize2fs 1.46.5 (30-Dec-2021) Feb 9 10:08:24.863072 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Feb 9 10:08:24.864343 systemd-logind[1130]: Watching system buttons on /dev/input/event0 (Power Button) Feb 9 10:08:24.865000 systemd-logind[1130]: New seat seat0. Feb 9 10:08:24.867642 systemd[1]: Started systemd-logind.service. Feb 9 10:08:24.887108 update_engine[1133]: I0209 10:08:24.886878 1133 main.cc:92] Flatcar Update Engine starting Feb 9 10:08:24.891213 update_engine[1133]: I0209 10:08:24.889681 1133 update_check_scheduler.cc:74] Next update check in 11m7s Feb 9 10:08:24.889799 systemd[1]: Started update-engine.service. Feb 9 10:08:24.892624 systemd[1]: Started locksmithd.service. Feb 9 10:08:24.900204 bash[1165]: Updated "/home/core/.ssh/authorized_keys" Feb 9 10:08:24.901521 systemd[1]: Finished update-ssh-keys-after-ignition.service. Feb 9 10:08:24.901871 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Feb 9 10:08:24.923843 extend-filesystems[1168]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Feb 9 10:08:24.923843 extend-filesystems[1168]: old_desc_blocks = 1, new_desc_blocks = 1 Feb 9 10:08:24.923843 extend-filesystems[1168]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Feb 9 10:08:24.922503 systemd[1]: extend-filesystems.service: Deactivated successfully. Feb 9 10:08:24.926497 extend-filesystems[1115]: Resized filesystem in /dev/vda9 Feb 9 10:08:24.922662 systemd[1]: Finished extend-filesystems.service. 
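extend-filesystems grows /dev/vda9 online from 553472 to 1864699 blocks, and resize2fs reports the filesystem uses 4k blocks. A small sketch reproducing the arithmetic behind those block counts:

#!/usr/bin/env python3
"""Convert the resize2fs block counts logged above into human-readable sizes."""

BLOCK_SIZE = 4096   # resize2fs reports "(4k) blocks" for this filesystem
OLD_BLOCKS = 553472
NEW_BLOCKS = 1864699

def gib(blocks):
    return blocks * BLOCK_SIZE / 2**30

print(f"before: {gib(OLD_BLOCKS):.2f} GiB, after: {gib(NEW_BLOCKS):.2f} GiB")
# before: 2.11 GiB, after: 7.11 GiB
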
Feb 9 10:08:24.931189 tar[1137]: ./bandwidth Feb 9 10:08:24.953379 env[1141]: time="2024-02-09T10:08:24.953327120Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Feb 9 10:08:24.967148 tar[1137]: ./ptp Feb 9 10:08:24.975323 env[1141]: time="2024-02-09T10:08:24.975272800Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Feb 9 10:08:24.975443 env[1141]: time="2024-02-09T10:08:24.975420880Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Feb 9 10:08:24.976754 env[1141]: time="2024-02-09T10:08:24.976717520Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.148-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Feb 9 10:08:24.976754 env[1141]: time="2024-02-09T10:08:24.976747880Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Feb 9 10:08:24.976998 env[1141]: time="2024-02-09T10:08:24.976966480Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 9 10:08:24.976998 env[1141]: time="2024-02-09T10:08:24.976992280Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Feb 9 10:08:24.977057 env[1141]: time="2024-02-09T10:08:24.977005240Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Feb 9 10:08:24.977057 env[1141]: time="2024-02-09T10:08:24.977014600Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Feb 9 10:08:24.977108 env[1141]: time="2024-02-09T10:08:24.977093240Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Feb 9 10:08:24.977379 env[1141]: time="2024-02-09T10:08:24.977351120Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Feb 9 10:08:24.977501 env[1141]: time="2024-02-09T10:08:24.977476880Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 9 10:08:24.977501 env[1141]: time="2024-02-09T10:08:24.977496320Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Feb 9 10:08:24.977569 env[1141]: time="2024-02-09T10:08:24.977550440Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Feb 9 10:08:24.977569 env[1141]: time="2024-02-09T10:08:24.977566080Z" level=info msg="metadata content store policy set" policy=shared Feb 9 10:08:24.980616 env[1141]: time="2024-02-09T10:08:24.980587280Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Feb 9 10:08:24.980675 env[1141]: time="2024-02-09T10:08:24.980618200Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." 
type=io.containerd.event.v1 Feb 9 10:08:24.980675 env[1141]: time="2024-02-09T10:08:24.980632080Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Feb 9 10:08:24.980675 env[1141]: time="2024-02-09T10:08:24.980662160Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Feb 9 10:08:24.980744 env[1141]: time="2024-02-09T10:08:24.980676600Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Feb 9 10:08:24.980744 env[1141]: time="2024-02-09T10:08:24.980690280Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Feb 9 10:08:24.980744 env[1141]: time="2024-02-09T10:08:24.980703640Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Feb 9 10:08:24.981090 env[1141]: time="2024-02-09T10:08:24.981061600Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Feb 9 10:08:24.981142 env[1141]: time="2024-02-09T10:08:24.981091960Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Feb 9 10:08:24.981142 env[1141]: time="2024-02-09T10:08:24.981105760Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Feb 9 10:08:24.981142 env[1141]: time="2024-02-09T10:08:24.981118920Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Feb 9 10:08:24.981142 env[1141]: time="2024-02-09T10:08:24.981132600Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Feb 9 10:08:24.981258 env[1141]: time="2024-02-09T10:08:24.981236160Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Feb 9 10:08:24.981333 env[1141]: time="2024-02-09T10:08:24.981316040Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Feb 9 10:08:24.981532 env[1141]: time="2024-02-09T10:08:24.981511720Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Feb 9 10:08:24.981573 env[1141]: time="2024-02-09T10:08:24.981538040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Feb 9 10:08:24.981573 env[1141]: time="2024-02-09T10:08:24.981558440Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Feb 9 10:08:24.981743 env[1141]: time="2024-02-09T10:08:24.981724520Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Feb 9 10:08:24.981785 env[1141]: time="2024-02-09T10:08:24.981742880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Feb 9 10:08:24.981785 env[1141]: time="2024-02-09T10:08:24.981755920Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Feb 9 10:08:24.981785 env[1141]: time="2024-02-09T10:08:24.981767000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Feb 9 10:08:24.981856 env[1141]: time="2024-02-09T10:08:24.981786840Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." 
type=io.containerd.grpc.v1 Feb 9 10:08:24.981856 env[1141]: time="2024-02-09T10:08:24.981799520Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Feb 9 10:08:24.981856 env[1141]: time="2024-02-09T10:08:24.981822640Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Feb 9 10:08:24.981856 env[1141]: time="2024-02-09T10:08:24.981835760Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Feb 9 10:08:24.981856 env[1141]: time="2024-02-09T10:08:24.981848320Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Feb 9 10:08:24.981986 env[1141]: time="2024-02-09T10:08:24.981964320Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Feb 9 10:08:24.982018 env[1141]: time="2024-02-09T10:08:24.981990160Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Feb 9 10:08:24.982018 env[1141]: time="2024-02-09T10:08:24.982003240Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Feb 9 10:08:24.982018 env[1141]: time="2024-02-09T10:08:24.982013840Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Feb 9 10:08:24.982075 env[1141]: time="2024-02-09T10:08:24.982027040Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Feb 9 10:08:24.982075 env[1141]: time="2024-02-09T10:08:24.982038640Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Feb 9 10:08:24.982075 env[1141]: time="2024-02-09T10:08:24.982054120Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Feb 9 10:08:24.982134 env[1141]: time="2024-02-09T10:08:24.982085560Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Feb 9 10:08:24.982336 env[1141]: time="2024-02-09T10:08:24.982280200Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Feb 9 10:08:24.983059 env[1141]: time="2024-02-09T10:08:24.982337600Z" level=info msg="Connect containerd service" Feb 9 10:08:24.983059 env[1141]: time="2024-02-09T10:08:24.982369120Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Feb 9 10:08:24.983132 env[1141]: time="2024-02-09T10:08:24.983097840Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 9 10:08:24.983399 env[1141]: time="2024-02-09T10:08:24.983379200Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Feb 9 10:08:24.983440 env[1141]: time="2024-02-09T10:08:24.983420800Z" level=info msg=serving... address=/run/containerd/containerd.sock Feb 9 10:08:24.983477 env[1141]: time="2024-02-09T10:08:24.983462080Z" level=info msg="containerd successfully booted in 0.030805s" Feb 9 10:08:24.983535 systemd[1]: Started containerd.service. 
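containerd reports serving on /run/containerd/containerd.sock (plus a ttrpc socket) and booting in about 31 ms; the CRI plugin also logs that no CNI config exists yet under /etc/cni/net.d, which is expected before the CNI plugin tarballs above finish unpacking. A hedged liveness sketch that only checks whether the unix socket accepts connections (it does not speak the gRPC API, and it assumes enough privilege to open the socket):

#!/usr/bin/env python3
"""Check that a containerd unix socket accepts connections (sketch)."""
import socket

SOCK_PATH = "/run/containerd/containerd.sock"  # path as logged above

def socket_accepts(path, timeout=1.0):
    s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    s.settimeout(timeout)
    try:
        s.connect(path)
        return True
    except OSError:
        return False
    finally:
        s.close()

if __name__ == "__main__":
    print(f"{SOCK_PATH}: {'up' if socket_accepts(SOCK_PATH) else 'down'}")
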
Feb 9 10:08:24.984299 env[1141]: time="2024-02-09T10:08:24.984262440Z" level=info msg="Start subscribing containerd event" Feb 9 10:08:24.984348 env[1141]: time="2024-02-09T10:08:24.984309040Z" level=info msg="Start recovering state" Feb 9 10:08:24.984371 env[1141]: time="2024-02-09T10:08:24.984365440Z" level=info msg="Start event monitor" Feb 9 10:08:24.984404 env[1141]: time="2024-02-09T10:08:24.984383800Z" level=info msg="Start snapshots syncer" Feb 9 10:08:24.984404 env[1141]: time="2024-02-09T10:08:24.984394480Z" level=info msg="Start cni network conf syncer for default" Feb 9 10:08:24.984404 env[1141]: time="2024-02-09T10:08:24.984401520Z" level=info msg="Start streaming server" Feb 9 10:08:25.006417 tar[1137]: ./vlan Feb 9 10:08:25.039359 tar[1137]: ./host-device Feb 9 10:08:25.071690 tar[1137]: ./tuning Feb 9 10:08:25.101059 tar[1137]: ./vrf Feb 9 10:08:25.130889 tar[1137]: ./sbr Feb 9 10:08:25.159828 tar[1137]: ./tap Feb 9 10:08:25.193833 tar[1137]: ./dhcp Feb 9 10:08:25.217286 tar[1139]: linux-arm64/LICENSE Feb 9 10:08:25.217382 tar[1139]: linux-arm64/README.md Feb 9 10:08:25.221730 systemd[1]: Finished prepare-helm.service. Feb 9 10:08:25.228970 locksmithd[1170]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Feb 9 10:08:25.277151 tar[1137]: ./static Feb 9 10:08:25.301313 tar[1137]: ./firewall Feb 9 10:08:25.337852 tar[1137]: ./macvlan Feb 9 10:08:25.361265 systemd[1]: Finished prepare-critools.service. Feb 9 10:08:25.371635 tar[1137]: ./dummy Feb 9 10:08:25.404005 tar[1137]: ./bridge Feb 9 10:08:25.436221 tar[1137]: ./ipvlan Feb 9 10:08:25.463666 tar[1137]: ./portmap Feb 9 10:08:25.489806 tar[1137]: ./host-local Feb 9 10:08:25.525320 systemd[1]: Finished prepare-cni-plugins.service. Feb 9 10:08:25.747953 systemd-networkd[1052]: eth0: Gained IPv6LL Feb 9 10:08:26.872693 sshd_keygen[1135]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Feb 9 10:08:26.889536 systemd[1]: Finished sshd-keygen.service. Feb 9 10:08:26.891671 systemd[1]: Starting issuegen.service... Feb 9 10:08:26.895921 systemd[1]: issuegen.service: Deactivated successfully. Feb 9 10:08:26.896050 systemd[1]: Finished issuegen.service. Feb 9 10:08:26.897995 systemd[1]: Starting systemd-user-sessions.service... Feb 9 10:08:26.903274 systemd[1]: Finished systemd-user-sessions.service. Feb 9 10:08:26.905236 systemd[1]: Started getty@tty1.service. Feb 9 10:08:26.906986 systemd[1]: Started serial-getty@ttyAMA0.service. Feb 9 10:08:26.907950 systemd[1]: Reached target getty.target. Feb 9 10:08:26.908732 systemd[1]: Reached target multi-user.target. Feb 9 10:08:26.910508 systemd[1]: Starting systemd-update-utmp-runlevel.service... Feb 9 10:08:26.916142 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Feb 9 10:08:26.916276 systemd[1]: Finished systemd-update-utmp-runlevel.service. Feb 9 10:08:26.917269 systemd[1]: Startup finished in 614ms (kernel) + 6.169s (initrd) + 5.168s (userspace) = 11.953s. Feb 9 10:08:27.770040 systemd[1]: Created slice system-sshd.slice. Feb 9 10:08:27.771100 systemd[1]: Started sshd@0-10.0.0.132:22-10.0.0.1:59158.service. Feb 9 10:08:27.817545 sshd[1202]: Accepted publickey for core from 10.0.0.1 port 59158 ssh2: RSA SHA256:g0U6KM199woo3jVvTXJmbHJGWRxGxX9UCqO141QChXg Feb 9 10:08:27.821215 sshd[1202]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 10:08:27.829130 systemd[1]: Created slice user-500.slice. Feb 9 10:08:27.830256 systemd[1]: Starting user-runtime-dir@500.service... 
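The "Startup finished" line above breaks the boot into kernel, initrd and userspace phases. Note the printed total (11.953s) is computed from the raw monotonic timestamps, so it can differ by a few milliseconds from the sum of the individually rounded phases. A sketch parsing that line:

#!/usr/bin/env python3
"""Parse a systemd 'Startup finished' line into per-phase seconds (sketch)."""
import re

LINE = ("Startup finished in 614ms (kernel) + 6.169s (initrd) "
        "+ 5.168s (userspace) = 11.953s.")

PHASE_RE = re.compile(r"(\d+(?:\.\d+)?)(ms|s|min)\s+\((\w+)\)")
UNIT = {"ms": 0.001, "s": 1.0, "min": 60.0}

phases = {name: float(value) * UNIT[unit]
          for value, unit, name in PHASE_RE.findall(LINE)}
print(phases)  # {'kernel': 0.614, 'initrd': 6.169, 'userspace': 5.168}
print(f"sum of rounded phases: {sum(phases.values()):.3f}s (logged total: 11.953s)")
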
Feb 9 10:08:27.832005 systemd-logind[1130]: New session 1 of user core. Feb 9 10:08:27.837604 systemd[1]: Finished user-runtime-dir@500.service. Feb 9 10:08:27.838933 systemd[1]: Starting user@500.service... Feb 9 10:08:27.841421 (systemd)[1205]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 9 10:08:27.900046 systemd[1205]: Queued start job for default target default.target. Feb 9 10:08:27.900495 systemd[1205]: Reached target paths.target. Feb 9 10:08:27.900513 systemd[1205]: Reached target sockets.target. Feb 9 10:08:27.900524 systemd[1205]: Reached target timers.target. Feb 9 10:08:27.900533 systemd[1205]: Reached target basic.target. Feb 9 10:08:27.900580 systemd[1205]: Reached target default.target. Feb 9 10:08:27.900603 systemd[1205]: Startup finished in 53ms. Feb 9 10:08:27.901187 systemd[1]: Started user@500.service. Feb 9 10:08:27.909024 systemd[1]: Started session-1.scope. Feb 9 10:08:27.963354 systemd[1]: Started sshd@1-10.0.0.132:22-10.0.0.1:59174.service. Feb 9 10:08:28.015230 sshd[1214]: Accepted publickey for core from 10.0.0.1 port 59174 ssh2: RSA SHA256:g0U6KM199woo3jVvTXJmbHJGWRxGxX9UCqO141QChXg Feb 9 10:08:28.016371 sshd[1214]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 10:08:28.019885 systemd-logind[1130]: New session 2 of user core. Feb 9 10:08:28.020356 systemd[1]: Started session-2.scope. Feb 9 10:08:28.077273 sshd[1214]: pam_unix(sshd:session): session closed for user core Feb 9 10:08:28.080362 systemd[1]: sshd@1-10.0.0.132:22-10.0.0.1:59174.service: Deactivated successfully. Feb 9 10:08:28.080940 systemd[1]: session-2.scope: Deactivated successfully. Feb 9 10:08:28.081433 systemd-logind[1130]: Session 2 logged out. Waiting for processes to exit. Feb 9 10:08:28.082692 systemd[1]: Started sshd@2-10.0.0.132:22-10.0.0.1:59182.service. Feb 9 10:08:28.083359 systemd-logind[1130]: Removed session 2. Feb 9 10:08:28.122612 sshd[1220]: Accepted publickey for core from 10.0.0.1 port 59182 ssh2: RSA SHA256:g0U6KM199woo3jVvTXJmbHJGWRxGxX9UCqO141QChXg Feb 9 10:08:28.123932 sshd[1220]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 10:08:28.127401 systemd-logind[1130]: New session 3 of user core. Feb 9 10:08:28.127793 systemd[1]: Started session-3.scope. Feb 9 10:08:28.175408 sshd[1220]: pam_unix(sshd:session): session closed for user core Feb 9 10:08:28.178869 systemd[1]: Started sshd@3-10.0.0.132:22-10.0.0.1:59188.service. Feb 9 10:08:28.179306 systemd[1]: sshd@2-10.0.0.132:22-10.0.0.1:59182.service: Deactivated successfully. Feb 9 10:08:28.179961 systemd[1]: session-3.scope: Deactivated successfully. Feb 9 10:08:28.180435 systemd-logind[1130]: Session 3 logged out. Waiting for processes to exit. Feb 9 10:08:28.181224 systemd-logind[1130]: Removed session 3. Feb 9 10:08:28.219060 sshd[1225]: Accepted publickey for core from 10.0.0.1 port 59188 ssh2: RSA SHA256:g0U6KM199woo3jVvTXJmbHJGWRxGxX9UCqO141QChXg Feb 9 10:08:28.220552 sshd[1225]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 10:08:28.224513 systemd[1]: Started session-4.scope. Feb 9 10:08:28.225005 systemd-logind[1130]: New session 4 of user core. Feb 9 10:08:28.277939 sshd[1225]: pam_unix(sshd:session): session closed for user core Feb 9 10:08:28.280339 systemd[1]: sshd@3-10.0.0.132:22-10.0.0.1:59188.service: Deactivated successfully. Feb 9 10:08:28.280882 systemd[1]: session-4.scope: Deactivated successfully. Feb 9 10:08:28.281350 systemd-logind[1130]: Session 4 logged out. 
Waiting for processes to exit. Feb 9 10:08:28.282307 systemd[1]: Started sshd@4-10.0.0.132:22-10.0.0.1:59196.service. Feb 9 10:08:28.282941 systemd-logind[1130]: Removed session 4. Feb 9 10:08:28.322206 sshd[1232]: Accepted publickey for core from 10.0.0.1 port 59196 ssh2: RSA SHA256:g0U6KM199woo3jVvTXJmbHJGWRxGxX9UCqO141QChXg Feb 9 10:08:28.323222 sshd[1232]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 10:08:28.326259 systemd-logind[1130]: New session 5 of user core. Feb 9 10:08:28.327052 systemd[1]: Started session-5.scope. Feb 9 10:08:28.386647 sudo[1236]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Feb 9 10:08:28.387164 sudo[1236]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Feb 9 10:08:28.944949 systemd[1]: Starting systemd-networkd-wait-online.service... Feb 9 10:08:29.141280 systemd[1]: Finished systemd-networkd-wait-online.service. Feb 9 10:08:29.141591 systemd[1]: Reached target network-online.target. Feb 9 10:08:29.142925 systemd[1]: Starting docker.service... Feb 9 10:08:29.226702 env[1254]: time="2024-02-09T10:08:29.226586220Z" level=info msg="Starting up" Feb 9 10:08:29.228477 env[1254]: time="2024-02-09T10:08:29.228448757Z" level=info msg="parsed scheme: \"unix\"" module=grpc Feb 9 10:08:29.228477 env[1254]: time="2024-02-09T10:08:29.228468886Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Feb 9 10:08:29.228619 env[1254]: time="2024-02-09T10:08:29.228491724Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Feb 9 10:08:29.228619 env[1254]: time="2024-02-09T10:08:29.228501298Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Feb 9 10:08:29.230246 env[1254]: time="2024-02-09T10:08:29.230225398Z" level=info msg="parsed scheme: \"unix\"" module=grpc Feb 9 10:08:29.230246 env[1254]: time="2024-02-09T10:08:29.230243174Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Feb 9 10:08:29.230347 env[1254]: time="2024-02-09T10:08:29.230255926Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Feb 9 10:08:29.230347 env[1254]: time="2024-02-09T10:08:29.230264441Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Feb 9 10:08:29.233764 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport4104735241-merged.mount: Deactivated successfully. Feb 9 10:08:29.464826 env[1254]: time="2024-02-09T10:08:29.464772328Z" level=info msg="Loading containers: start." Feb 9 10:08:29.557829 kernel: Initializing XFRM netlink socket Feb 9 10:08:29.581634 env[1254]: time="2024-02-09T10:08:29.581588046Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address" Feb 9 10:08:29.629047 systemd-networkd[1052]: docker0: Link UP Feb 9 10:08:29.636864 env[1254]: time="2024-02-09T10:08:29.636825010Z" level=info msg="Loading containers: done." Feb 9 10:08:29.655605 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2347619610-merged.mount: Deactivated successfully. 
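dockerd wires its libcontainerd clients over unix sockets above and finally logs "API listen on /run/docker.sock". A rough probe of that API socket using the documented /_ping endpoint; this assumes the socket path from the log and enough privilege to open it, and speaks minimal raw HTTP rather than using a Docker client library:

#!/usr/bin/env python3
"""Ping the Docker Engine API over its unix socket (sketch)."""
import socket

SOCK_PATH = "/run/docker.sock"  # "API listen on /run/docker.sock" above

def docker_ping(path):
    s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    s.settimeout(2.0)
    s.connect(path)
    # Minimal HTTP/1.0 request; the Engine API answers /_ping with "OK".
    s.sendall(b"GET /_ping HTTP/1.0\r\nHost: docker\r\n\r\n")
    reply = s.recv(4096).decode(errors="replace")
    s.close()
    return reply.splitlines()[0]

if __name__ == "__main__":
    print(docker_ping(SOCK_PATH))  # e.g. "HTTP/1.0 200 OK"
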
Feb 9 10:08:29.659695 env[1254]: time="2024-02-09T10:08:29.659644397Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Feb 9 10:08:29.659857 env[1254]: time="2024-02-09T10:08:29.659829921Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 Feb 9 10:08:29.659958 env[1254]: time="2024-02-09T10:08:29.659932571Z" level=info msg="Daemon has completed initialization" Feb 9 10:08:29.673962 systemd[1]: Started docker.service. Feb 9 10:08:29.680469 env[1254]: time="2024-02-09T10:08:29.680362673Z" level=info msg="API listen on /run/docker.sock" Feb 9 10:08:29.696012 systemd[1]: Reloading. Feb 9 10:08:29.740912 /usr/lib/systemd/system-generators/torcx-generator[1401]: time="2024-02-09T10:08:29Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 9 10:08:29.740939 /usr/lib/systemd/system-generators/torcx-generator[1401]: time="2024-02-09T10:08:29Z" level=info msg="torcx already run" Feb 9 10:08:29.791170 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 9 10:08:29.791187 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 9 10:08:29.806232 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 9 10:08:29.865572 systemd[1]: Started kubelet.service. Feb 9 10:08:29.985451 kubelet[1437]: E0209 10:08:29.985379 1437 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 9 10:08:29.987348 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 9 10:08:29.987477 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 9 10:08:30.236881 env[1141]: time="2024-02-09T10:08:30.236766834Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.28.6\"" Feb 9 10:08:30.814202 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount329098390.mount: Deactivated successfully. 
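The kubelet exits above because /var/lib/kubelet/config.yaml does not exist yet (that file is normally written during cluster bootstrap, e.g. by kubeadm), and systemd records the unit failing with status=1. The error line uses the klog format (Lmmdd hh:mm:ss.uuuuuu PID file:line] msg); a sketch splitting such lines into fields, with a shortened sample message:

#!/usr/bin/env python3
"""Split klog-formatted lines (as kubelet logs above) into fields (sketch)."""
import re

KLOG_RE = re.compile(
    r"^(?P<level>[IWEF])(?P<month>\d{2})(?P<day>\d{2}) "
    r"(?P<time>\d{2}:\d{2}:\d{2}\.\d{6})\s+(?P<pid>\d+) "
    r"(?P<file>[^:]+):(?P<line>\d+)\] (?P<msg>.*)$"
)

SAMPLE = ('E0209 10:08:29.985379 1437 run.go:74] "command failed" '
          'err="failed to load kubelet config file ..."')

match = KLOG_RE.match(SAMPLE)
if match:
    fields = match.groupdict()
    print(fields["level"], fields["file"], fields["line"], "->", fields["msg"][:40])
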
Feb 9 10:08:33.678223 env[1141]: time="2024-02-09T10:08:33.678174923Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.28.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 10:08:33.679672 env[1141]: time="2024-02-09T10:08:33.679639296Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:68142d88471bf00b1317307442bd31edbbc7532061d623e85659df2d417308fb,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 10:08:33.681404 env[1141]: time="2024-02-09T10:08:33.681357093Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.28.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 10:08:33.682892 env[1141]: time="2024-02-09T10:08:33.682866716Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:98a686df810b9f1de8e3b2ae869e79c51a36e7434d33c53f011852618aec0a68,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 10:08:33.683623 env[1141]: time="2024-02-09T10:08:33.683600406Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.28.6\" returns image reference \"sha256:68142d88471bf00b1317307442bd31edbbc7532061d623e85659df2d417308fb\"" Feb 9 10:08:33.693589 env[1141]: time="2024-02-09T10:08:33.693561800Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.28.6\"" Feb 9 10:08:35.400890 env[1141]: time="2024-02-09T10:08:35.400736350Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.28.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 10:08:35.402621 env[1141]: time="2024-02-09T10:08:35.402591225Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:8dbd4fd1241644100b94eb40a9d284c5cf08fa7f2d15cafdf1ca8cec8443b31f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 10:08:35.404207 env[1141]: time="2024-02-09T10:08:35.404176182Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.28.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 10:08:35.405919 env[1141]: time="2024-02-09T10:08:35.405881503Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:80bdcd72cfe26028bb2fed75732fc2f511c35fa8d1edc03deae11f3490713c9e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 10:08:35.406780 env[1141]: time="2024-02-09T10:08:35.406741461Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.28.6\" returns image reference \"sha256:8dbd4fd1241644100b94eb40a9d284c5cf08fa7f2d15cafdf1ca8cec8443b31f\"" Feb 9 10:08:35.416073 env[1141]: time="2024-02-09T10:08:35.416047956Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.28.6\"" Feb 9 10:08:38.005530 env[1141]: time="2024-02-09T10:08:38.005471750Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.28.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 10:08:38.007140 env[1141]: time="2024-02-09T10:08:38.007109573Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:541cddf10a6c9bb71f141eeefea4203714984b67ec3582fb4538058af9e43663,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 10:08:38.009221 env[1141]: 
time="2024-02-09T10:08:38.009182334Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.28.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 10:08:38.010705 env[1141]: time="2024-02-09T10:08:38.010678968Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:a89db556c34d652d403d909882dbd97336f2e935b1c726b2e2b2c0400186ac39,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 10:08:38.012588 env[1141]: time="2024-02-09T10:08:38.012541300Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.28.6\" returns image reference \"sha256:541cddf10a6c9bb71f141eeefea4203714984b67ec3582fb4538058af9e43663\"" Feb 9 10:08:38.022096 env[1141]: time="2024-02-09T10:08:38.022033241Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.28.6\"" Feb 9 10:08:38.975942 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3381673857.mount: Deactivated successfully. Feb 9 10:08:39.367910 env[1141]: time="2024-02-09T10:08:39.367781927Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.28.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 10:08:39.368996 env[1141]: time="2024-02-09T10:08:39.368963506Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:2d8b4f784b5f439fa536676861ad1144130a981e5ac011d08829ed921477ec74,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 10:08:39.370245 env[1141]: time="2024-02-09T10:08:39.370209243Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.28.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 10:08:39.373247 env[1141]: time="2024-02-09T10:08:39.373217957Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:3898a1671ae42be1cd3c2e777549bc7b5b306b8da3a224b747365f6679fb902a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 10:08:39.373649 env[1141]: time="2024-02-09T10:08:39.373622045Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.28.6\" returns image reference \"sha256:2d8b4f784b5f439fa536676861ad1144130a981e5ac011d08829ed921477ec74\"" Feb 9 10:08:39.382613 env[1141]: time="2024-02-09T10:08:39.382574396Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Feb 9 10:08:39.827681 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4243469304.mount: Deactivated successfully. 
Feb 9 10:08:39.831538 env[1141]: time="2024-02-09T10:08:39.831490959Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 10:08:39.833052 env[1141]: time="2024-02-09T10:08:39.833017484Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 10:08:39.835203 env[1141]: time="2024-02-09T10:08:39.835158400Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 10:08:39.837134 env[1141]: time="2024-02-09T10:08:39.837099043Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 10:08:39.837834 env[1141]: time="2024-02-09T10:08:39.837777632Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\"" Feb 9 10:08:39.848312 env[1141]: time="2024-02-09T10:08:39.848277442Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.9-0\"" Feb 9 10:08:40.238297 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Feb 9 10:08:40.238489 systemd[1]: Stopped kubelet.service. Feb 9 10:08:40.240333 systemd[1]: Started kubelet.service. Feb 9 10:08:40.280514 kubelet[1491]: E0209 10:08:40.280465 1491 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 9 10:08:40.283586 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 9 10:08:40.283716 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 9 10:08:40.412149 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4070876252.mount: Deactivated successfully. 
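Each successful pull above ends with a 'returns image reference "sha256:..."' line mapping a tag to the image ID containerd stored. A sketch collecting those tag-to-reference pairs from a journal dump on stdin (the regex matches the backslash-escaped quotes exactly as they appear in these log lines):

#!/usr/bin/env python3
"""Collect image tag -> reference pairs from containerd PullImage log lines."""
import re
import sys

# Matches: PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:...\"
PULL_RE = re.compile(
    r'PullImage \\"(?P<image>[^"\\]+)\\" returns image reference \\"(?P<ref>sha256:[0-9a-f]+)\\"'
)

def pulled_images(lines):
    return {m.group("image"): m.group("ref")
            for line in lines for m in PULL_RE.finditer(line)}

if __name__ == "__main__":
    for image, ref in sorted(pulled_images(sys.stdin).items()):
        print(f"{image} -> {ref}")
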
Feb 9 10:08:43.016631 env[1141]: time="2024-02-09T10:08:43.016582998Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.9-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 10:08:43.018643 env[1141]: time="2024-02-09T10:08:43.018608641Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 10:08:43.021906 env[1141]: time="2024-02-09T10:08:43.021869251Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.9-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 10:08:43.023883 env[1141]: time="2024-02-09T10:08:43.023853496Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 10:08:43.024896 env[1141]: time="2024-02-09T10:08:43.024865041Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.9-0\" returns image reference \"sha256:9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace\"" Feb 9 10:08:43.033870 env[1141]: time="2024-02-09T10:08:43.033838570Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.10.1\"" Feb 9 10:08:43.627712 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4175328224.mount: Deactivated successfully. Feb 9 10:08:44.163285 env[1141]: time="2024-02-09T10:08:44.163242939Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.10.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 10:08:44.165355 env[1141]: time="2024-02-09T10:08:44.165317962Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 10:08:44.166893 env[1141]: time="2024-02-09T10:08:44.166860274Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.10.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 10:08:44.168525 env[1141]: time="2024-02-09T10:08:44.168494909Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 10:08:44.168997 env[1141]: time="2024-02-09T10:08:44.168968690Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.10.1\" returns image reference \"sha256:97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108\"" Feb 9 10:08:48.475792 systemd[1]: Stopped kubelet.service. Feb 9 10:08:48.489782 systemd[1]: Reloading. 
Feb 9 10:08:48.534534 /usr/lib/systemd/system-generators/torcx-generator[1603]: time="2024-02-09T10:08:48Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 9 10:08:48.534871 /usr/lib/systemd/system-generators/torcx-generator[1603]: time="2024-02-09T10:08:48Z" level=info msg="torcx already run" Feb 9 10:08:48.589722 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 9 10:08:48.589740 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 9 10:08:48.604822 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 9 10:08:48.669468 systemd[1]: Started kubelet.service. Feb 9 10:08:48.707445 kubelet[1641]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 9 10:08:48.707445 kubelet[1641]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 9 10:08:48.707445 kubelet[1641]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 9 10:08:48.707771 kubelet[1641]: I0209 10:08:48.707488 1641 server.go:203] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 9 10:08:49.200640 kubelet[1641]: I0209 10:08:49.200608 1641 server.go:467] "Kubelet version" kubeletVersion="v1.28.1" Feb 9 10:08:49.200782 kubelet[1641]: I0209 10:08:49.200771 1641 server.go:469] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 9 10:08:49.201102 kubelet[1641]: I0209 10:08:49.201083 1641 server.go:895] "Client rotation is on, will bootstrap in background" Feb 9 10:08:49.206207 kubelet[1641]: E0209 10:08:49.206177 1641 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.132:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.132:6443: connect: connection refused Feb 9 10:08:49.206288 kubelet[1641]: I0209 10:08:49.206232 1641 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 9 10:08:49.210276 kubelet[1641]: W0209 10:08:49.210248 1641 machine.go:65] Cannot read vendor id correctly, set empty. Feb 9 10:08:49.211309 kubelet[1641]: I0209 10:08:49.211279 1641 server.go:725] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 9 10:08:49.211509 kubelet[1641]: I0209 10:08:49.211488 1641 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 9 10:08:49.211883 kubelet[1641]: I0209 10:08:49.211858 1641 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Feb 9 10:08:49.212025 kubelet[1641]: I0209 10:08:49.212011 1641 topology_manager.go:138] "Creating topology manager with none policy" Feb 9 10:08:49.212086 kubelet[1641]: I0209 10:08:49.212077 1641 container_manager_linux.go:301] "Creating device plugin manager" Feb 9 10:08:49.212361 kubelet[1641]: I0209 10:08:49.212343 1641 state_mem.go:36] "Initialized new in-memory state store" Feb 9 10:08:49.212617 kubelet[1641]: I0209 10:08:49.212599 1641 kubelet.go:393] "Attempting to sync node with API server" Feb 9 10:08:49.212617 kubelet[1641]: I0209 10:08:49.212617 1641 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 9 10:08:49.212682 kubelet[1641]: I0209 10:08:49.212634 1641 kubelet.go:309] "Adding apiserver pod source" Feb 9 10:08:49.213094 kubelet[1641]: W0209 10:08:49.213051 1641 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://10.0.0.132:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.132:6443: connect: connection refused Feb 9 10:08:49.213205 kubelet[1641]: E0209 10:08:49.213194 1641 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.132:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.132:6443: connect: connection refused Feb 9 10:08:49.214887 kubelet[1641]: I0209 10:08:49.214855 1641 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 9 10:08:49.215453 kubelet[1641]: W0209 10:08:49.215417 1641 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://10.0.0.132:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.132:6443: connect: connection refused Feb 9 
10:08:49.215559 kubelet[1641]: E0209 10:08:49.215547 1641 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.132:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.132:6443: connect: connection refused Feb 9 10:08:49.215896 kubelet[1641]: I0209 10:08:49.215876 1641 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Feb 9 10:08:49.216279 kubelet[1641]: W0209 10:08:49.216255 1641 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Feb 9 10:08:49.216966 kubelet[1641]: I0209 10:08:49.216952 1641 server.go:1232] "Started kubelet" Feb 9 10:08:49.217263 kubelet[1641]: I0209 10:08:49.217245 1641 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Feb 9 10:08:49.217451 kubelet[1641]: I0209 10:08:49.217432 1641 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10 Feb 9 10:08:49.217668 kubelet[1641]: I0209 10:08:49.217641 1641 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 9 10:08:49.217993 kubelet[1641]: E0209 10:08:49.217550 1641 event.go:289] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"localhost.17b229f2436dcdb7", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"localhost", UID:"localhost", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"localhost"}, FirstTimestamp:time.Date(2024, time.February, 9, 10, 8, 49, 216933303, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 10, 8, 49, 216933303, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"localhost"}': 'Post "https://10.0.0.132:6443/api/v1/namespaces/default/events": dial tcp 10.0.0.132:6443: connect: connection refused'(may retry after sleeping) Feb 9 10:08:49.218241 kubelet[1641]: E0209 10:08:49.218214 1641 cri_stats_provider.go:448] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Feb 9 10:08:49.218241 kubelet[1641]: E0209 10:08:49.218238 1641 kubelet.go:1431] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 9 10:08:49.218427 kubelet[1641]: I0209 10:08:49.218411 1641 server.go:462] "Adding debug handlers to kubelet server" Feb 9 10:08:49.219845 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). 
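[annotation] The three flag-deprecation warnings the kubelet printed at startup above all point to the same remedy: carry the settings in the file passed via --config. A minimal sketch of the config-file equivalents, assuming the containerd socket sits at its default path (the log never shows it) and reusing the Flexvolume directory from the probe message just above:

    # kubelet config file (hypothetical name, handed to the kubelet with --config)
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    # replaces --container-runtime-endpoint; socket path assumed (containerd default)
    containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
    # replaces --volume-plugin-dir; value taken from the Flexvolume probe above
    volumePluginDir: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/
    staticPodPath: /etc/kubernetes/manifests

--pod-infra-container-image has no config-file counterpart; per its warning the sandbox image belongs in the runtime's own config (sandbox_image in containerd's CRI plugin section), which is why the pause:3.6 image events show up further down in this log.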
Feb 9 10:08:49.220141 kubelet[1641]: I0209 10:08:49.220078 1641 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 9 10:08:49.220316 kubelet[1641]: I0209 10:08:49.220302 1641 volume_manager.go:291] "Starting Kubelet Volume Manager" Feb 9 10:08:49.220445 kubelet[1641]: I0209 10:08:49.220432 1641 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Feb 9 10:08:49.220585 kubelet[1641]: I0209 10:08:49.220575 1641 reconciler_new.go:29] "Reconciler: start to sync state" Feb 9 10:08:49.220932 kubelet[1641]: W0209 10:08:49.220898 1641 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://10.0.0.132:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.132:6443: connect: connection refused Feb 9 10:08:49.221034 kubelet[1641]: E0209 10:08:49.221021 1641 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.132:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.132:6443: connect: connection refused Feb 9 10:08:49.221729 kubelet[1641]: E0209 10:08:49.221704 1641 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.132:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.132:6443: connect: connection refused" interval="200ms" Feb 9 10:08:49.234535 kubelet[1641]: I0209 10:08:49.234517 1641 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 9 10:08:49.235460 kubelet[1641]: I0209 10:08:49.235444 1641 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Feb 9 10:08:49.235556 kubelet[1641]: I0209 10:08:49.235545 1641 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 9 10:08:49.235617 kubelet[1641]: I0209 10:08:49.235607 1641 kubelet.go:2303] "Starting kubelet main sync loop" Feb 9 10:08:49.235708 kubelet[1641]: E0209 10:08:49.235698 1641 kubelet.go:2327] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 9 10:08:49.237177 kubelet[1641]: W0209 10:08:49.237133 1641 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://10.0.0.132:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.132:6443: connect: connection refused Feb 9 10:08:49.237335 kubelet[1641]: E0209 10:08:49.237307 1641 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.132:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.132:6443: connect: connection refused Feb 9 10:08:49.240631 kubelet[1641]: I0209 10:08:49.240599 1641 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 9 10:08:49.240631 kubelet[1641]: I0209 10:08:49.240615 1641 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 9 10:08:49.240631 kubelet[1641]: I0209 10:08:49.240630 1641 state_mem.go:36] "Initialized new in-memory state store" Feb 9 10:08:49.242444 kubelet[1641]: I0209 10:08:49.242413 1641 policy_none.go:49] "None policy: Start" Feb 9 10:08:49.242994 kubelet[1641]: I0209 10:08:49.242973 1641 memory_manager.go:169] "Starting memorymanager" policy="None" Feb 9 10:08:49.243033 kubelet[1641]: I0209 10:08:49.242997 1641 
state_mem.go:35] "Initializing new in-memory state store" Feb 9 10:08:49.247950 systemd[1]: Created slice kubepods.slice. Feb 9 10:08:49.251438 systemd[1]: Created slice kubepods-burstable.slice. Feb 9 10:08:49.253609 systemd[1]: Created slice kubepods-besteffort.slice. Feb 9 10:08:49.260389 kubelet[1641]: I0209 10:08:49.260365 1641 manager.go:471] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 9 10:08:49.260600 kubelet[1641]: I0209 10:08:49.260576 1641 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 9 10:08:49.261891 kubelet[1641]: E0209 10:08:49.261864 1641 eviction_manager.go:258] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Feb 9 10:08:49.322405 kubelet[1641]: I0209 10:08:49.322375 1641 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Feb 9 10:08:49.322764 kubelet[1641]: E0209 10:08:49.322728 1641 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.0.0.132:6443/api/v1/nodes\": dial tcp 10.0.0.132:6443: connect: connection refused" node="localhost" Feb 9 10:08:49.335928 kubelet[1641]: I0209 10:08:49.335883 1641 topology_manager.go:215] "Topology Admit Handler" podUID="463ea25710b7eab370f98da677afb9fd" podNamespace="kube-system" podName="kube-apiserver-localhost" Feb 9 10:08:49.336875 kubelet[1641]: I0209 10:08:49.336848 1641 topology_manager.go:215] "Topology Admit Handler" podUID="212dcc5e2f08bec92c239ac5786b7e2b" podNamespace="kube-system" podName="kube-controller-manager-localhost" Feb 9 10:08:49.337639 kubelet[1641]: I0209 10:08:49.337598 1641 topology_manager.go:215] "Topology Admit Handler" podUID="d0325d16aab19669b5fea4b6623890e6" podNamespace="kube-system" podName="kube-scheduler-localhost" Feb 9 10:08:49.343450 systemd[1]: Created slice kubepods-burstable-pod463ea25710b7eab370f98da677afb9fd.slice. Feb 9 10:08:49.356724 systemd[1]: Created slice kubepods-burstable-pod212dcc5e2f08bec92c239ac5786b7e2b.slice. Feb 9 10:08:49.360135 systemd[1]: Created slice kubepods-burstable-podd0325d16aab19669b5fea4b6623890e6.slice. 
Feb 9 10:08:49.421573 kubelet[1641]: I0209 10:08:49.421545 1641 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/212dcc5e2f08bec92c239ac5786b7e2b-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"212dcc5e2f08bec92c239ac5786b7e2b\") " pod="kube-system/kube-controller-manager-localhost" Feb 9 10:08:49.421703 kubelet[1641]: I0209 10:08:49.421690 1641 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/212dcc5e2f08bec92c239ac5786b7e2b-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"212dcc5e2f08bec92c239ac5786b7e2b\") " pod="kube-system/kube-controller-manager-localhost" Feb 9 10:08:49.421798 kubelet[1641]: I0209 10:08:49.421787 1641 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/463ea25710b7eab370f98da677afb9fd-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"463ea25710b7eab370f98da677afb9fd\") " pod="kube-system/kube-apiserver-localhost" Feb 9 10:08:49.421915 kubelet[1641]: I0209 10:08:49.421903 1641 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/463ea25710b7eab370f98da677afb9fd-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"463ea25710b7eab370f98da677afb9fd\") " pod="kube-system/kube-apiserver-localhost" Feb 9 10:08:49.422009 kubelet[1641]: I0209 10:08:49.421997 1641 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/463ea25710b7eab370f98da677afb9fd-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"463ea25710b7eab370f98da677afb9fd\") " pod="kube-system/kube-apiserver-localhost" Feb 9 10:08:49.422083 kubelet[1641]: E0209 10:08:49.422052 1641 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.132:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.132:6443: connect: connection refused" interval="400ms" Feb 9 10:08:49.422135 kubelet[1641]: I0209 10:08:49.422123 1641 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d0325d16aab19669b5fea4b6623890e6-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"d0325d16aab19669b5fea4b6623890e6\") " pod="kube-system/kube-scheduler-localhost" Feb 9 10:08:49.422227 kubelet[1641]: I0209 10:08:49.422216 1641 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/212dcc5e2f08bec92c239ac5786b7e2b-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"212dcc5e2f08bec92c239ac5786b7e2b\") " pod="kube-system/kube-controller-manager-localhost" Feb 9 10:08:49.422298 kubelet[1641]: I0209 10:08:49.422288 1641 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/212dcc5e2f08bec92c239ac5786b7e2b-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"212dcc5e2f08bec92c239ac5786b7e2b\") " pod="kube-system/kube-controller-manager-localhost" Feb 9 10:08:49.422383 kubelet[1641]: I0209 10:08:49.422372 1641 
reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/212dcc5e2f08bec92c239ac5786b7e2b-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"212dcc5e2f08bec92c239ac5786b7e2b\") " pod="kube-system/kube-controller-manager-localhost" Feb 9 10:08:49.523847 kubelet[1641]: I0209 10:08:49.523801 1641 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Feb 9 10:08:49.524136 kubelet[1641]: E0209 10:08:49.524117 1641 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.0.0.132:6443/api/v1/nodes\": dial tcp 10.0.0.132:6443: connect: connection refused" node="localhost" Feb 9 10:08:49.655709 kubelet[1641]: E0209 10:08:49.655686 1641 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 10:08:49.656468 env[1141]: time="2024-02-09T10:08:49.656418363Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:463ea25710b7eab370f98da677afb9fd,Namespace:kube-system,Attempt:0,}" Feb 9 10:08:49.658666 kubelet[1641]: E0209 10:08:49.658647 1641 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 10:08:49.659140 env[1141]: time="2024-02-09T10:08:49.659075741Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:212dcc5e2f08bec92c239ac5786b7e2b,Namespace:kube-system,Attempt:0,}" Feb 9 10:08:49.661539 kubelet[1641]: E0209 10:08:49.661519 1641 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 10:08:49.662163 env[1141]: time="2024-02-09T10:08:49.661960979Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:d0325d16aab19669b5fea4b6623890e6,Namespace:kube-system,Attempt:0,}" Feb 9 10:08:49.822631 kubelet[1641]: E0209 10:08:49.822545 1641 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.132:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.132:6443: connect: connection refused" interval="800ms" Feb 9 10:08:49.926034 kubelet[1641]: I0209 10:08:49.926003 1641 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Feb 9 10:08:49.926348 kubelet[1641]: E0209 10:08:49.926323 1641 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.0.0.132:6443/api/v1/nodes\": dial tcp 10.0.0.132:6443: connect: connection refused" node="localhost" Feb 9 10:08:50.054322 kubelet[1641]: W0209 10:08:50.054236 1641 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://10.0.0.132:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.132:6443: connect: connection refused Feb 9 10:08:50.054322 kubelet[1641]: E0209 10:08:50.054313 1641 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.132:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.132:6443: connect: connection refused Feb 9 
10:08:50.122961 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3773011090.mount: Deactivated successfully. Feb 9 10:08:50.127421 env[1141]: time="2024-02-09T10:08:50.127381529Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 10:08:50.128771 env[1141]: time="2024-02-09T10:08:50.128737646Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 10:08:50.130974 env[1141]: time="2024-02-09T10:08:50.130942783Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 10:08:50.132256 env[1141]: time="2024-02-09T10:08:50.132229620Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 10:08:50.133633 env[1141]: time="2024-02-09T10:08:50.133606672Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 10:08:50.136078 env[1141]: time="2024-02-09T10:08:50.136049057Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 10:08:50.139126 env[1141]: time="2024-02-09T10:08:50.139089911Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 10:08:50.141415 env[1141]: time="2024-02-09T10:08:50.141388541Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 10:08:50.143299 env[1141]: time="2024-02-09T10:08:50.143273887Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 10:08:50.144005 env[1141]: time="2024-02-09T10:08:50.143980832Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 10:08:50.144745 env[1141]: time="2024-02-09T10:08:50.144712908Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 10:08:50.145509 env[1141]: time="2024-02-09T10:08:50.145482142Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 10:08:50.176573 env[1141]: time="2024-02-09T10:08:50.176404253Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 10:08:50.176573 env[1141]: time="2024-02-09T10:08:50.176453556Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 10:08:50.176573 env[1141]: time="2024-02-09T10:08:50.176464863Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 10:08:50.176762 env[1141]: time="2024-02-09T10:08:50.176667869Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/5fbc2a5b3278818a3c5a2cda762404f5b3253abe269e8f30ee67ae5366eb29df pid=1700 runtime=io.containerd.runc.v2 Feb 9 10:08:50.177287 env[1141]: time="2024-02-09T10:08:50.177205409Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 10:08:50.177287 env[1141]: time="2024-02-09T10:08:50.177253594Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 10:08:50.177287 env[1141]: time="2024-02-09T10:08:50.177263902Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 10:08:50.177468 env[1141]: time="2024-02-09T10:08:50.177378809Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/36745d4efbc1028c5dd6c878c8ff949292ab1bed2386b412ad6a089c41c59d30 pid=1699 runtime=io.containerd.runc.v2 Feb 9 10:08:50.177886 env[1141]: time="2024-02-09T10:08:50.177708749Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 10:08:50.177886 env[1141]: time="2024-02-09T10:08:50.177741831Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 10:08:50.177886 env[1141]: time="2024-02-09T10:08:50.177751659Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 10:08:50.178621 env[1141]: time="2024-02-09T10:08:50.177942959Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/cd3f48597bbcbca2a51999d8f87d7a12764009a357ac61351a02f6ea13b36529 pid=1701 runtime=io.containerd.runc.v2 Feb 9 10:08:50.189371 systemd[1]: Started cri-containerd-36745d4efbc1028c5dd6c878c8ff949292ab1bed2386b412ad6a089c41c59d30.scope. Feb 9 10:08:50.192172 systemd[1]: Started cri-containerd-5fbc2a5b3278818a3c5a2cda762404f5b3253abe269e8f30ee67ae5366eb29df.scope. 
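[annotation] Each "starting signal loop" entry above corresponds to one containerd-shim-runc-v2 process adopting a freshly created pod sandbox; the long hex component of its path is the sandbox ID that the RunPodSandbox responses below hand back to the kubelet. Assuming crictl is installed on the node (nothing in this log shows it), the same sandboxes can be listed directly:

    # List pod sandboxes over CRI; endpoint path assumed (containerd default)
    crictl --runtime-endpoint unix:///run/containerd/containerd.sock pods
    # Or query containerd itself, in the namespace the shim lines mention
    ctr -n k8s.io tasks ls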
Feb 9 10:08:50.200809 kubelet[1641]: W0209 10:08:50.200696 1641 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://10.0.0.132:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.132:6443: connect: connection refused Feb 9 10:08:50.200809 kubelet[1641]: E0209 10:08:50.200763 1641 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.132:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.132:6443: connect: connection refused Feb 9 10:08:50.209285 systemd[1]: Started cri-containerd-cd3f48597bbcbca2a51999d8f87d7a12764009a357ac61351a02f6ea13b36529.scope. Feb 9 10:08:50.249368 kubelet[1641]: W0209 10:08:50.249305 1641 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://10.0.0.132:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.132:6443: connect: connection refused Feb 9 10:08:50.249368 kubelet[1641]: E0209 10:08:50.249368 1641 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.132:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.132:6443: connect: connection refused Feb 9 10:08:50.262023 env[1141]: time="2024-02-09T10:08:50.261973364Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:d0325d16aab19669b5fea4b6623890e6,Namespace:kube-system,Attempt:0,} returns sandbox id \"36745d4efbc1028c5dd6c878c8ff949292ab1bed2386b412ad6a089c41c59d30\"" Feb 9 10:08:50.263021 kubelet[1641]: E0209 10:08:50.262870 1641 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 10:08:50.263743 env[1141]: time="2024-02-09T10:08:50.263711919Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:463ea25710b7eab370f98da677afb9fd,Namespace:kube-system,Attempt:0,} returns sandbox id \"5fbc2a5b3278818a3c5a2cda762404f5b3253abe269e8f30ee67ae5366eb29df\"" Feb 9 10:08:50.264362 kubelet[1641]: E0209 10:08:50.264334 1641 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 10:08:50.267239 env[1141]: time="2024-02-09T10:08:50.267204053Z" level=info msg="CreateContainer within sandbox \"5fbc2a5b3278818a3c5a2cda762404f5b3253abe269e8f30ee67ae5366eb29df\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Feb 9 10:08:50.267400 env[1141]: time="2024-02-09T10:08:50.267377174Z" level=info msg="CreateContainer within sandbox \"36745d4efbc1028c5dd6c878c8ff949292ab1bed2386b412ad6a089c41c59d30\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Feb 9 10:08:50.277151 env[1141]: time="2024-02-09T10:08:50.277120062Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:212dcc5e2f08bec92c239ac5786b7e2b,Namespace:kube-system,Attempt:0,} returns sandbox id \"cd3f48597bbcbca2a51999d8f87d7a12764009a357ac61351a02f6ea13b36529\"" Feb 9 10:08:50.277870 env[1141]: time="2024-02-09T10:08:50.277832520Z" level=info msg="CreateContainer within sandbox 
\"5fbc2a5b3278818a3c5a2cda762404f5b3253abe269e8f30ee67ae5366eb29df\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"3088d2a40c585d90fee53176c4c97512dd4917bad03c4dd4a83b2fbb48e855d3\"" Feb 9 10:08:50.278305 kubelet[1641]: E0209 10:08:50.278160 1641 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 10:08:50.278534 env[1141]: time="2024-02-09T10:08:50.278496994Z" level=info msg="StartContainer for \"3088d2a40c585d90fee53176c4c97512dd4917bad03c4dd4a83b2fbb48e855d3\"" Feb 9 10:08:50.280580 env[1141]: time="2024-02-09T10:08:50.280545393Z" level=info msg="CreateContainer within sandbox \"cd3f48597bbcbca2a51999d8f87d7a12764009a357ac61351a02f6ea13b36529\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Feb 9 10:08:50.284665 env[1141]: time="2024-02-09T10:08:50.284628965Z" level=info msg="CreateContainer within sandbox \"36745d4efbc1028c5dd6c878c8ff949292ab1bed2386b412ad6a089c41c59d30\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"5c6600e71f71dd6983ad3e6f3fde1ebc0f669d742452b4c4fecefd716df26ba9\"" Feb 9 10:08:50.285217 env[1141]: time="2024-02-09T10:08:50.285185284Z" level=info msg="StartContainer for \"5c6600e71f71dd6983ad3e6f3fde1ebc0f669d742452b4c4fecefd716df26ba9\"" Feb 9 10:08:50.292673 env[1141]: time="2024-02-09T10:08:50.292625826Z" level=info msg="CreateContainer within sandbox \"cd3f48597bbcbca2a51999d8f87d7a12764009a357ac61351a02f6ea13b36529\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"3a825a0f262be1acc7964e1054fc61f04bb602e8c60baf97ccc8b5e552a17083\"" Feb 9 10:08:50.293216 env[1141]: time="2024-02-09T10:08:50.293190455Z" level=info msg="StartContainer for \"3a825a0f262be1acc7964e1054fc61f04bb602e8c60baf97ccc8b5e552a17083\"" Feb 9 10:08:50.298395 systemd[1]: Started cri-containerd-3088d2a40c585d90fee53176c4c97512dd4917bad03c4dd4a83b2fbb48e855d3.scope. Feb 9 10:08:50.320423 systemd[1]: Started cri-containerd-3a825a0f262be1acc7964e1054fc61f04bb602e8c60baf97ccc8b5e552a17083.scope. Feb 9 10:08:50.325661 systemd[1]: Started cri-containerd-5c6600e71f71dd6983ad3e6f3fde1ebc0f669d742452b4c4fecefd716df26ba9.scope. 
Feb 9 10:08:50.421620 env[1141]: time="2024-02-09T10:08:50.421517951Z" level=info msg="StartContainer for \"3088d2a40c585d90fee53176c4c97512dd4917bad03c4dd4a83b2fbb48e855d3\" returns successfully" Feb 9 10:08:50.428435 env[1141]: time="2024-02-09T10:08:50.428374007Z" level=info msg="StartContainer for \"5c6600e71f71dd6983ad3e6f3fde1ebc0f669d742452b4c4fecefd716df26ba9\" returns successfully" Feb 9 10:08:50.457205 env[1141]: time="2024-02-09T10:08:50.457159861Z" level=info msg="StartContainer for \"3a825a0f262be1acc7964e1054fc61f04bb602e8c60baf97ccc8b5e552a17083\" returns successfully" Feb 9 10:08:50.624294 kubelet[1641]: E0209 10:08:50.624263 1641 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.132:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.132:6443: connect: connection refused" interval="1.6s" Feb 9 10:08:50.727845 kubelet[1641]: I0209 10:08:50.727746 1641 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Feb 9 10:08:51.245592 kubelet[1641]: E0209 10:08:51.245559 1641 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 10:08:51.253239 kubelet[1641]: E0209 10:08:51.253182 1641 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 10:08:51.254714 kubelet[1641]: E0209 10:08:51.254671 1641 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 10:08:52.256572 kubelet[1641]: E0209 10:08:52.256546 1641 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 10:08:52.429510 kubelet[1641]: I0209 10:08:52.429476 1641 kubelet_node_status.go:73] "Successfully registered node" node="localhost" Feb 9 10:08:52.458279 kubelet[1641]: E0209 10:08:52.458233 1641 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 9 10:08:52.508915 kubelet[1641]: E0209 10:08:52.508729 1641 controller.go:146] "Failed to ensure lease exists, will retry" err="namespaces \"kube-node-lease\" not found" interval="3.2s" Feb 9 10:08:52.558471 kubelet[1641]: E0209 10:08:52.558428 1641 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 9 10:08:52.659040 kubelet[1641]: E0209 10:08:52.659012 1641 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 9 10:08:52.759725 kubelet[1641]: E0209 10:08:52.759627 1641 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 9 10:08:52.860141 kubelet[1641]: E0209 10:08:52.860113 1641 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 9 10:08:53.219235 kubelet[1641]: I0209 10:08:53.219133 1641 apiserver.go:52] "Watching apiserver" Feb 9 10:08:53.286948 kubelet[1641]: E0209 10:08:53.286539 1641 kubelet.go:1890] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" 
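[annotation] The "no PriorityClass with name system-node-critical was found" rejection just above is a bootstrap race, not a misconfiguration: system-node-critical is one of the two built-in PriorityClasses the API server creates for itself shortly after starting, and the kubelet's first mirror-pod sync can simply beat it, which is why the later retry succeeds. Once the control plane answers, the built-ins can be verified with:

    # Both built-in classes should exist once the apiserver has settled
    kubectl get priorityclass system-node-critical system-cluster-critical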
Feb 9 10:08:53.287303 kubelet[1641]: E0209 10:08:53.286996 1641 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 10:08:53.320899 kubelet[1641]: I0209 10:08:53.320773 1641 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Feb 9 10:08:53.655424 kubelet[1641]: E0209 10:08:53.655380 1641 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 10:08:54.258479 kubelet[1641]: E0209 10:08:54.258447 1641 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 10:08:55.249978 systemd[1]: Reloading. Feb 9 10:08:55.295593 /usr/lib/systemd/system-generators/torcx-generator[1943]: time="2024-02-09T10:08:55Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 9 10:08:55.295624 /usr/lib/systemd/system-generators/torcx-generator[1943]: time="2024-02-09T10:08:55Z" level=info msg="torcx already run" Feb 9 10:08:55.350335 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 9 10:08:55.350355 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 9 10:08:55.365831 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 9 10:08:55.444361 systemd[1]: Stopping kubelet.service... Feb 9 10:08:55.464229 systemd[1]: kubelet.service: Deactivated successfully. Feb 9 10:08:55.464425 systemd[1]: Stopped kubelet.service. Feb 9 10:08:55.466161 systemd[1]: Started kubelet.service. Feb 9 10:08:55.509781 kubelet[1981]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 9 10:08:55.509781 kubelet[1981]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 9 10:08:55.509781 kubelet[1981]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
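[annotation] systemd re-emits the same legacy-directive warnings for locksmithd.service on every reload; they persist until the unit (or an overriding copy under /etc/systemd/system/) stops using the cgroup-v1 names. The replacement lines would look roughly like this, with hypothetical values (shares of 1024 correspond roughly to a weight of 100):

    # [Service] section of locksmithd.service after the migration (values hypothetical)
    [Service]
    CPUWeight=100     # replaces CPUShares=
    MemoryMax=128M    # replaces MemoryLimit=

    # docker.socket, fixing the legacy-path warning in the same pass
    [Socket]
    ListenStream=/run/docker.sock   # was /var/run/docker.sock, a legacy symlink

followed by a systemctl daemon-reload.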
Feb 9 10:08:55.509781 kubelet[1981]: I0209 10:08:55.509682 1981 server.go:203] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 9 10:08:55.515772 kubelet[1981]: I0209 10:08:55.515743 1981 server.go:467] "Kubelet version" kubeletVersion="v1.28.1" Feb 9 10:08:55.515920 kubelet[1981]: I0209 10:08:55.515907 1981 server.go:469] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 9 10:08:55.516268 kubelet[1981]: I0209 10:08:55.516249 1981 server.go:895] "Client rotation is on, will bootstrap in background" Feb 9 10:08:55.518203 kubelet[1981]: I0209 10:08:55.518175 1981 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Feb 9 10:08:55.520012 kubelet[1981]: I0209 10:08:55.519967 1981 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 9 10:08:55.523273 kubelet[1981]: W0209 10:08:55.523250 1981 machine.go:65] Cannot read vendor id correctly, set empty. Feb 9 10:08:55.524129 kubelet[1981]: I0209 10:08:55.524107 1981 server.go:725] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Feb 9 10:08:55.524414 kubelet[1981]: I0209 10:08:55.524397 1981 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 9 10:08:55.524844 kubelet[1981]: I0209 10:08:55.524795 1981 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Feb 9 10:08:55.524990 kubelet[1981]: I0209 10:08:55.524975 1981 topology_manager.go:138] "Creating topology manager with none policy" Feb 9 10:08:55.525051 kubelet[1981]: I0209 10:08:55.525042 1981 container_manager_linux.go:301] "Creating device plugin manager" Feb 9 10:08:55.525129 kubelet[1981]: I0209 10:08:55.525119 1981 state_mem.go:36] "Initialized new in-memory state store" Feb 9 10:08:55.525272 kubelet[1981]: I0209 10:08:55.525259 1981 kubelet.go:393] "Attempting to sync node with API server" Feb 9 10:08:55.525365 kubelet[1981]: I0209 10:08:55.525353 1981 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 9 
10:08:55.525433 kubelet[1981]: I0209 10:08:55.525424 1981 kubelet.go:309] "Adding apiserver pod source" Feb 9 10:08:55.525498 kubelet[1981]: I0209 10:08:55.525488 1981 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 9 10:08:55.526338 kubelet[1981]: I0209 10:08:55.526299 1981 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Feb 9 10:08:55.526915 kubelet[1981]: I0209 10:08:55.526888 1981 server.go:1232] "Started kubelet" Feb 9 10:08:55.527110 kubelet[1981]: I0209 10:08:55.527091 1981 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Feb 9 10:08:55.527383 kubelet[1981]: I0209 10:08:55.527128 1981 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10 Feb 9 10:08:55.527587 kubelet[1981]: I0209 10:08:55.527559 1981 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 9 10:08:55.527954 kubelet[1981]: I0209 10:08:55.527935 1981 server.go:462] "Adding debug handlers to kubelet server" Feb 9 10:08:55.528311 kubelet[1981]: I0209 10:08:55.528280 1981 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 9 10:08:55.528919 kubelet[1981]: I0209 10:08:55.528897 1981 volume_manager.go:291] "Starting Kubelet Volume Manager" Feb 9 10:08:55.529058 kubelet[1981]: E0209 10:08:55.529034 1981 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 9 10:08:55.529404 kubelet[1981]: E0209 10:08:55.529368 1981 cri_stats_provider.go:448] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Feb 9 10:08:55.529404 kubelet[1981]: E0209 10:08:55.529401 1981 kubelet.go:1431] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 9 10:08:55.530280 kubelet[1981]: I0209 10:08:55.530262 1981 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Feb 9 10:08:55.530462 kubelet[1981]: I0209 10:08:55.530450 1981 reconciler_new.go:29] "Reconciler: start to sync state" Feb 9 10:08:55.550390 kubelet[1981]: I0209 10:08:55.550357 1981 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 9 10:08:55.559954 kubelet[1981]: I0209 10:08:55.559934 1981 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Feb 9 10:08:55.560061 kubelet[1981]: I0209 10:08:55.560048 1981 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 9 10:08:55.560229 kubelet[1981]: I0209 10:08:55.560213 1981 kubelet.go:2303] "Starting kubelet main sync loop" Feb 9 10:08:55.560333 kubelet[1981]: E0209 10:08:55.560321 1981 kubelet.go:2327] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 9 10:08:55.596254 kubelet[1981]: I0209 10:08:55.596220 1981 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 9 10:08:55.596254 kubelet[1981]: I0209 10:08:55.596245 1981 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 9 10:08:55.596254 kubelet[1981]: I0209 10:08:55.596263 1981 state_mem.go:36] "Initialized new in-memory state store" Feb 9 10:08:55.596414 kubelet[1981]: I0209 10:08:55.596401 1981 state_mem.go:88] "Updated default CPUSet" cpuSet="" Feb 9 10:08:55.596439 kubelet[1981]: I0209 10:08:55.596421 1981 state_mem.go:96] "Updated CPUSet assignments" assignments={} Feb 9 10:08:55.596439 kubelet[1981]: I0209 10:08:55.596428 1981 policy_none.go:49] "None policy: Start" Feb 9 10:08:55.597075 kubelet[1981]: I0209 10:08:55.597053 1981 memory_manager.go:169] "Starting memorymanager" policy="None" Feb 9 10:08:55.597075 kubelet[1981]: I0209 10:08:55.597079 1981 state_mem.go:35] "Initializing new in-memory state store" Feb 9 10:08:55.597210 kubelet[1981]: I0209 10:08:55.597195 1981 state_mem.go:75] "Updated machine memory state" Feb 9 10:08:55.600708 kubelet[1981]: I0209 10:08:55.600685 1981 manager.go:471] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 9 10:08:55.600943 kubelet[1981]: I0209 10:08:55.600918 1981 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 9 10:08:55.635123 kubelet[1981]: I0209 10:08:55.635044 1981 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Feb 9 10:08:55.636300 sudo[2012]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Feb 9 10:08:55.636496 sudo[2012]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0) Feb 9 10:08:55.642786 kubelet[1981]: I0209 10:08:55.642754 1981 kubelet_node_status.go:108] "Node was previously registered" node="localhost" Feb 9 10:08:55.642869 kubelet[1981]: I0209 10:08:55.642858 1981 kubelet_node_status.go:73] "Successfully registered node" node="localhost" Feb 9 10:08:55.660830 kubelet[1981]: I0209 10:08:55.660797 1981 topology_manager.go:215] "Topology Admit Handler" podUID="463ea25710b7eab370f98da677afb9fd" podNamespace="kube-system" podName="kube-apiserver-localhost" Feb 9 10:08:55.661038 kubelet[1981]: I0209 10:08:55.661018 1981 topology_manager.go:215] "Topology Admit Handler" podUID="212dcc5e2f08bec92c239ac5786b7e2b" podNamespace="kube-system" podName="kube-controller-manager-localhost" Feb 9 10:08:55.661159 kubelet[1981]: I0209 10:08:55.661144 1981 topology_manager.go:215] "Topology Admit Handler" podUID="d0325d16aab19669b5fea4b6623890e6" podNamespace="kube-system" podName="kube-scheduler-localhost" Feb 9 10:08:55.667487 kubelet[1981]: E0209 10:08:55.667406 1981 kubelet.go:1890] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Feb 9 10:08:55.731676 kubelet[1981]: I0209 10:08:55.731646 1981 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/463ea25710b7eab370f98da677afb9fd-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"463ea25710b7eab370f98da677afb9fd\") " pod="kube-system/kube-apiserver-localhost" Feb 9 10:08:55.731801 kubelet[1981]: I0209 10:08:55.731696 1981 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/212dcc5e2f08bec92c239ac5786b7e2b-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"212dcc5e2f08bec92c239ac5786b7e2b\") " pod="kube-system/kube-controller-manager-localhost" Feb 9 10:08:55.731801 kubelet[1981]: I0209 10:08:55.731722 1981 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/212dcc5e2f08bec92c239ac5786b7e2b-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"212dcc5e2f08bec92c239ac5786b7e2b\") " pod="kube-system/kube-controller-manager-localhost" Feb 9 10:08:55.731801 kubelet[1981]: I0209 10:08:55.731742 1981 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d0325d16aab19669b5fea4b6623890e6-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"d0325d16aab19669b5fea4b6623890e6\") " pod="kube-system/kube-scheduler-localhost" Feb 9 10:08:55.731801 kubelet[1981]: I0209 10:08:55.731774 1981 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/212dcc5e2f08bec92c239ac5786b7e2b-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"212dcc5e2f08bec92c239ac5786b7e2b\") " pod="kube-system/kube-controller-manager-localhost" Feb 9 10:08:55.731928 kubelet[1981]: I0209 10:08:55.731827 1981 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/463ea25710b7eab370f98da677afb9fd-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"463ea25710b7eab370f98da677afb9fd\") " pod="kube-system/kube-apiserver-localhost" Feb 9 10:08:55.731928 kubelet[1981]: I0209 10:08:55.731865 1981 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/463ea25710b7eab370f98da677afb9fd-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"463ea25710b7eab370f98da677afb9fd\") " pod="kube-system/kube-apiserver-localhost" Feb 9 10:08:55.731928 kubelet[1981]: I0209 10:08:55.731884 1981 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/212dcc5e2f08bec92c239ac5786b7e2b-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"212dcc5e2f08bec92c239ac5786b7e2b\") " pod="kube-system/kube-controller-manager-localhost" Feb 9 10:08:55.731928 kubelet[1981]: I0209 10:08:55.731901 1981 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/212dcc5e2f08bec92c239ac5786b7e2b-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"212dcc5e2f08bec92c239ac5786b7e2b\") " pod="kube-system/kube-controller-manager-localhost" Feb 9 10:08:55.968568 kubelet[1981]: E0209 10:08:55.968467 
1981 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 10:08:55.968568 kubelet[1981]: E0209 10:08:55.968552 1981 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 10:08:55.968722 kubelet[1981]: E0209 10:08:55.968706 1981 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 10:08:56.107677 sudo[2012]: pam_unix(sudo:session): session closed for user root Feb 9 10:08:56.526076 kubelet[1981]: I0209 10:08:56.526030 1981 apiserver.go:52] "Watching apiserver" Feb 9 10:08:56.530947 kubelet[1981]: I0209 10:08:56.530923 1981 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Feb 9 10:08:56.577265 kubelet[1981]: E0209 10:08:56.577228 1981 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 10:08:56.577783 kubelet[1981]: E0209 10:08:56.577766 1981 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 10:08:56.584041 kubelet[1981]: E0209 10:08:56.583973 1981 kubelet.go:1890] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Feb 9 10:08:56.584448 kubelet[1981]: E0209 10:08:56.584425 1981 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 10:08:56.605911 kubelet[1981]: I0209 10:08:56.605881 1981 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.6058371500000002 podCreationTimestamp="2024-02-09 10:08:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 10:08:56.598411749 +0000 UTC m=+1.129041769" watchObservedRunningTime="2024-02-09 10:08:56.60583715 +0000 UTC m=+1.136467130" Feb 9 10:08:56.613206 kubelet[1981]: I0209 10:08:56.613176 1981 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=3.613143172 podCreationTimestamp="2024-02-09 10:08:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 10:08:56.606170857 +0000 UTC m=+1.136800877" watchObservedRunningTime="2024-02-09 10:08:56.613143172 +0000 UTC m=+1.143773192" Feb 9 10:08:56.613369 kubelet[1981]: I0209 10:08:56.613355 1981 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.613338111 podCreationTimestamp="2024-02-09 10:08:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 10:08:56.612403114 +0000 UTC m=+1.143033134" watchObservedRunningTime="2024-02-09 10:08:56.613338111 +0000 UTC m=+1.143968091" Feb 9 10:08:57.579336 kubelet[1981]: E0209 
10:08:57.579303 1981 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 10:08:57.873439 sudo[1236]: pam_unix(sudo:session): session closed for user root Feb 9 10:08:57.875094 sshd[1232]: pam_unix(sshd:session): session closed for user core Feb 9 10:08:57.877842 systemd[1]: sshd@4-10.0.0.132:22-10.0.0.1:59196.service: Deactivated successfully. Feb 9 10:08:57.878698 systemd[1]: session-5.scope: Deactivated successfully. Feb 9 10:08:57.878930 systemd[1]: session-5.scope: Consumed 6.656s CPU time. Feb 9 10:08:57.879371 systemd-logind[1130]: Session 5 logged out. Waiting for processes to exit. Feb 9 10:08:57.880602 systemd-logind[1130]: Removed session 5. Feb 9 10:09:02.388952 kubelet[1981]: E0209 10:09:02.388924 1981 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 10:09:02.586719 kubelet[1981]: E0209 10:09:02.586693 1981 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 10:09:04.102878 kubelet[1981]: E0209 10:09:04.102804 1981 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 10:09:04.590666 kubelet[1981]: E0209 10:09:04.590633 1981 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 10:09:05.807790 kubelet[1981]: E0209 10:09:05.807760 1981 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 10:09:06.593832 kubelet[1981]: E0209 10:09:06.593791 1981 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 10:09:08.817741 kubelet[1981]: I0209 10:09:08.817697 1981 kuberuntime_manager.go:1463] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Feb 9 10:09:08.818112 env[1141]: time="2024-02-09T10:09:08.818063278Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Feb 9 10:09:08.818299 kubelet[1981]: I0209 10:09:08.818242 1981 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Feb 9 10:09:09.553374 kubelet[1981]: I0209 10:09:09.553323 1981 topology_manager.go:215] "Topology Admit Handler" podUID="ff5af69b-c317-4b44-be48-a07d40addb5a" podNamespace="kube-system" podName="kube-proxy-bzh5n" Feb 9 10:09:09.555556 kubelet[1981]: I0209 10:09:09.555528 1981 topology_manager.go:215] "Topology Admit Handler" podUID="6d665afb-7ae4-4070-9f31-7e377210da75" podNamespace="kube-system" podName="cilium-8gmq2" Feb 9 10:09:09.558757 systemd[1]: Created slice kubepods-besteffort-podff5af69b_c317_4b44_be48_a07d40addb5a.slice. Feb 9 10:09:09.569568 systemd[1]: Created slice kubepods-burstable-pod6d665afb_7ae4_4070_9f31_7e377210da75.slice. 
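[annotation] The "Nameserver limits exceeded" errors that recur throughout this log are cosmetic: glibc-based resolvers only honour the first three nameserver entries, so the kubelet trims the host's /etc/resolv.conf when deriving pod DNS config and logs the line it kept. The quoted applied line implies a host file along these lines (reconstructed; only the three surviving entries appear in the log):

    # /etc/resolv.conf on the host, one nameserver over the limit
    nameserver 1.1.1.1
    nameserver 1.0.0.1
    nameserver 8.8.8.8
    nameserver 8.8.4.4    # hypothetical surplus entry that got dropped

Trimming the host file to three entries silences the warning.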
Feb 9 10:09:09.634650 kubelet[1981]: I0209 10:09:09.634602 1981 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/6d665afb-7ae4-4070-9f31-7e377210da75-hubble-tls\") pod \"cilium-8gmq2\" (UID: \"6d665afb-7ae4-4070-9f31-7e377210da75\") " pod="kube-system/cilium-8gmq2" Feb 9 10:09:09.634782 kubelet[1981]: I0209 10:09:09.634663 1981 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/6d665afb-7ae4-4070-9f31-7e377210da75-cilium-run\") pod \"cilium-8gmq2\" (UID: \"6d665afb-7ae4-4070-9f31-7e377210da75\") " pod="kube-system/cilium-8gmq2" Feb 9 10:09:09.634782 kubelet[1981]: I0209 10:09:09.634686 1981 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/6d665afb-7ae4-4070-9f31-7e377210da75-cni-path\") pod \"cilium-8gmq2\" (UID: \"6d665afb-7ae4-4070-9f31-7e377210da75\") " pod="kube-system/cilium-8gmq2" Feb 9 10:09:09.634782 kubelet[1981]: I0209 10:09:09.634705 1981 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6d665afb-7ae4-4070-9f31-7e377210da75-xtables-lock\") pod \"cilium-8gmq2\" (UID: \"6d665afb-7ae4-4070-9f31-7e377210da75\") " pod="kube-system/cilium-8gmq2" Feb 9 10:09:09.634900 kubelet[1981]: I0209 10:09:09.634779 1981 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/ff5af69b-c317-4b44-be48-a07d40addb5a-kube-proxy\") pod \"kube-proxy-bzh5n\" (UID: \"ff5af69b-c317-4b44-be48-a07d40addb5a\") " pod="kube-system/kube-proxy-bzh5n" Feb 9 10:09:09.634900 kubelet[1981]: I0209 10:09:09.634806 1981 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ff5af69b-c317-4b44-be48-a07d40addb5a-lib-modules\") pod \"kube-proxy-bzh5n\" (UID: \"ff5af69b-c317-4b44-be48-a07d40addb5a\") " pod="kube-system/kube-proxy-bzh5n" Feb 9 10:09:09.634900 kubelet[1981]: I0209 10:09:09.634834 1981 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/6d665afb-7ae4-4070-9f31-7e377210da75-cilium-cgroup\") pod \"cilium-8gmq2\" (UID: \"6d665afb-7ae4-4070-9f31-7e377210da75\") " pod="kube-system/cilium-8gmq2" Feb 9 10:09:09.634900 kubelet[1981]: I0209 10:09:09.634857 1981 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/6d665afb-7ae4-4070-9f31-7e377210da75-host-proc-sys-kernel\") pod \"cilium-8gmq2\" (UID: \"6d665afb-7ae4-4070-9f31-7e377210da75\") " pod="kube-system/cilium-8gmq2" Feb 9 10:09:09.635039 kubelet[1981]: I0209 10:09:09.634907 1981 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6d665afb-7ae4-4070-9f31-7e377210da75-lib-modules\") pod \"cilium-8gmq2\" (UID: \"6d665afb-7ae4-4070-9f31-7e377210da75\") " pod="kube-system/cilium-8gmq2" Feb 9 10:09:09.635039 kubelet[1981]: I0209 10:09:09.634940 1981 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: 
\"kubernetes.io/host-path/6d665afb-7ae4-4070-9f31-7e377210da75-bpf-maps\") pod \"cilium-8gmq2\" (UID: \"6d665afb-7ae4-4070-9f31-7e377210da75\") " pod="kube-system/cilium-8gmq2" Feb 9 10:09:09.635039 kubelet[1981]: I0209 10:09:09.634961 1981 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6d665afb-7ae4-4070-9f31-7e377210da75-cilium-config-path\") pod \"cilium-8gmq2\" (UID: \"6d665afb-7ae4-4070-9f31-7e377210da75\") " pod="kube-system/cilium-8gmq2" Feb 9 10:09:09.635039 kubelet[1981]: I0209 10:09:09.634985 1981 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x62td\" (UniqueName: \"kubernetes.io/projected/6d665afb-7ae4-4070-9f31-7e377210da75-kube-api-access-x62td\") pod \"cilium-8gmq2\" (UID: \"6d665afb-7ae4-4070-9f31-7e377210da75\") " pod="kube-system/cilium-8gmq2" Feb 9 10:09:09.635039 kubelet[1981]: I0209 10:09:09.635009 1981 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6d665afb-7ae4-4070-9f31-7e377210da75-etc-cni-netd\") pod \"cilium-8gmq2\" (UID: \"6d665afb-7ae4-4070-9f31-7e377210da75\") " pod="kube-system/cilium-8gmq2" Feb 9 10:09:09.635039 kubelet[1981]: I0209 10:09:09.635036 1981 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/6d665afb-7ae4-4070-9f31-7e377210da75-clustermesh-secrets\") pod \"cilium-8gmq2\" (UID: \"6d665afb-7ae4-4070-9f31-7e377210da75\") " pod="kube-system/cilium-8gmq2" Feb 9 10:09:09.635177 kubelet[1981]: I0209 10:09:09.635058 1981 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ff5af69b-c317-4b44-be48-a07d40addb5a-xtables-lock\") pod \"kube-proxy-bzh5n\" (UID: \"ff5af69b-c317-4b44-be48-a07d40addb5a\") " pod="kube-system/kube-proxy-bzh5n" Feb 9 10:09:09.635177 kubelet[1981]: I0209 10:09:09.635087 1981 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jmdv8\" (UniqueName: \"kubernetes.io/projected/ff5af69b-c317-4b44-be48-a07d40addb5a-kube-api-access-jmdv8\") pod \"kube-proxy-bzh5n\" (UID: \"ff5af69b-c317-4b44-be48-a07d40addb5a\") " pod="kube-system/kube-proxy-bzh5n" Feb 9 10:09:09.635177 kubelet[1981]: I0209 10:09:09.635119 1981 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/6d665afb-7ae4-4070-9f31-7e377210da75-hostproc\") pod \"cilium-8gmq2\" (UID: \"6d665afb-7ae4-4070-9f31-7e377210da75\") " pod="kube-system/cilium-8gmq2" Feb 9 10:09:09.635177 kubelet[1981]: I0209 10:09:09.635139 1981 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/6d665afb-7ae4-4070-9f31-7e377210da75-host-proc-sys-net\") pod \"cilium-8gmq2\" (UID: \"6d665afb-7ae4-4070-9f31-7e377210da75\") " pod="kube-system/cilium-8gmq2" Feb 9 10:09:09.798736 kubelet[1981]: I0209 10:09:09.798679 1981 topology_manager.go:215] "Topology Admit Handler" podUID="ed9385fe-b3dd-44b1-9be6-cac614e5c5fe" podNamespace="kube-system" podName="cilium-operator-6bc8ccdb58-28fxx" Feb 9 10:09:09.803780 systemd[1]: Created slice 
kubepods-besteffort-poded9385fe_b3dd_44b1_9be6_cac614e5c5fe.slice. Feb 9 10:09:09.836416 kubelet[1981]: I0209 10:09:09.836362 1981 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ed9385fe-b3dd-44b1-9be6-cac614e5c5fe-cilium-config-path\") pod \"cilium-operator-6bc8ccdb58-28fxx\" (UID: \"ed9385fe-b3dd-44b1-9be6-cac614e5c5fe\") " pod="kube-system/cilium-operator-6bc8ccdb58-28fxx" Feb 9 10:09:09.836416 kubelet[1981]: I0209 10:09:09.836422 1981 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lmqbn\" (UniqueName: \"kubernetes.io/projected/ed9385fe-b3dd-44b1-9be6-cac614e5c5fe-kube-api-access-lmqbn\") pod \"cilium-operator-6bc8ccdb58-28fxx\" (UID: \"ed9385fe-b3dd-44b1-9be6-cac614e5c5fe\") " pod="kube-system/cilium-operator-6bc8ccdb58-28fxx" Feb 9 10:09:09.868087 kubelet[1981]: E0209 10:09:09.868056 1981 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 10:09:09.868885 env[1141]: time="2024-02-09T10:09:09.868844689Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-bzh5n,Uid:ff5af69b-c317-4b44-be48-a07d40addb5a,Namespace:kube-system,Attempt:0,}" Feb 9 10:09:09.872365 kubelet[1981]: E0209 10:09:09.872342 1981 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 10:09:09.873181 env[1141]: time="2024-02-09T10:09:09.872958999Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-8gmq2,Uid:6d665afb-7ae4-4070-9f31-7e377210da75,Namespace:kube-system,Attempt:0,}" Feb 9 10:09:09.894797 env[1141]: time="2024-02-09T10:09:09.894417306Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 10:09:09.894797 env[1141]: time="2024-02-09T10:09:09.894468548Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 10:09:09.894797 env[1141]: time="2024-02-09T10:09:09.894478788Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 10:09:09.895191 env[1141]: time="2024-02-09T10:09:09.895143899Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/3f41d989e7fa753089fde353ea217fd05efb294d878c432c5d6776e39954306a pid=2076 runtime=io.containerd.runc.v2 Feb 9 10:09:09.898627 env[1141]: time="2024-02-09T10:09:09.898348206Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 10:09:09.898627 env[1141]: time="2024-02-09T10:09:09.898390688Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 10:09:09.898627 env[1141]: time="2024-02-09T10:09:09.898400849Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 10:09:09.898627 env[1141]: time="2024-02-09T10:09:09.898571017Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/9d39c1694fcb4520398649ca5f523dccea2f4b15bcfc2f0cfe525f2730f1c00b pid=2094 runtime=io.containerd.runc.v2 Feb 9 10:09:09.906560 systemd[1]: Started cri-containerd-3f41d989e7fa753089fde353ea217fd05efb294d878c432c5d6776e39954306a.scope. Feb 9 10:09:09.913193 systemd[1]: Started cri-containerd-9d39c1694fcb4520398649ca5f523dccea2f4b15bcfc2f0cfe525f2730f1c00b.scope. Feb 9 10:09:09.960252 env[1141]: time="2024-02-09T10:09:09.960206292Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-bzh5n,Uid:ff5af69b-c317-4b44-be48-a07d40addb5a,Namespace:kube-system,Attempt:0,} returns sandbox id \"3f41d989e7fa753089fde353ea217fd05efb294d878c432c5d6776e39954306a\"" Feb 9 10:09:09.960956 kubelet[1981]: E0209 10:09:09.960935 1981 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 10:09:09.963470 env[1141]: time="2024-02-09T10:09:09.963418519Z" level=info msg="CreateContainer within sandbox \"3f41d989e7fa753089fde353ea217fd05efb294d878c432c5d6776e39954306a\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Feb 9 10:09:09.964432 update_engine[1133]: I0209 10:09:09.964404 1133 update_attempter.cc:509] Updating boot flags... Feb 9 10:09:09.969809 env[1141]: time="2024-02-09T10:09:09.969776772Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-8gmq2,Uid:6d665afb-7ae4-4070-9f31-7e377210da75,Namespace:kube-system,Attempt:0,} returns sandbox id \"9d39c1694fcb4520398649ca5f523dccea2f4b15bcfc2f0cfe525f2730f1c00b\"" Feb 9 10:09:09.970761 kubelet[1981]: E0209 10:09:09.970554 1981 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 10:09:09.971733 env[1141]: time="2024-02-09T10:09:09.971705020Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Feb 9 10:09:09.988334 env[1141]: time="2024-02-09T10:09:09.988223820Z" level=info msg="CreateContainer within sandbox \"3f41d989e7fa753089fde353ea217fd05efb294d878c432c5d6776e39954306a\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"17bc9868b0eb9d504adbf19868349321fef225e725f01fa3254f29d207ebcb74\"" Feb 9 10:09:09.989105 env[1141]: time="2024-02-09T10:09:09.989058539Z" level=info msg="StartContainer for \"17bc9868b0eb9d504adbf19868349321fef225e725f01fa3254f29d207ebcb74\"" Feb 9 10:09:10.022026 systemd[1]: Started cri-containerd-17bc9868b0eb9d504adbf19868349321fef225e725f01fa3254f29d207ebcb74.scope. 
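
Each reconciler_common.go:258 entry above identifies a volume by a UniqueName of the form <plugin>/<pod-UID>-<volume-name>. The pod UID is a fixed-width 36-character UUID, which is what makes the trailing split unambiguous; a sketch under that assumption (the format is inferred from these log lines, not a documented API):

package main

import (
	"fmt"
	"strings"
)

// parseUniqueName splits a kubelet volume UniqueName such as
// "kubernetes.io/host-path/<pod-uid>-<volume-name>" into its parts,
// relying on the UUID's fixed 36-character width.
func parseUniqueName(u string) (plugin, podUID, volume string, err error) {
	i := strings.LastIndex(u, "/")
	if i < 0 || len(u)-(i+1) < 37 {
		return "", "", "", fmt.Errorf("unexpected UniqueName: %q", u)
	}
	rest := u[i+1:]
	return u[:i], rest[:36], rest[37:], nil
}

func main() {
	p, uid, vol, err := parseUniqueName(
		"kubernetes.io/host-path/6d665afb-7ae4-4070-9f31-7e377210da75-cilium-run")
	if err != nil {
		panic(err)
	}
	fmt.Println(p, uid, vol) // kubernetes.io/host-path 6d665afb-... cilium-run
}
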
Feb 9 10:09:10.082099 env[1141]: time="2024-02-09T10:09:10.081275000Z" level=info msg="StartContainer for \"17bc9868b0eb9d504adbf19868349321fef225e725f01fa3254f29d207ebcb74\" returns successfully" Feb 9 10:09:10.105624 kubelet[1981]: E0209 10:09:10.105584 1981 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 10:09:10.107436 env[1141]: time="2024-02-09T10:09:10.106118807Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6bc8ccdb58-28fxx,Uid:ed9385fe-b3dd-44b1-9be6-cac614e5c5fe,Namespace:kube-system,Attempt:0,}" Feb 9 10:09:10.121162 env[1141]: time="2024-02-09T10:09:10.121093102Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 10:09:10.121285 env[1141]: time="2024-02-09T10:09:10.121176465Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 10:09:10.121285 env[1141]: time="2024-02-09T10:09:10.121202267Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 10:09:10.121392 env[1141]: time="2024-02-09T10:09:10.121363394Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/f7da83368ee3acbeee8ef5eefef9b43ac416b8b553f71044a327ca5b60e3c93b pid=2209 runtime=io.containerd.runc.v2 Feb 9 10:09:10.132526 systemd[1]: Started cri-containerd-f7da83368ee3acbeee8ef5eefef9b43ac416b8b553f71044a327ca5b60e3c93b.scope. Feb 9 10:09:10.203242 env[1141]: time="2024-02-09T10:09:10.203187092Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6bc8ccdb58-28fxx,Uid:ed9385fe-b3dd-44b1-9be6-cac614e5c5fe,Namespace:kube-system,Attempt:0,} returns sandbox id \"f7da83368ee3acbeee8ef5eefef9b43ac416b8b553f71044a327ca5b60e3c93b\"" Feb 9 10:09:10.203883 kubelet[1981]: E0209 10:09:10.203856 1981 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 10:09:10.606161 kubelet[1981]: E0209 10:09:10.606127 1981 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 10:09:10.613845 kubelet[1981]: I0209 10:09:10.613595 1981 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-bzh5n" podStartSLOduration=1.613563919 podCreationTimestamp="2024-02-09 10:09:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 10:09:10.613099699 +0000 UTC m=+15.143729719" watchObservedRunningTime="2024-02-09 10:09:10.613563919 +0000 UTC m=+15.144193939" Feb 9 10:09:13.302549 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2204578904.mount: Deactivated successfully. 
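
The pod_startup_latency_tracker line above derives podStartSLOduration from the gap between podCreationTimestamp and the observed running time (the zero-value pull timestamps mean no image pull contributed). A sketch of the same arithmetic on the quoted timestamps, assuming Go's reference layout for the "+0000 UTC" format and stripping the monotonic "m=+…" suffix kubelet appends:

package main

import (
	"fmt"
	"strings"
	"time"
)

// layout matches timestamps like "2024-02-09 10:09:09 +0000 UTC";
// Go accepts an optional fractional-seconds field when parsing.
const layout = "2006-01-02 15:04:05 -0700 MST"

func parseK8sTime(s string) (time.Time, error) {
	// drop the monotonic-clock suffix, e.g. " m=+15.143729719"
	if i := strings.Index(s, " m="); i >= 0 {
		s = s[:i]
	}
	return time.Parse(layout, s)
}

func main() {
	created, err := parseK8sTime("2024-02-09 10:09:09 +0000 UTC")
	if err != nil {
		panic(err)
	}
	running, err := parseK8sTime("2024-02-09 10:09:10.613099699 +0000 UTC m=+15.143729719")
	if err != nil {
		panic(err)
	}
	fmt.Println(running.Sub(created)) // ≈1.613s, matching podStartSLOduration above
}
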
Feb 9 10:09:15.593580 env[1141]: time="2024-02-09T10:09:15.593534609Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 10:09:15.595288 env[1141]: time="2024-02-09T10:09:15.595248348Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 10:09:15.597223 env[1141]: time="2024-02-09T10:09:15.597191975Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 10:09:15.597756 env[1141]: time="2024-02-09T10:09:15.597723753Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Feb 9 10:09:15.600447 env[1141]: time="2024-02-09T10:09:15.600415126Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Feb 9 10:09:15.603491 env[1141]: time="2024-02-09T10:09:15.603460790Z" level=info msg="CreateContainer within sandbox \"9d39c1694fcb4520398649ca5f523dccea2f4b15bcfc2f0cfe525f2730f1c00b\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 9 10:09:15.614683 env[1141]: time="2024-02-09T10:09:15.614649335Z" level=info msg="CreateContainer within sandbox \"9d39c1694fcb4520398649ca5f523dccea2f4b15bcfc2f0cfe525f2730f1c00b\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"f3069c82281bbfb9575898ae068c6c76571ef2868bd1408636e1904fd348fcfa\"" Feb 9 10:09:15.615126 env[1141]: time="2024-02-09T10:09:15.615103031Z" level=info msg="StartContainer for \"f3069c82281bbfb9575898ae068c6c76571ef2868bd1408636e1904fd348fcfa\"" Feb 9 10:09:15.637790 systemd[1]: run-containerd-runc-k8s.io-f3069c82281bbfb9575898ae068c6c76571ef2868bd1408636e1904fd348fcfa-runc.x3k0Gj.mount: Deactivated successfully. Feb 9 10:09:15.639476 systemd[1]: Started cri-containerd-f3069c82281bbfb9575898ae068c6c76571ef2868bd1408636e1904fd348fcfa.scope. Feb 9 10:09:15.693250 kubelet[1981]: E0209 10:09:15.692771 1981 cadvisor_stats_provider.go:444] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6d665afb_7ae4_4070_9f31_7e377210da75.slice/cri-containerd-f3069c82281bbfb9575898ae068c6c76571ef2868bd1408636e1904fd348fcfa.scope\": RecentStats: unable to find data in memory cache]" Feb 9 10:09:15.693852 env[1141]: time="2024-02-09T10:09:15.693609009Z" level=info msg="StartContainer for \"f3069c82281bbfb9575898ae068c6c76571ef2868bd1408636e1904fd348fcfa\" returns successfully" Feb 9 10:09:15.728493 systemd[1]: cri-containerd-f3069c82281bbfb9575898ae068c6c76571ef2868bd1408636e1904fd348fcfa.scope: Deactivated successfully. 
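
PullImage above was given a combined tag-and-digest reference (repo:tag@sha256:…) and returned the locally resolved image ID. A sketch that splits such a reference into repository, tag, and digest, assuming the usual name[:tag][@digest] grammar:

package main

import (
	"fmt"
	"strings"
)

// splitRef breaks "repo[:tag][@digest]" apart. A ':' only counts as a
// tag separator when it appears after the last '/', so registry ports
// like "localhost:5000/img" are not mistaken for tags.
func splitRef(ref string) (repo, tag, digest string) {
	if i := strings.Index(ref, "@"); i >= 0 {
		digest, ref = ref[i+1:], ref[:i]
	}
	if i := strings.LastIndex(ref, ":"); i > strings.LastIndex(ref, "/") {
		tag, ref = ref[i+1:], ref[:i]
	}
	return ref, tag, digest
}

func main() {
	repo, tag, digest := splitRef(
		"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5")
	fmt.Println(repo)   // quay.io/cilium/cilium
	fmt.Println(tag)    // v1.12.5
	fmt.Println(digest) // sha256:06ce2b0a...
}
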
Feb 9 10:09:15.837908 env[1141]: time="2024-02-09T10:09:15.837859608Z" level=info msg="shim disconnected" id=f3069c82281bbfb9575898ae068c6c76571ef2868bd1408636e1904fd348fcfa Feb 9 10:09:15.837908 env[1141]: time="2024-02-09T10:09:15.837904650Z" level=warning msg="cleaning up after shim disconnected" id=f3069c82281bbfb9575898ae068c6c76571ef2868bd1408636e1904fd348fcfa namespace=k8s.io Feb 9 10:09:15.837908 env[1141]: time="2024-02-09T10:09:15.837913970Z" level=info msg="cleaning up dead shim" Feb 9 10:09:15.847347 env[1141]: time="2024-02-09T10:09:15.846739194Z" level=warning msg="cleanup warnings time=\"2024-02-09T10:09:15Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2413 runtime=io.containerd.runc.v2\n" Feb 9 10:09:16.612117 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f3069c82281bbfb9575898ae068c6c76571ef2868bd1408636e1904fd348fcfa-rootfs.mount: Deactivated successfully. Feb 9 10:09:16.621857 kubelet[1981]: E0209 10:09:16.621821 1981 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 10:09:16.627631 env[1141]: time="2024-02-09T10:09:16.627094899Z" level=info msg="CreateContainer within sandbox \"9d39c1694fcb4520398649ca5f523dccea2f4b15bcfc2f0cfe525f2730f1c00b\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Feb 9 10:09:16.648635 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2387107794.mount: Deactivated successfully. Feb 9 10:09:16.652206 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2594875050.mount: Deactivated successfully. Feb 9 10:09:16.656360 env[1141]: time="2024-02-09T10:09:16.656303619Z" level=info msg="CreateContainer within sandbox \"9d39c1694fcb4520398649ca5f523dccea2f4b15bcfc2f0cfe525f2730f1c00b\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"5628beeeaa552f7f8bcf6542edb44b2fd5c496ff868a03afdba3107e6ed56854\"" Feb 9 10:09:16.658458 env[1141]: time="2024-02-09T10:09:16.658028155Z" level=info msg="StartContainer for \"5628beeeaa552f7f8bcf6542edb44b2fd5c496ff868a03afdba3107e6ed56854\"" Feb 9 10:09:16.672752 systemd[1]: Started cri-containerd-5628beeeaa552f7f8bcf6542edb44b2fd5c496ff868a03afdba3107e6ed56854.scope. Feb 9 10:09:16.739171 env[1141]: time="2024-02-09T10:09:16.738963293Z" level=info msg="StartContainer for \"5628beeeaa552f7f8bcf6542edb44b2fd5c496ff868a03afdba3107e6ed56854\" returns successfully" Feb 9 10:09:16.743632 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 9 10:09:16.743831 systemd[1]: Stopped systemd-sysctl.service. Feb 9 10:09:16.743997 systemd[1]: Stopping systemd-sysctl.service... Feb 9 10:09:16.745876 systemd[1]: Starting systemd-sysctl.service... Feb 9 10:09:16.749187 systemd[1]: cri-containerd-5628beeeaa552f7f8bcf6542edb44b2fd5c496ff868a03afdba3107e6ed56854.scope: Deactivated successfully. Feb 9 10:09:16.755454 systemd[1]: Finished systemd-sysctl.service. 
Feb 9 10:09:16.783167 env[1141]: time="2024-02-09T10:09:16.783121024Z" level=info msg="shim disconnected" id=5628beeeaa552f7f8bcf6542edb44b2fd5c496ff868a03afdba3107e6ed56854 Feb 9 10:09:16.783167 env[1141]: time="2024-02-09T10:09:16.783168385Z" level=warning msg="cleaning up after shim disconnected" id=5628beeeaa552f7f8bcf6542edb44b2fd5c496ff868a03afdba3107e6ed56854 namespace=k8s.io Feb 9 10:09:16.783374 env[1141]: time="2024-02-09T10:09:16.783179626Z" level=info msg="cleaning up dead shim" Feb 9 10:09:16.789284 env[1141]: time="2024-02-09T10:09:16.789245905Z" level=warning msg="cleanup warnings time=\"2024-02-09T10:09:16Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2477 runtime=io.containerd.runc.v2\n" Feb 9 10:09:17.098326 env[1141]: time="2024-02-09T10:09:17.098272154Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 10:09:17.099986 env[1141]: time="2024-02-09T10:09:17.099952767Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 10:09:17.105140 env[1141]: time="2024-02-09T10:09:17.105106928Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 10:09:17.105537 env[1141]: time="2024-02-09T10:09:17.105500021Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Feb 9 10:09:17.108618 env[1141]: time="2024-02-09T10:09:17.108120623Z" level=info msg="CreateContainer within sandbox \"f7da83368ee3acbeee8ef5eefef9b43ac416b8b553f71044a327ca5b60e3c93b\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Feb 9 10:09:17.118693 env[1141]: time="2024-02-09T10:09:17.118661994Z" level=info msg="CreateContainer within sandbox \"f7da83368ee3acbeee8ef5eefef9b43ac416b8b553f71044a327ca5b60e3c93b\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"e377ac7637f5a81d8bf76296c60c67510a4b203886a8bc58b5c9924de4a67734\"" Feb 9 10:09:17.119229 env[1141]: time="2024-02-09T10:09:17.119159970Z" level=info msg="StartContainer for \"e377ac7637f5a81d8bf76296c60c67510a4b203886a8bc58b5c9924de4a67734\"" Feb 9 10:09:17.132942 systemd[1]: Started cri-containerd-e377ac7637f5a81d8bf76296c60c67510a4b203886a8bc58b5c9924de4a67734.scope. 
Feb 9 10:09:17.181745 env[1141]: time="2024-02-09T10:09:17.181704774Z" level=info msg="StartContainer for \"e377ac7637f5a81d8bf76296c60c67510a4b203886a8bc58b5c9924de4a67734\" returns successfully" Feb 9 10:09:17.622443 kubelet[1981]: E0209 10:09:17.622414 1981 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 10:09:17.624584 kubelet[1981]: E0209 10:09:17.624561 1981 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 10:09:17.626731 env[1141]: time="2024-02-09T10:09:17.626464941Z" level=info msg="CreateContainer within sandbox \"9d39c1694fcb4520398649ca5f523dccea2f4b15bcfc2f0cfe525f2730f1c00b\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Feb 9 10:09:17.645306 env[1141]: time="2024-02-09T10:09:17.645258011Z" level=info msg="CreateContainer within sandbox \"9d39c1694fcb4520398649ca5f523dccea2f4b15bcfc2f0cfe525f2730f1c00b\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"53a859fc31a390adc4cf5ccfb20d18f4d09e54fdbb39843601c6179a01ef983b\"" Feb 9 10:09:17.646472 env[1141]: time="2024-02-09T10:09:17.646440808Z" level=info msg="StartContainer for \"53a859fc31a390adc4cf5ccfb20d18f4d09e54fdbb39843601c6179a01ef983b\"" Feb 9 10:09:17.657474 kubelet[1981]: I0209 10:09:17.657441 1981 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-6bc8ccdb58-28fxx" podStartSLOduration=1.7563366409999999 podCreationTimestamp="2024-02-09 10:09:09 +0000 UTC" firstStartedPulling="2024-02-09 10:09:10.204650636 +0000 UTC m=+14.735280656" lastFinishedPulling="2024-02-09 10:09:17.105708747 +0000 UTC m=+21.636338767" observedRunningTime="2024-02-09 10:09:17.63885953 +0000 UTC m=+22.169489550" watchObservedRunningTime="2024-02-09 10:09:17.657394752 +0000 UTC m=+22.188024732" Feb 9 10:09:17.668648 systemd[1]: Started cri-containerd-53a859fc31a390adc4cf5ccfb20d18f4d09e54fdbb39843601c6179a01ef983b.scope. Feb 9 10:09:17.726288 env[1141]: time="2024-02-09T10:09:17.726238914Z" level=info msg="StartContainer for \"53a859fc31a390adc4cf5ccfb20d18f4d09e54fdbb39843601c6179a01ef983b\" returns successfully" Feb 9 10:09:17.747881 systemd[1]: cri-containerd-53a859fc31a390adc4cf5ccfb20d18f4d09e54fdbb39843601c6179a01ef983b.scope: Deactivated successfully. Feb 9 10:09:17.812959 env[1141]: time="2024-02-09T10:09:17.812900396Z" level=info msg="shim disconnected" id=53a859fc31a390adc4cf5ccfb20d18f4d09e54fdbb39843601c6179a01ef983b Feb 9 10:09:17.812959 env[1141]: time="2024-02-09T10:09:17.812952597Z" level=warning msg="cleaning up after shim disconnected" id=53a859fc31a390adc4cf5ccfb20d18f4d09e54fdbb39843601c6179a01ef983b namespace=k8s.io Feb 9 10:09:17.812959 env[1141]: time="2024-02-09T10:09:17.812962157Z" level=info msg="cleaning up dead shim" Feb 9 10:09:17.820018 env[1141]: time="2024-02-09T10:09:17.819979218Z" level=warning msg="cleanup warnings time=\"2024-02-09T10:09:17Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2577 runtime=io.containerd.runc.v2\n" Feb 9 10:09:18.611944 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-53a859fc31a390adc4cf5ccfb20d18f4d09e54fdbb39843601c6179a01ef983b-rootfs.mount: Deactivated successfully. 
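
The mount-bpf-fs init container that just ran is conventionally equivalent to `mount -t bpf bpffs /sys/fs/bpf`; a sketch of the same call via golang.org/x/sys/unix (this shows the standard bpffs mount, not Cilium's exact code, and omits the already-mounted check a real init container would do):

package main

import (
	"fmt"
	"os"

	"golang.org/x/sys/unix"
)

func main() {
	// Equivalent of: mount -t bpf bpffs /sys/fs/bpf (needs CAP_SYS_ADMIN).
	if err := unix.Mount("bpffs", "/sys/fs/bpf", "bpf", 0, ""); err != nil {
		fmt.Fprintln(os.Stderr, "mount bpffs:", err)
		os.Exit(1)
	}
}
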
Feb 9 10:09:18.631491 kubelet[1981]: E0209 10:09:18.631466 1981 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 10:09:18.631828 kubelet[1981]: E0209 10:09:18.631518 1981 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 10:09:18.633562 env[1141]: time="2024-02-09T10:09:18.633522472Z" level=info msg="CreateContainer within sandbox \"9d39c1694fcb4520398649ca5f523dccea2f4b15bcfc2f0cfe525f2730f1c00b\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Feb 9 10:09:18.652694 env[1141]: time="2024-02-09T10:09:18.652633606Z" level=info msg="CreateContainer within sandbox \"9d39c1694fcb4520398649ca5f523dccea2f4b15bcfc2f0cfe525f2730f1c00b\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"5b28e14d8b631d8dec238fcd3cd62d19b88b0a0a490a40a64d9ab70b9198ee31\"" Feb 9 10:09:18.653458 env[1141]: time="2024-02-09T10:09:18.653422190Z" level=info msg="StartContainer for \"5b28e14d8b631d8dec238fcd3cd62d19b88b0a0a490a40a64d9ab70b9198ee31\"" Feb 9 10:09:18.688269 systemd[1]: Started cri-containerd-5b28e14d8b631d8dec238fcd3cd62d19b88b0a0a490a40a64d9ab70b9198ee31.scope. Feb 9 10:09:18.722882 systemd[1]: cri-containerd-5b28e14d8b631d8dec238fcd3cd62d19b88b0a0a490a40a64d9ab70b9198ee31.scope: Deactivated successfully. Feb 9 10:09:18.726022 env[1141]: time="2024-02-09T10:09:18.725926249Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6d665afb_7ae4_4070_9f31_7e377210da75.slice/cri-containerd-5b28e14d8b631d8dec238fcd3cd62d19b88b0a0a490a40a64d9ab70b9198ee31.scope/memory.events\": no such file or directory" Feb 9 10:09:18.726606 env[1141]: time="2024-02-09T10:09:18.726567108Z" level=info msg="StartContainer for \"5b28e14d8b631d8dec238fcd3cd62d19b88b0a0a490a40a64d9ab70b9198ee31\" returns successfully" Feb 9 10:09:18.748129 env[1141]: time="2024-02-09T10:09:18.748082595Z" level=info msg="shim disconnected" id=5b28e14d8b631d8dec238fcd3cd62d19b88b0a0a490a40a64d9ab70b9198ee31 Feb 9 10:09:18.748129 env[1141]: time="2024-02-09T10:09:18.748126996Z" level=warning msg="cleaning up after shim disconnected" id=5b28e14d8b631d8dec238fcd3cd62d19b88b0a0a490a40a64d9ab70b9198ee31 namespace=k8s.io Feb 9 10:09:18.748316 env[1141]: time="2024-02-09T10:09:18.748138717Z" level=info msg="cleaning up dead shim" Feb 9 10:09:18.754872 env[1141]: time="2024-02-09T10:09:18.754836398Z" level=warning msg="cleanup warnings time=\"2024-02-09T10:09:18Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2630 runtime=io.containerd.runc.v2\n" Feb 9 10:09:19.617122 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5b28e14d8b631d8dec238fcd3cd62d19b88b0a0a490a40a64d9ab70b9198ee31-rootfs.mount: Deactivated successfully. 
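
The *cgroupsv2.Manager.EventChan warning above is benign: the clean-cilium-state scope exited before containerd could add an inotify watch on its memory.events file. Reading such a file once looks like the sketch below (cgroup v2 layout; the path is taken from the warning and only exists while the cgroup does):

package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	// Default path comes from the pod slice in the warning above;
	// pass a different cgroup directory's memory.events as argv[1].
	path := "/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/" +
		"kubepods-burstable-pod6d665afb_7ae4_4070_9f31_7e377210da75.slice/memory.events"
	if len(os.Args) > 1 {
		path = os.Args[1]
	}
	data, err := os.ReadFile(path)
	if err != nil {
		fmt.Fprintln(os.Stderr, err) // ENOENT once the cgroup is gone, as in the log
		os.Exit(1)
	}
	// memory.events is flat "key value" pairs: low, high, max, oom, oom_kill, ...
	for _, line := range strings.Split(strings.TrimSpace(string(data)), "\n") {
		if f := strings.Fields(line); len(f) == 2 {
			fmt.Printf("%-8s %s\n", f[0], f[1])
		}
	}
}
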
Feb 9 10:09:19.635584 kubelet[1981]: E0209 10:09:19.635557 1981 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 10:09:19.640934 env[1141]: time="2024-02-09T10:09:19.640883537Z" level=info msg="CreateContainer within sandbox \"9d39c1694fcb4520398649ca5f523dccea2f4b15bcfc2f0cfe525f2730f1c00b\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Feb 9 10:09:19.711170 env[1141]: time="2024-02-09T10:09:19.711125239Z" level=info msg="CreateContainer within sandbox \"9d39c1694fcb4520398649ca5f523dccea2f4b15bcfc2f0cfe525f2730f1c00b\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"c13a7ef8f3210a29a875f54183004dbe9958f4c8153764a3f844dfd6dca0696a\"" Feb 9 10:09:19.712479 env[1141]: time="2024-02-09T10:09:19.712199390Z" level=info msg="StartContainer for \"c13a7ef8f3210a29a875f54183004dbe9958f4c8153764a3f844dfd6dca0696a\"" Feb 9 10:09:19.729081 systemd[1]: Started cri-containerd-c13a7ef8f3210a29a875f54183004dbe9958f4c8153764a3f844dfd6dca0696a.scope. Feb 9 10:09:19.770281 env[1141]: time="2024-02-09T10:09:19.770231181Z" level=info msg="StartContainer for \"c13a7ef8f3210a29a875f54183004dbe9958f4c8153764a3f844dfd6dca0696a\" returns successfully" Feb 9 10:09:20.004672 kubelet[1981]: I0209 10:09:20.003860 1981 kubelet_node_status.go:493] "Fast updating node status as it just became ready" Feb 9 10:09:20.044068 kubelet[1981]: I0209 10:09:20.043206 1981 topology_manager.go:215] "Topology Admit Handler" podUID="1cee6ee8-4133-4962-afd3-7866493dd87e" podNamespace="kube-system" podName="coredns-5dd5756b68-cxmb4" Feb 9 10:09:20.044068 kubelet[1981]: I0209 10:09:20.043735 1981 topology_manager.go:215] "Topology Admit Handler" podUID="5fdefb04-2b88-4621-824b-f93a773898d1" podNamespace="kube-system" podName="coredns-5dd5756b68-hlntz" Feb 9 10:09:20.049129 systemd[1]: Created slice kubepods-burstable-pod1cee6ee8_4133_4962_afd3_7866493dd87e.slice. Feb 9 10:09:20.054458 systemd[1]: Created slice kubepods-burstable-pod5fdefb04_2b88_4621_824b_f93a773898d1.slice. 
Feb 9 10:09:20.121594 kubelet[1981]: I0209 10:09:20.121554 1981 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5fdefb04-2b88-4621-824b-f93a773898d1-config-volume\") pod \"coredns-5dd5756b68-hlntz\" (UID: \"5fdefb04-2b88-4621-824b-f93a773898d1\") " pod="kube-system/coredns-5dd5756b68-hlntz" Feb 9 10:09:20.121756 kubelet[1981]: I0209 10:09:20.121606 1981 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hkwms\" (UniqueName: \"kubernetes.io/projected/1cee6ee8-4133-4962-afd3-7866493dd87e-kube-api-access-hkwms\") pod \"coredns-5dd5756b68-cxmb4\" (UID: \"1cee6ee8-4133-4962-afd3-7866493dd87e\") " pod="kube-system/coredns-5dd5756b68-cxmb4" Feb 9 10:09:20.121756 kubelet[1981]: I0209 10:09:20.121629 1981 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1cee6ee8-4133-4962-afd3-7866493dd87e-config-volume\") pod \"coredns-5dd5756b68-cxmb4\" (UID: \"1cee6ee8-4133-4962-afd3-7866493dd87e\") " pod="kube-system/coredns-5dd5756b68-cxmb4" Feb 9 10:09:20.121756 kubelet[1981]: I0209 10:09:20.121667 1981 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sz5b4\" (UniqueName: \"kubernetes.io/projected/5fdefb04-2b88-4621-824b-f93a773898d1-kube-api-access-sz5b4\") pod \"coredns-5dd5756b68-hlntz\" (UID: \"5fdefb04-2b88-4621-824b-f93a773898d1\") " pod="kube-system/coredns-5dd5756b68-hlntz" Feb 9 10:09:20.180846 kernel: WARNING: Unprivileged eBPF is enabled, data leaks possible via Spectre v2 BHB attacks! Feb 9 10:09:20.352932 kubelet[1981]: E0209 10:09:20.352841 1981 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 10:09:20.354240 env[1141]: time="2024-02-09T10:09:20.354201253Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-cxmb4,Uid:1cee6ee8-4133-4962-afd3-7866493dd87e,Namespace:kube-system,Attempt:0,}" Feb 9 10:09:20.356432 kubelet[1981]: E0209 10:09:20.356410 1981 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 10:09:20.356962 env[1141]: time="2024-02-09T10:09:20.356914008Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-hlntz,Uid:5fdefb04-2b88-4621-824b-f93a773898d1,Namespace:kube-system,Attempt:0,}" Feb 9 10:09:20.491841 kernel: WARNING: Unprivileged eBPF is enabled, data leaks possible via Spectre v2 BHB attacks! 
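
The kernel WARNING above is the Spectre-v2/BHB advisory for unprivileged eBPF, governed by the kernel.unprivileged_bpf_disabled sysctl (0 = enabled, 1 = disabled, 2 = disabled and locked until reboot). A quick check of the knob:

package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	data, err := os.ReadFile("/proc/sys/kernel/unprivileged_bpf_disabled")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	switch strings.TrimSpace(string(data)) {
	case "0":
		fmt.Println("unprivileged eBPF enabled (the state the kernel warns about)")
	case "1":
		fmt.Println("unprivileged eBPF disabled")
	case "2":
		fmt.Println("unprivileged eBPF disabled and locked until reboot")
	}
}
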
Feb 9 10:09:20.639455 kubelet[1981]: E0209 10:09:20.639352 1981 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 10:09:21.641084 kubelet[1981]: E0209 10:09:21.641046 1981 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 10:09:22.128274 systemd-networkd[1052]: cilium_host: Link UP Feb 9 10:09:22.128732 systemd-networkd[1052]: cilium_net: Link UP Feb 9 10:09:22.132975 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready Feb 9 10:09:22.133063 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready Feb 9 10:09:22.133940 systemd-networkd[1052]: cilium_net: Gained carrier Feb 9 10:09:22.134116 systemd-networkd[1052]: cilium_host: Gained carrier Feb 9 10:09:22.221262 systemd-networkd[1052]: cilium_vxlan: Link UP Feb 9 10:09:22.221268 systemd-networkd[1052]: cilium_vxlan: Gained carrier Feb 9 10:09:22.388967 systemd-networkd[1052]: cilium_net: Gained IPv6LL Feb 9 10:09:22.502845 kernel: NET: Registered PF_ALG protocol family Feb 9 10:09:22.643714 kubelet[1981]: E0209 10:09:22.643605 1981 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 10:09:22.963967 systemd-networkd[1052]: cilium_host: Gained IPv6LL Feb 9 10:09:23.083852 systemd-networkd[1052]: lxc_health: Link UP Feb 9 10:09:23.090308 systemd-networkd[1052]: lxc_health: Gained carrier Feb 9 10:09:23.090849 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Feb 9 10:09:23.458325 systemd[1]: Started sshd@5-10.0.0.132:22-10.0.0.1:53146.service. Feb 9 10:09:23.491123 systemd-networkd[1052]: lxc94f3ae349aa9: Link UP Feb 9 10:09:23.501851 kernel: eth0: renamed from tmpff5b8 Feb 9 10:09:23.510856 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc94f3ae349aa9: link becomes ready Feb 9 10:09:23.523850 kernel: eth0: renamed from tmp69102 Feb 9 10:09:23.531923 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Feb 9 10:09:23.532029 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc7baa13f65be6: link becomes ready Feb 9 10:09:23.532181 systemd-networkd[1052]: lxc94f3ae349aa9: Gained carrier Feb 9 10:09:23.532507 systemd-networkd[1052]: lxc7baa13f65be6: Link UP Feb 9 10:09:23.532836 systemd-networkd[1052]: lxc7baa13f65be6: Gained carrier Feb 9 10:09:23.535037 sshd[3159]: Accepted publickey for core from 10.0.0.1 port 53146 ssh2: RSA SHA256:g0U6KM199woo3jVvTXJmbHJGWRxGxX9UCqO141QChXg Feb 9 10:09:23.538130 sshd[3159]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 10:09:23.542400 systemd[1]: Started session-6.scope. Feb 9 10:09:23.543952 systemd-logind[1130]: New session 6 of user core. Feb 9 10:09:23.667953 systemd-networkd[1052]: cilium_vxlan: Gained IPv6LL Feb 9 10:09:23.735173 sshd[3159]: pam_unix(sshd:session): session closed for user core Feb 9 10:09:23.738715 systemd[1]: sshd@5-10.0.0.132:22-10.0.0.1:53146.service: Deactivated successfully. Feb 9 10:09:23.739460 systemd[1]: session-6.scope: Deactivated successfully. Feb 9 10:09:23.740592 systemd-logind[1130]: Session 6 logged out. Waiting for processes to exit. Feb 9 10:09:23.741507 systemd-logind[1130]: Removed session 6. 
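
The cilium_host/cilium_net veth pair, the cilium_vxlan overlay device, and the per-endpoint lxc* interfaces registered above are all ordinary links, so they can be observed with the standard library alone; a sketch:

package main

import (
	"fmt"
	"net"
	"strings"
)

func main() {
	ifaces, err := net.Interfaces()
	if err != nil {
		panic(err)
	}
	for _, ifc := range ifaces {
		// Keep only the devices Cilium creates, as seen in the log above.
		if strings.HasPrefix(ifc.Name, "cilium") || strings.HasPrefix(ifc.Name, "lxc") {
			fmt.Printf("%-20s mtu=%-5d flags=%s\n", ifc.Name, ifc.MTU, ifc.Flags)
		}
	}
}
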
Feb 9 10:09:23.876134 kubelet[1981]: E0209 10:09:23.876101 1981 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 10:09:23.919675 kubelet[1981]: I0209 10:09:23.919645 1981 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-8gmq2" podStartSLOduration=9.29068349 podCreationTimestamp="2024-02-09 10:09:09 +0000 UTC" firstStartedPulling="2024-02-09 10:09:09.971214678 +0000 UTC m=+14.501844698" lastFinishedPulling="2024-02-09 10:09:15.600129436 +0000 UTC m=+20.130759456" observedRunningTime="2024-02-09 10:09:20.65547949 +0000 UTC m=+25.186109510" watchObservedRunningTime="2024-02-09 10:09:23.919598248 +0000 UTC m=+28.450228268" Feb 9 10:09:24.307935 systemd-networkd[1052]: lxc_health: Gained IPv6LL Feb 9 10:09:24.756307 systemd-networkd[1052]: lxc94f3ae349aa9: Gained IPv6LL Feb 9 10:09:25.460960 systemd-networkd[1052]: lxc7baa13f65be6: Gained IPv6LL Feb 9 10:09:27.042808 env[1141]: time="2024-02-09T10:09:27.042718583Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 10:09:27.042808 env[1141]: time="2024-02-09T10:09:27.042762704Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 10:09:27.042808 env[1141]: time="2024-02-09T10:09:27.042778424Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 10:09:27.043225 env[1141]: time="2024-02-09T10:09:27.042981189Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/69102dd29fcbed027477acdc266e1944985fc59261b3bbf61bed17fb2d3c507d pid=3215 runtime=io.containerd.runc.v2 Feb 9 10:09:27.062705 systemd[1]: Started cri-containerd-69102dd29fcbed027477acdc266e1944985fc59261b3bbf61bed17fb2d3c507d.scope. Feb 9 10:09:27.087921 env[1141]: time="2024-02-09T10:09:27.085224561Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 10:09:27.087921 env[1141]: time="2024-02-09T10:09:27.085281363Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 10:09:27.087921 env[1141]: time="2024-02-09T10:09:27.085297323Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 10:09:27.087921 env[1141]: time="2024-02-09T10:09:27.085516728Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/ff5b8a4d95100860c8dbc0cf5ff9aaf3334080cc811105186a0e66ea1ca76bc4 pid=3247 runtime=io.containerd.runc.v2 Feb 9 10:09:27.100033 systemd[1]: Started cri-containerd-ff5b8a4d95100860c8dbc0cf5ff9aaf3334080cc811105186a0e66ea1ca76bc4.scope. 
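
The env[1141] (containerd) entries throughout use logrus's logfmt encoding: space-separated key=value pairs, with double quotes around values that contain spaces. A deliberately naive parser sketch that is good enough for lines like the ones above (it does not handle every escape logrus can emit):

package main

import (
	"fmt"
	"strings"
)

// parseLogfmt extracts key=value pairs; values may be bare or
// double-quoted with backslash-escaped quotes. Good enough for the
// containerd lines above, not a full logfmt implementation.
func parseLogfmt(s string) map[string]string {
	out := map[string]string{}
	i := 0
	for i < len(s) {
		for i < len(s) && s[i] == ' ' {
			i++
		}
		j := i
		for j < len(s) && s[j] != '=' && s[j] != ' ' {
			j++
		}
		if j >= len(s) || s[j] != '=' {
			break
		}
		key := s[i:j]
		j++
		var val string
		if j < len(s) && s[j] == '"' {
			j++
			k := j
			for k < len(s) && !(s[k] == '"' && s[k-1] != '\\') {
				k++
			}
			val = strings.ReplaceAll(s[j:k], `\"`, `"`)
			if k < len(s) {
				k++
			}
			j = k
		} else {
			k := j
			for k < len(s) && s[k] != ' ' {
				k++
			}
			val = s[j:k]
			j = k
		}
		out[key] = val
		i = j
	}
	return out
}

func main() {
	m := parseLogfmt(`time="2024-02-09T10:09:27.042981189Z" level=info msg="starting signal loop" namespace=k8s.io pid=3215 runtime=io.containerd.runc.v2`)
	fmt.Println(m["level"], m["pid"], m["msg"]) // info 3215 starting signal loop
}
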
Feb 9 10:09:27.111143 systemd-resolved[1087]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 9 10:09:27.124142 systemd-resolved[1087]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 9 10:09:27.130592 env[1141]: time="2024-02-09T10:09:27.130556080Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-hlntz,Uid:5fdefb04-2b88-4621-824b-f93a773898d1,Namespace:kube-system,Attempt:0,} returns sandbox id \"69102dd29fcbed027477acdc266e1944985fc59261b3bbf61bed17fb2d3c507d\"" Feb 9 10:09:27.131673 kubelet[1981]: E0209 10:09:27.131292 1981 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 10:09:27.133879 env[1141]: time="2024-02-09T10:09:27.133790868Z" level=info msg="CreateContainer within sandbox \"69102dd29fcbed027477acdc266e1944985fc59261b3bbf61bed17fb2d3c507d\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 9 10:09:27.145416 env[1141]: time="2024-02-09T10:09:27.145359393Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-cxmb4,Uid:1cee6ee8-4133-4962-afd3-7866493dd87e,Namespace:kube-system,Attempt:0,} returns sandbox id \"ff5b8a4d95100860c8dbc0cf5ff9aaf3334080cc811105186a0e66ea1ca76bc4\"" Feb 9 10:09:27.146258 kubelet[1981]: E0209 10:09:27.146099 1981 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 10:09:27.149239 env[1141]: time="2024-02-09T10:09:27.149208154Z" level=info msg="CreateContainer within sandbox \"ff5b8a4d95100860c8dbc0cf5ff9aaf3334080cc811105186a0e66ea1ca76bc4\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 9 10:09:27.150010 env[1141]: time="2024-02-09T10:09:27.149940049Z" level=info msg="CreateContainer within sandbox \"69102dd29fcbed027477acdc266e1944985fc59261b3bbf61bed17fb2d3c507d\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"cfccdca07608f78df5a9c33007ee791c0a5967696b0f65bdaed152feddff1ae5\"" Feb 9 10:09:27.150556 env[1141]: time="2024-02-09T10:09:27.150467780Z" level=info msg="StartContainer for \"cfccdca07608f78df5a9c33007ee791c0a5967696b0f65bdaed152feddff1ae5\"" Feb 9 10:09:27.162441 env[1141]: time="2024-02-09T10:09:27.162392273Z" level=info msg="CreateContainer within sandbox \"ff5b8a4d95100860c8dbc0cf5ff9aaf3334080cc811105186a0e66ea1ca76bc4\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"5a9da135c237fb624e6dfd9442910b99084d8e7f0a519914802c58d7ca1e18e7\"" Feb 9 10:09:27.163774 env[1141]: time="2024-02-09T10:09:27.163100967Z" level=info msg="StartContainer for \"5a9da135c237fb624e6dfd9442910b99084d8e7f0a519914802c58d7ca1e18e7\"" Feb 9 10:09:27.166412 systemd[1]: Started cri-containerd-cfccdca07608f78df5a9c33007ee791c0a5967696b0f65bdaed152feddff1ae5.scope. Feb 9 10:09:27.185385 systemd[1]: Started cri-containerd-5a9da135c237fb624e6dfd9442910b99084d8e7f0a519914802c58d7ca1e18e7.scope. 
Feb 9 10:09:27.220928 env[1141]: time="2024-02-09T10:09:27.220885909Z" level=info msg="StartContainer for \"cfccdca07608f78df5a9c33007ee791c0a5967696b0f65bdaed152feddff1ae5\" returns successfully" Feb 9 10:09:27.228235 env[1141]: time="2024-02-09T10:09:27.228191903Z" level=info msg="StartContainer for \"5a9da135c237fb624e6dfd9442910b99084d8e7f0a519914802c58d7ca1e18e7\" returns successfully" Feb 9 10:09:27.653219 kubelet[1981]: E0209 10:09:27.653192 1981 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 10:09:27.654467 kubelet[1981]: E0209 10:09:27.654399 1981 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 10:09:27.662470 kubelet[1981]: I0209 10:09:27.662124 1981 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-hlntz" podStartSLOduration=18.662082994 podCreationTimestamp="2024-02-09 10:09:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 10:09:27.661365379 +0000 UTC m=+32.191995399" watchObservedRunningTime="2024-02-09 10:09:27.662082994 +0000 UTC m=+32.192713014" Feb 9 10:09:27.683313 kubelet[1981]: I0209 10:09:27.683275 1981 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-cxmb4" podStartSLOduration=18.683240481 podCreationTimestamp="2024-02-09 10:09:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 10:09:27.675185151 +0000 UTC m=+32.205815131" watchObservedRunningTime="2024-02-09 10:09:27.683240481 +0000 UTC m=+32.213870501" Feb 9 10:09:28.046448 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3038074781.mount: Deactivated successfully. Feb 9 10:09:28.655713 kubelet[1981]: E0209 10:09:28.655680 1981 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 10:09:28.655713 kubelet[1981]: E0209 10:09:28.655712 1981 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 10:09:28.740176 systemd[1]: Started sshd@6-10.0.0.132:22-10.0.0.1:53148.service. Feb 9 10:09:28.784084 sshd[3373]: Accepted publickey for core from 10.0.0.1 port 53148 ssh2: RSA SHA256:g0U6KM199woo3jVvTXJmbHJGWRxGxX9UCqO141QChXg Feb 9 10:09:28.785488 sshd[3373]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 10:09:28.789190 systemd-logind[1130]: New session 7 of user core. Feb 9 10:09:28.789652 systemd[1]: Started session-7.scope. Feb 9 10:09:28.908033 sshd[3373]: pam_unix(sshd:session): session closed for user core Feb 9 10:09:28.910697 systemd[1]: sshd@6-10.0.0.132:22-10.0.0.1:53148.service: Deactivated successfully. Feb 9 10:09:28.911502 systemd[1]: session-7.scope: Deactivated successfully. Feb 9 10:09:28.912077 systemd-logind[1130]: Session 7 logged out. Waiting for processes to exit. Feb 9 10:09:28.912744 systemd-logind[1130]: Removed session 7. 
Feb 9 10:09:29.657069 kubelet[1981]: E0209 10:09:29.657040 1981 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 10:09:29.657462 kubelet[1981]: E0209 10:09:29.657326 1981 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 10:09:30.710438 kubelet[1981]: I0209 10:09:30.710398 1981 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 9 10:09:30.711233 kubelet[1981]: E0209 10:09:30.711218 1981 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 10:09:31.661294 kubelet[1981]: E0209 10:09:31.661261 1981 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 10:09:33.912696 systemd[1]: Started sshd@7-10.0.0.132:22-10.0.0.1:38378.service. Feb 9 10:09:33.953779 sshd[3387]: Accepted publickey for core from 10.0.0.1 port 38378 ssh2: RSA SHA256:g0U6KM199woo3jVvTXJmbHJGWRxGxX9UCqO141QChXg Feb 9 10:09:33.955279 sshd[3387]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 10:09:33.958867 systemd-logind[1130]: New session 8 of user core. Feb 9 10:09:33.959522 systemd[1]: Started session-8.scope. Feb 9 10:09:34.074951 sshd[3387]: pam_unix(sshd:session): session closed for user core Feb 9 10:09:34.079023 systemd[1]: Started sshd@8-10.0.0.132:22-10.0.0.1:38388.service. Feb 9 10:09:34.079587 systemd[1]: sshd@7-10.0.0.132:22-10.0.0.1:38378.service: Deactivated successfully. Feb 9 10:09:34.080465 systemd[1]: session-8.scope: Deactivated successfully. Feb 9 10:09:34.081366 systemd-logind[1130]: Session 8 logged out. Waiting for processes to exit. Feb 9 10:09:34.082231 systemd-logind[1130]: Removed session 8. Feb 9 10:09:34.123339 sshd[3400]: Accepted publickey for core from 10.0.0.1 port 38388 ssh2: RSA SHA256:g0U6KM199woo3jVvTXJmbHJGWRxGxX9UCqO141QChXg Feb 9 10:09:34.124610 sshd[3400]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 10:09:34.128924 systemd[1]: Started session-9.scope. Feb 9 10:09:34.129089 systemd-logind[1130]: New session 9 of user core. Feb 9 10:09:34.814136 sshd[3400]: pam_unix(sshd:session): session closed for user core Feb 9 10:09:34.816094 systemd[1]: Started sshd@9-10.0.0.132:22-10.0.0.1:38396.service. Feb 9 10:09:34.818764 systemd[1]: sshd@8-10.0.0.132:22-10.0.0.1:38388.service: Deactivated successfully. Feb 9 10:09:34.819457 systemd[1]: session-9.scope: Deactivated successfully. Feb 9 10:09:34.820338 systemd-logind[1130]: Session 9 logged out. Waiting for processes to exit. Feb 9 10:09:34.821303 systemd-logind[1130]: Removed session 9. Feb 9 10:09:34.861807 sshd[3412]: Accepted publickey for core from 10.0.0.1 port 38396 ssh2: RSA SHA256:g0U6KM199woo3jVvTXJmbHJGWRxGxX9UCqO141QChXg Feb 9 10:09:34.863106 sshd[3412]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 10:09:34.867523 systemd[1]: Started session-10.scope. Feb 9 10:09:34.868039 systemd-logind[1130]: New session 10 of user core. Feb 9 10:09:34.976778 sshd[3412]: pam_unix(sshd:session): session closed for user core Feb 9 10:09:34.979469 systemd[1]: sshd@9-10.0.0.132:22-10.0.0.1:38396.service: Deactivated successfully. 
Feb 9 10:09:34.980240 systemd[1]: session-10.scope: Deactivated successfully. Feb 9 10:09:34.980754 systemd-logind[1130]: Session 10 logged out. Waiting for processes to exit. Feb 9 10:09:34.981600 systemd-logind[1130]: Removed session 10. Feb 9 10:09:39.982019 systemd[1]: Started sshd@10-10.0.0.132:22-10.0.0.1:38408.service. Feb 9 10:09:40.023596 sshd[3426]: Accepted publickey for core from 10.0.0.1 port 38408 ssh2: RSA SHA256:g0U6KM199woo3jVvTXJmbHJGWRxGxX9UCqO141QChXg Feb 9 10:09:40.025325 sshd[3426]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 10:09:40.028639 systemd-logind[1130]: New session 11 of user core. Feb 9 10:09:40.029585 systemd[1]: Started session-11.scope. Feb 9 10:09:40.141995 sshd[3426]: pam_unix(sshd:session): session closed for user core Feb 9 10:09:40.145837 systemd[1]: sshd@10-10.0.0.132:22-10.0.0.1:38408.service: Deactivated successfully. Feb 9 10:09:40.148354 systemd[1]: session-11.scope: Deactivated successfully. Feb 9 10:09:40.148893 systemd-logind[1130]: Session 11 logged out. Waiting for processes to exit. Feb 9 10:09:40.149514 systemd-logind[1130]: Removed session 11. Feb 9 10:09:45.146047 systemd[1]: Started sshd@11-10.0.0.132:22-10.0.0.1:33592.service. Feb 9 10:09:45.191630 sshd[3442]: Accepted publickey for core from 10.0.0.1 port 33592 ssh2: RSA SHA256:g0U6KM199woo3jVvTXJmbHJGWRxGxX9UCqO141QChXg Feb 9 10:09:45.192071 sshd[3442]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 10:09:45.196924 systemd-logind[1130]: New session 12 of user core. Feb 9 10:09:45.198933 systemd[1]: Started session-12.scope. Feb 9 10:09:45.349618 sshd[3442]: pam_unix(sshd:session): session closed for user core Feb 9 10:09:45.353635 systemd[1]: Started sshd@12-10.0.0.132:22-10.0.0.1:33596.service. Feb 9 10:09:45.356453 systemd[1]: sshd@11-10.0.0.132:22-10.0.0.1:33592.service: Deactivated successfully. Feb 9 10:09:45.357187 systemd[1]: session-12.scope: Deactivated successfully. Feb 9 10:09:45.357708 systemd-logind[1130]: Session 12 logged out. Waiting for processes to exit. Feb 9 10:09:45.358404 systemd-logind[1130]: Removed session 12. Feb 9 10:09:45.397189 sshd[3454]: Accepted publickey for core from 10.0.0.1 port 33596 ssh2: RSA SHA256:g0U6KM199woo3jVvTXJmbHJGWRxGxX9UCqO141QChXg Feb 9 10:09:45.398336 sshd[3454]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 10:09:45.403078 systemd-logind[1130]: New session 13 of user core. Feb 9 10:09:45.404563 systemd[1]: Started session-13.scope. Feb 9 10:09:45.588010 sshd[3454]: pam_unix(sshd:session): session closed for user core Feb 9 10:09:45.591873 systemd[1]: Started sshd@13-10.0.0.132:22-10.0.0.1:33606.service. Feb 9 10:09:45.593065 systemd[1]: sshd@12-10.0.0.132:22-10.0.0.1:33596.service: Deactivated successfully. Feb 9 10:09:45.593766 systemd[1]: session-13.scope: Deactivated successfully. Feb 9 10:09:45.594367 systemd-logind[1130]: Session 13 logged out. Waiting for processes to exit. Feb 9 10:09:45.594960 systemd-logind[1130]: Removed session 13. Feb 9 10:09:45.634942 sshd[3465]: Accepted publickey for core from 10.0.0.1 port 33606 ssh2: RSA SHA256:g0U6KM199woo3jVvTXJmbHJGWRxGxX9UCqO141QChXg Feb 9 10:09:45.636131 sshd[3465]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 10:09:45.640073 systemd[1]: Started session-14.scope. Feb 9 10:09:45.640388 systemd-logind[1130]: New session 14 of user core. 
Feb 9 10:09:46.555489 sshd[3465]: pam_unix(sshd:session): session closed for user core Feb 9 10:09:46.559294 systemd[1]: Started sshd@14-10.0.0.132:22-10.0.0.1:33614.service. Feb 9 10:09:46.561064 systemd[1]: sshd@13-10.0.0.132:22-10.0.0.1:33606.service: Deactivated successfully. Feb 9 10:09:46.561648 systemd-logind[1130]: Session 14 logged out. Waiting for processes to exit. Feb 9 10:09:46.562107 systemd[1]: session-14.scope: Deactivated successfully. Feb 9 10:09:46.562823 systemd-logind[1130]: Removed session 14. Feb 9 10:09:46.604323 sshd[3484]: Accepted publickey for core from 10.0.0.1 port 33614 ssh2: RSA SHA256:g0U6KM199woo3jVvTXJmbHJGWRxGxX9UCqO141QChXg Feb 9 10:09:46.605612 sshd[3484]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 10:09:46.609892 systemd[1]: Started session-15.scope. Feb 9 10:09:46.610035 systemd-logind[1130]: New session 15 of user core. Feb 9 10:09:46.883555 sshd[3484]: pam_unix(sshd:session): session closed for user core Feb 9 10:09:46.886649 systemd[1]: sshd@14-10.0.0.132:22-10.0.0.1:33614.service: Deactivated successfully. Feb 9 10:09:46.887275 systemd[1]: session-15.scope: Deactivated successfully. Feb 9 10:09:46.887900 systemd-logind[1130]: Session 15 logged out. Waiting for processes to exit. Feb 9 10:09:46.889529 systemd[1]: Started sshd@15-10.0.0.132:22-10.0.0.1:33630.service. Feb 9 10:09:46.890481 systemd-logind[1130]: Removed session 15. Feb 9 10:09:46.933891 sshd[3498]: Accepted publickey for core from 10.0.0.1 port 33630 ssh2: RSA SHA256:g0U6KM199woo3jVvTXJmbHJGWRxGxX9UCqO141QChXg Feb 9 10:09:46.935312 sshd[3498]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 10:09:46.940798 systemd-logind[1130]: New session 16 of user core. Feb 9 10:09:46.941622 systemd[1]: Started session-16.scope. Feb 9 10:09:47.054976 sshd[3498]: pam_unix(sshd:session): session closed for user core Feb 9 10:09:47.057956 systemd[1]: sshd@15-10.0.0.132:22-10.0.0.1:33630.service: Deactivated successfully. Feb 9 10:09:47.058715 systemd[1]: session-16.scope: Deactivated successfully. Feb 9 10:09:47.059115 systemd-logind[1130]: Session 16 logged out. Waiting for processes to exit. Feb 9 10:09:47.059912 systemd-logind[1130]: Removed session 16. Feb 9 10:09:52.059728 systemd[1]: Started sshd@16-10.0.0.132:22-10.0.0.1:33644.service. Feb 9 10:09:52.102588 sshd[3515]: Accepted publickey for core from 10.0.0.1 port 33644 ssh2: RSA SHA256:g0U6KM199woo3jVvTXJmbHJGWRxGxX9UCqO141QChXg Feb 9 10:09:52.104209 sshd[3515]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 10:09:52.107551 systemd-logind[1130]: New session 17 of user core. Feb 9 10:09:52.108718 systemd[1]: Started session-17.scope. Feb 9 10:09:52.227343 sshd[3515]: pam_unix(sshd:session): session closed for user core Feb 9 10:09:52.230020 systemd-logind[1130]: Session 17 logged out. Waiting for processes to exit. Feb 9 10:09:52.230248 systemd[1]: sshd@16-10.0.0.132:22-10.0.0.1:33644.service: Deactivated successfully. Feb 9 10:09:52.231004 systemd[1]: session-17.scope: Deactivated successfully. Feb 9 10:09:52.231772 systemd-logind[1130]: Removed session 17. Feb 9 10:09:57.231304 systemd[1]: Started sshd@17-10.0.0.132:22-10.0.0.1:49854.service. 
Feb 9 10:09:57.272118 sshd[3532]: Accepted publickey for core from 10.0.0.1 port 49854 ssh2: RSA SHA256:g0U6KM199woo3jVvTXJmbHJGWRxGxX9UCqO141QChXg Feb 9 10:09:57.273538 sshd[3532]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 10:09:57.277168 systemd-logind[1130]: New session 18 of user core. Feb 9 10:09:57.277614 systemd[1]: Started session-18.scope. Feb 9 10:09:57.386283 sshd[3532]: pam_unix(sshd:session): session closed for user core Feb 9 10:09:57.388625 systemd[1]: sshd@17-10.0.0.132:22-10.0.0.1:49854.service: Deactivated successfully. Feb 9 10:09:57.389372 systemd[1]: session-18.scope: Deactivated successfully. Feb 9 10:09:57.389902 systemd-logind[1130]: Session 18 logged out. Waiting for processes to exit. Feb 9 10:09:57.390495 systemd-logind[1130]: Removed session 18. Feb 9 10:10:02.391443 systemd[1]: Started sshd@18-10.0.0.132:22-10.0.0.1:49864.service. Feb 9 10:10:02.432034 sshd[3546]: Accepted publickey for core from 10.0.0.1 port 49864 ssh2: RSA SHA256:g0U6KM199woo3jVvTXJmbHJGWRxGxX9UCqO141QChXg Feb 9 10:10:02.433242 sshd[3546]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 10:10:02.437382 systemd[1]: Started session-19.scope. Feb 9 10:10:02.437712 systemd-logind[1130]: New session 19 of user core. Feb 9 10:10:02.547738 sshd[3546]: pam_unix(sshd:session): session closed for user core Feb 9 10:10:02.550078 systemd[1]: sshd@18-10.0.0.132:22-10.0.0.1:49864.service: Deactivated successfully. Feb 9 10:10:02.550839 systemd[1]: session-19.scope: Deactivated successfully. Feb 9 10:10:02.551360 systemd-logind[1130]: Session 19 logged out. Waiting for processes to exit. Feb 9 10:10:02.552002 systemd-logind[1130]: Removed session 19. Feb 9 10:10:07.553006 systemd[1]: Started sshd@19-10.0.0.132:22-10.0.0.1:58586.service. Feb 9 10:10:07.596132 sshd[3559]: Accepted publickey for core from 10.0.0.1 port 58586 ssh2: RSA SHA256:g0U6KM199woo3jVvTXJmbHJGWRxGxX9UCqO141QChXg Feb 9 10:10:07.597567 sshd[3559]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 10:10:07.601337 systemd-logind[1130]: New session 20 of user core. Feb 9 10:10:07.602295 systemd[1]: Started session-20.scope. Feb 9 10:10:07.710158 sshd[3559]: pam_unix(sshd:session): session closed for user core Feb 9 10:10:07.714091 systemd[1]: Started sshd@20-10.0.0.132:22-10.0.0.1:58592.service. Feb 9 10:10:07.714589 systemd[1]: sshd@19-10.0.0.132:22-10.0.0.1:58586.service: Deactivated successfully. Feb 9 10:10:07.715251 systemd[1]: session-20.scope: Deactivated successfully. Feb 9 10:10:07.715847 systemd-logind[1130]: Session 20 logged out. Waiting for processes to exit. Feb 9 10:10:07.716821 systemd-logind[1130]: Removed session 20. Feb 9 10:10:07.755310 sshd[3571]: Accepted publickey for core from 10.0.0.1 port 58592 ssh2: RSA SHA256:g0U6KM199woo3jVvTXJmbHJGWRxGxX9UCqO141QChXg Feb 9 10:10:07.756710 sshd[3571]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 10:10:07.760075 systemd-logind[1130]: New session 21 of user core. Feb 9 10:10:07.761269 systemd[1]: Started session-21.scope. 
Feb 9 10:10:09.254268 env[1141]: time="2024-02-09T10:10:09.253824322Z" level=info msg="StopContainer for \"e377ac7637f5a81d8bf76296c60c67510a4b203886a8bc58b5c9924de4a67734\" with timeout 30 (s)"
Feb 9 10:10:09.267663 env[1141]: time="2024-02-09T10:10:09.264658588Z" level=info msg="Stop container \"e377ac7637f5a81d8bf76296c60c67510a4b203886a8bc58b5c9924de4a67734\" with signal terminated"
Feb 9 10:10:09.276243 systemd[1]: cri-containerd-e377ac7637f5a81d8bf76296c60c67510a4b203886a8bc58b5c9924de4a67734.scope: Deactivated successfully.
Feb 9 10:10:09.294979 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e377ac7637f5a81d8bf76296c60c67510a4b203886a8bc58b5c9924de4a67734-rootfs.mount: Deactivated successfully.
Feb 9 10:10:09.298414 env[1141]: time="2024-02-09T10:10:09.296537222Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Feb 9 10:10:09.304097 env[1141]: time="2024-02-09T10:10:09.304063000Z" level=info msg="StopContainer for \"c13a7ef8f3210a29a875f54183004dbe9958f4c8153764a3f844dfd6dca0696a\" with timeout 2 (s)"
Feb 9 10:10:09.305049 env[1141]: time="2024-02-09T10:10:09.304809081Z" level=info msg="Stop container \"c13a7ef8f3210a29a875f54183004dbe9958f4c8153764a3f844dfd6dca0696a\" with signal terminated"
Feb 9 10:10:09.308487 env[1141]: time="2024-02-09T10:10:09.305883204Z" level=info msg="shim disconnected" id=e377ac7637f5a81d8bf76296c60c67510a4b203886a8bc58b5c9924de4a67734
Feb 9 10:10:09.308487 env[1141]: time="2024-02-09T10:10:09.305918364Z" level=warning msg="cleaning up after shim disconnected" id=e377ac7637f5a81d8bf76296c60c67510a4b203886a8bc58b5c9924de4a67734 namespace=k8s.io
Feb 9 10:10:09.308487 env[1141]: time="2024-02-09T10:10:09.305927084Z" level=info msg="cleaning up dead shim"
Feb 9 10:10:09.313337 systemd-networkd[1052]: lxc_health: Link DOWN
Feb 9 10:10:09.313343 systemd-networkd[1052]: lxc_health: Lost carrier
Feb 9 10:10:09.315260 env[1141]: time="2024-02-09T10:10:09.315211946Z" level=warning msg="cleanup warnings time=\"2024-02-09T10:10:09Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3621 runtime=io.containerd.runc.v2\n"
Feb 9 10:10:09.317431 env[1141]: time="2024-02-09T10:10:09.317321751Z" level=info msg="StopContainer for \"e377ac7637f5a81d8bf76296c60c67510a4b203886a8bc58b5c9924de4a67734\" returns successfully"
Feb 9 10:10:09.318013 env[1141]: time="2024-02-09T10:10:09.317957952Z" level=info msg="StopPodSandbox for \"f7da83368ee3acbeee8ef5eefef9b43ac416b8b553f71044a327ca5b60e3c93b\""
Feb 9 10:10:09.318144 env[1141]: time="2024-02-09T10:10:09.318019912Z" level=info msg="Container to stop \"e377ac7637f5a81d8bf76296c60c67510a4b203886a8bc58b5c9924de4a67734\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 9 10:10:09.319433 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f7da83368ee3acbeee8ef5eefef9b43ac416b8b553f71044a327ca5b60e3c93b-shm.mount: Deactivated successfully.
Feb 9 10:10:09.329168 systemd[1]: cri-containerd-f7da83368ee3acbeee8ef5eefef9b43ac416b8b553f71044a327ca5b60e3c93b.scope: Deactivated successfully.
Feb 9 10:10:09.347169 systemd[1]: cri-containerd-c13a7ef8f3210a29a875f54183004dbe9958f4c8153764a3f844dfd6dca0696a.scope: Deactivated successfully.
Feb 9 10:10:09.347478 systemd[1]: cri-containerd-c13a7ef8f3210a29a875f54183004dbe9958f4c8153764a3f844dfd6dca0696a.scope: Consumed 6.737s CPU time.
Feb 9 10:10:09.353248 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f7da83368ee3acbeee8ef5eefef9b43ac416b8b553f71044a327ca5b60e3c93b-rootfs.mount: Deactivated successfully.
Feb 9 10:10:09.363420 env[1141]: time="2024-02-09T10:10:09.363370498Z" level=info msg="shim disconnected" id=f7da83368ee3acbeee8ef5eefef9b43ac416b8b553f71044a327ca5b60e3c93b
Feb 9 10:10:09.363569 env[1141]: time="2024-02-09T10:10:09.363423618Z" level=warning msg="cleaning up after shim disconnected" id=f7da83368ee3acbeee8ef5eefef9b43ac416b8b553f71044a327ca5b60e3c93b namespace=k8s.io
Feb 9 10:10:09.363569 env[1141]: time="2024-02-09T10:10:09.363435378Z" level=info msg="cleaning up dead shim"
Feb 9 10:10:09.366771 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c13a7ef8f3210a29a875f54183004dbe9958f4c8153764a3f844dfd6dca0696a-rootfs.mount: Deactivated successfully.
Feb 9 10:10:09.371954 env[1141]: time="2024-02-09T10:10:09.371909718Z" level=warning msg="cleanup warnings time=\"2024-02-09T10:10:09Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3671 runtime=io.containerd.runc.v2\n"
Feb 9 10:10:09.372252 env[1141]: time="2024-02-09T10:10:09.372216319Z" level=info msg="TearDown network for sandbox \"f7da83368ee3acbeee8ef5eefef9b43ac416b8b553f71044a327ca5b60e3c93b\" successfully"
Feb 9 10:10:09.372252 env[1141]: time="2024-02-09T10:10:09.372240799Z" level=info msg="StopPodSandbox for \"f7da83368ee3acbeee8ef5eefef9b43ac416b8b553f71044a327ca5b60e3c93b\" returns successfully"
Feb 9 10:10:09.374490 env[1141]: time="2024-02-09T10:10:09.374397724Z" level=info msg="shim disconnected" id=c13a7ef8f3210a29a875f54183004dbe9958f4c8153764a3f844dfd6dca0696a
Feb 9 10:10:09.374490 env[1141]: time="2024-02-09T10:10:09.374434964Z" level=warning msg="cleaning up after shim disconnected" id=c13a7ef8f3210a29a875f54183004dbe9958f4c8153764a3f844dfd6dca0696a namespace=k8s.io
Feb 9 10:10:09.374490 env[1141]: time="2024-02-09T10:10:09.374450004Z" level=info msg="cleaning up dead shim"
Feb 9 10:10:09.385405 env[1141]: time="2024-02-09T10:10:09.385354230Z" level=warning msg="cleanup warnings time=\"2024-02-09T10:10:09Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3683 runtime=io.containerd.runc.v2\n"
Feb 9 10:10:09.387746 env[1141]: time="2024-02-09T10:10:09.387693075Z" level=info msg="StopContainer for \"c13a7ef8f3210a29a875f54183004dbe9958f4c8153764a3f844dfd6dca0696a\" returns successfully"
Feb 9 10:10:09.388228 env[1141]: time="2024-02-09T10:10:09.388185556Z" level=info msg="StopPodSandbox for \"9d39c1694fcb4520398649ca5f523dccea2f4b15bcfc2f0cfe525f2730f1c00b\""
Feb 9 10:10:09.388292 env[1141]: time="2024-02-09T10:10:09.388263796Z" level=info msg="Container to stop \"f3069c82281bbfb9575898ae068c6c76571ef2868bd1408636e1904fd348fcfa\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 9 10:10:09.388327 env[1141]: time="2024-02-09T10:10:09.388297477Z" level=info msg="Container to stop \"5628beeeaa552f7f8bcf6542edb44b2fd5c496ff868a03afdba3107e6ed56854\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 9 10:10:09.388327 env[1141]: time="2024-02-09T10:10:09.388312117Z" level=info msg="Container to stop \"53a859fc31a390adc4cf5ccfb20d18f4d09e54fdbb39843601c6179a01ef983b\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 9 10:10:09.388467 env[1141]: time="2024-02-09T10:10:09.388323957Z" level=info msg="Container to stop \"5b28e14d8b631d8dec238fcd3cd62d19b88b0a0a490a40a64d9ab70b9198ee31\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 9 10:10:09.388467 env[1141]: time="2024-02-09T10:10:09.388335157Z" level=info msg="Container to stop \"c13a7ef8f3210a29a875f54183004dbe9958f4c8153764a3f844dfd6dca0696a\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 9 10:10:09.393495 systemd[1]: cri-containerd-9d39c1694fcb4520398649ca5f523dccea2f4b15bcfc2f0cfe525f2730f1c00b.scope: Deactivated successfully.
Feb 9 10:10:09.419556 env[1141]: time="2024-02-09T10:10:09.419508589Z" level=info msg="shim disconnected" id=9d39c1694fcb4520398649ca5f523dccea2f4b15bcfc2f0cfe525f2730f1c00b
Feb 9 10:10:09.419863 env[1141]: time="2024-02-09T10:10:09.419840190Z" level=warning msg="cleaning up after shim disconnected" id=9d39c1694fcb4520398649ca5f523dccea2f4b15bcfc2f0cfe525f2730f1c00b namespace=k8s.io
Feb 9 10:10:09.419987 env[1141]: time="2024-02-09T10:10:09.419970951Z" level=info msg="cleaning up dead shim"
Feb 9 10:10:09.427874 env[1141]: time="2024-02-09T10:10:09.427790049Z" level=warning msg="cleanup warnings time=\"2024-02-09T10:10:09Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3714 runtime=io.containerd.runc.v2\ntime=\"2024-02-09T10:10:09Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n"
Feb 9 10:10:09.428155 env[1141]: time="2024-02-09T10:10:09.428131810Z" level=info msg="TearDown network for sandbox \"9d39c1694fcb4520398649ca5f523dccea2f4b15bcfc2f0cfe525f2730f1c00b\" successfully"
Feb 9 10:10:09.428205 env[1141]: time="2024-02-09T10:10:09.428156810Z" level=info msg="StopPodSandbox for \"9d39c1694fcb4520398649ca5f523dccea2f4b15bcfc2f0cfe525f2730f1c00b\" returns successfully"
Feb 9 10:10:09.434665 kubelet[1981]: I0209 10:10:09.434626 1981 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ed9385fe-b3dd-44b1-9be6-cac614e5c5fe-cilium-config-path\") pod \"ed9385fe-b3dd-44b1-9be6-cac614e5c5fe\" (UID: \"ed9385fe-b3dd-44b1-9be6-cac614e5c5fe\") "
Feb 9 10:10:09.435003 kubelet[1981]: I0209 10:10:09.434707 1981 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lmqbn\" (UniqueName: \"kubernetes.io/projected/ed9385fe-b3dd-44b1-9be6-cac614e5c5fe-kube-api-access-lmqbn\") pod \"ed9385fe-b3dd-44b1-9be6-cac614e5c5fe\" (UID: \"ed9385fe-b3dd-44b1-9be6-cac614e5c5fe\") "
Feb 9 10:10:09.440319 kubelet[1981]: I0209 10:10:09.440276 1981 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ed9385fe-b3dd-44b1-9be6-cac614e5c5fe-kube-api-access-lmqbn" (OuterVolumeSpecName: "kube-api-access-lmqbn") pod "ed9385fe-b3dd-44b1-9be6-cac614e5c5fe" (UID: "ed9385fe-b3dd-44b1-9be6-cac614e5c5fe"). InnerVolumeSpecName "kube-api-access-lmqbn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 9 10:10:09.442864 kubelet[1981]: I0209 10:10:09.442833 1981 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ed9385fe-b3dd-44b1-9be6-cac614e5c5fe-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "ed9385fe-b3dd-44b1-9be6-cac614e5c5fe" (UID: "ed9385fe-b3dd-44b1-9be6-cac614e5c5fe"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 9 10:10:09.535975 kubelet[1981]: I0209 10:10:09.535865 1981 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/6d665afb-7ae4-4070-9f31-7e377210da75-cilium-cgroup\") pod \"6d665afb-7ae4-4070-9f31-7e377210da75\" (UID: \"6d665afb-7ae4-4070-9f31-7e377210da75\") "
Feb 9 10:10:09.535975 kubelet[1981]: I0209 10:10:09.535905 1981 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/6d665afb-7ae4-4070-9f31-7e377210da75-cilium-run\") pod \"6d665afb-7ae4-4070-9f31-7e377210da75\" (UID: \"6d665afb-7ae4-4070-9f31-7e377210da75\") "
Feb 9 10:10:09.535975 kubelet[1981]: I0209 10:10:09.535928 1981 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/6d665afb-7ae4-4070-9f31-7e377210da75-cni-path\") pod \"6d665afb-7ae4-4070-9f31-7e377210da75\" (UID: \"6d665afb-7ae4-4070-9f31-7e377210da75\") "
Feb 9 10:10:09.535975 kubelet[1981]: I0209 10:10:09.535955 1981 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6d665afb-7ae4-4070-9f31-7e377210da75-cilium-config-path\") pod \"6d665afb-7ae4-4070-9f31-7e377210da75\" (UID: \"6d665afb-7ae4-4070-9f31-7e377210da75\") "
Feb 9 10:10:09.535975 kubelet[1981]: I0209 10:10:09.535976 1981 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6d665afb-7ae4-4070-9f31-7e377210da75-etc-cni-netd\") pod \"6d665afb-7ae4-4070-9f31-7e377210da75\" (UID: \"6d665afb-7ae4-4070-9f31-7e377210da75\") "
Feb 9 10:10:09.536190 kubelet[1981]: I0209 10:10:09.535994 1981 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/6d665afb-7ae4-4070-9f31-7e377210da75-hostproc\") pod \"6d665afb-7ae4-4070-9f31-7e377210da75\" (UID: \"6d665afb-7ae4-4070-9f31-7e377210da75\") "
Feb 9 10:10:09.536190 kubelet[1981]: I0209 10:10:09.536016 1981 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/6d665afb-7ae4-4070-9f31-7e377210da75-hubble-tls\") pod \"6d665afb-7ae4-4070-9f31-7e377210da75\" (UID: \"6d665afb-7ae4-4070-9f31-7e377210da75\") "
Feb 9 10:10:09.536190 kubelet[1981]: I0209 10:10:09.536039 1981 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/6d665afb-7ae4-4070-9f31-7e377210da75-clustermesh-secrets\") pod \"6d665afb-7ae4-4070-9f31-7e377210da75\" (UID: \"6d665afb-7ae4-4070-9f31-7e377210da75\") "
Feb 9 10:10:09.536190 kubelet[1981]: I0209 10:10:09.536057 1981 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6d665afb-7ae4-4070-9f31-7e377210da75-lib-modules\") pod \"6d665afb-7ae4-4070-9f31-7e377210da75\" (UID: \"6d665afb-7ae4-4070-9f31-7e377210da75\") "
Feb 9 10:10:09.536190 kubelet[1981]: I0209 10:10:09.536074 1981 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6d665afb-7ae4-4070-9f31-7e377210da75-xtables-lock\") pod \"6d665afb-7ae4-4070-9f31-7e377210da75\" (UID: \"6d665afb-7ae4-4070-9f31-7e377210da75\") "
Feb 9 10:10:09.536190 kubelet[1981]: I0209 10:10:09.536090 1981 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/6d665afb-7ae4-4070-9f31-7e377210da75-host-proc-sys-net\") pod \"6d665afb-7ae4-4070-9f31-7e377210da75\" (UID: \"6d665afb-7ae4-4070-9f31-7e377210da75\") "
Feb 9 10:10:09.536461 kubelet[1981]: I0209 10:10:09.536108 1981 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/6d665afb-7ae4-4070-9f31-7e377210da75-host-proc-sys-kernel\") pod \"6d665afb-7ae4-4070-9f31-7e377210da75\" (UID: \"6d665afb-7ae4-4070-9f31-7e377210da75\") "
Feb 9 10:10:09.536461 kubelet[1981]: I0209 10:10:09.536129 1981 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x62td\" (UniqueName: \"kubernetes.io/projected/6d665afb-7ae4-4070-9f31-7e377210da75-kube-api-access-x62td\") pod \"6d665afb-7ae4-4070-9f31-7e377210da75\" (UID: \"6d665afb-7ae4-4070-9f31-7e377210da75\") "
Feb 9 10:10:09.536461 kubelet[1981]: I0209 10:10:09.536149 1981 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/6d665afb-7ae4-4070-9f31-7e377210da75-bpf-maps\") pod \"6d665afb-7ae4-4070-9f31-7e377210da75\" (UID: \"6d665afb-7ae4-4070-9f31-7e377210da75\") "
Feb 9 10:10:09.536461 kubelet[1981]: I0209 10:10:09.536192 1981 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-lmqbn\" (UniqueName: \"kubernetes.io/projected/ed9385fe-b3dd-44b1-9be6-cac614e5c5fe-kube-api-access-lmqbn\") on node \"localhost\" DevicePath \"\""
Feb 9 10:10:09.536461 kubelet[1981]: I0209 10:10:09.536205 1981 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ed9385fe-b3dd-44b1-9be6-cac614e5c5fe-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Feb 9 10:10:09.536461 kubelet[1981]: I0209 10:10:09.536233 1981 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6d665afb-7ae4-4070-9f31-7e377210da75-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "6d665afb-7ae4-4070-9f31-7e377210da75" (UID: "6d665afb-7ae4-4070-9f31-7e377210da75"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 9 10:10:09.536608 kubelet[1981]: I0209 10:10:09.536430 1981 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6d665afb-7ae4-4070-9f31-7e377210da75-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "6d665afb-7ae4-4070-9f31-7e377210da75" (UID: "6d665afb-7ae4-4070-9f31-7e377210da75"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 9 10:10:09.536608 kubelet[1981]: I0209 10:10:09.536464 1981 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6d665afb-7ae4-4070-9f31-7e377210da75-cni-path" (OuterVolumeSpecName: "cni-path") pod "6d665afb-7ae4-4070-9f31-7e377210da75" (UID: "6d665afb-7ae4-4070-9f31-7e377210da75"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 9 10:10:09.536792 kubelet[1981]: I0209 10:10:09.536703 1981 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6d665afb-7ae4-4070-9f31-7e377210da75-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "6d665afb-7ae4-4070-9f31-7e377210da75" (UID: "6d665afb-7ae4-4070-9f31-7e377210da75"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 9 10:10:09.536792 kubelet[1981]: I0209 10:10:09.536742 1981 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6d665afb-7ae4-4070-9f31-7e377210da75-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "6d665afb-7ae4-4070-9f31-7e377210da75" (UID: "6d665afb-7ae4-4070-9f31-7e377210da75"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 9 10:10:09.537042 kubelet[1981]: I0209 10:10:09.536892 1981 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6d665afb-7ae4-4070-9f31-7e377210da75-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "6d665afb-7ae4-4070-9f31-7e377210da75" (UID: "6d665afb-7ae4-4070-9f31-7e377210da75"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 9 10:10:09.537042 kubelet[1981]: I0209 10:10:09.536758 1981 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6d665afb-7ae4-4070-9f31-7e377210da75-hostproc" (OuterVolumeSpecName: "hostproc") pod "6d665afb-7ae4-4070-9f31-7e377210da75" (UID: "6d665afb-7ae4-4070-9f31-7e377210da75"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 9 10:10:09.537042 kubelet[1981]: I0209 10:10:09.536953 1981 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6d665afb-7ae4-4070-9f31-7e377210da75-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "6d665afb-7ae4-4070-9f31-7e377210da75" (UID: "6d665afb-7ae4-4070-9f31-7e377210da75"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 9 10:10:09.537042 kubelet[1981]: I0209 10:10:09.536971 1981 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6d665afb-7ae4-4070-9f31-7e377210da75-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "6d665afb-7ae4-4070-9f31-7e377210da75" (UID: "6d665afb-7ae4-4070-9f31-7e377210da75"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 9 10:10:09.537042 kubelet[1981]: I0209 10:10:09.536989 1981 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6d665afb-7ae4-4070-9f31-7e377210da75-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "6d665afb-7ae4-4070-9f31-7e377210da75" (UID: "6d665afb-7ae4-4070-9f31-7e377210da75"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 9 10:10:09.539798 kubelet[1981]: I0209 10:10:09.539765 1981 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6d665afb-7ae4-4070-9f31-7e377210da75-kube-api-access-x62td" (OuterVolumeSpecName: "kube-api-access-x62td") pod "6d665afb-7ae4-4070-9f31-7e377210da75" (UID: "6d665afb-7ae4-4070-9f31-7e377210da75"). InnerVolumeSpecName "kube-api-access-x62td". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 9 10:10:09.540144 kubelet[1981]: I0209 10:10:09.540124 1981 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6d665afb-7ae4-4070-9f31-7e377210da75-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "6d665afb-7ae4-4070-9f31-7e377210da75" (UID: "6d665afb-7ae4-4070-9f31-7e377210da75"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 9 10:10:09.542140 kubelet[1981]: I0209 10:10:09.542107 1981 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6d665afb-7ae4-4070-9f31-7e377210da75-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "6d665afb-7ae4-4070-9f31-7e377210da75" (UID: "6d665afb-7ae4-4070-9f31-7e377210da75"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 9 10:10:09.542258 kubelet[1981]: I0209 10:10:09.542222 1981 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6d665afb-7ae4-4070-9f31-7e377210da75-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "6d665afb-7ae4-4070-9f31-7e377210da75" (UID: "6d665afb-7ae4-4070-9f31-7e377210da75"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 9 10:10:09.563200 kubelet[1981]: E0209 10:10:09.563176 1981 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 10:10:09.568784 systemd[1]: Removed slice kubepods-besteffort-poded9385fe_b3dd_44b1_9be6_cac614e5c5fe.slice.
Feb 9 10:10:09.571043 systemd[1]: Removed slice kubepods-burstable-pod6d665afb_7ae4_4070_9f31_7e377210da75.slice.
Feb 9 10:10:09.571121 systemd[1]: kubepods-burstable-pod6d665afb_7ae4_4070_9f31_7e377210da75.slice: Consumed 6.982s CPU time.
Feb 9 10:10:09.636757 kubelet[1981]: I0209 10:10:09.636720 1981 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6d665afb-7ae4-4070-9f31-7e377210da75-lib-modules\") on node \"localhost\" DevicePath \"\""
Feb 9 10:10:09.636757 kubelet[1981]: I0209 10:10:09.636753 1981 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/6d665afb-7ae4-4070-9f31-7e377210da75-clustermesh-secrets\") on node \"localhost\" DevicePath \"\""
Feb 9 10:10:09.636922 kubelet[1981]: I0209 10:10:09.636773 1981 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6d665afb-7ae4-4070-9f31-7e377210da75-xtables-lock\") on node \"localhost\" DevicePath \"\""
Feb 9 10:10:09.636922 kubelet[1981]: I0209 10:10:09.636783 1981 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/6d665afb-7ae4-4070-9f31-7e377210da75-host-proc-sys-net\") on node \"localhost\" DevicePath \"\""
Feb 9 10:10:09.636922 kubelet[1981]: I0209 10:10:09.636793 1981 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/6d665afb-7ae4-4070-9f31-7e377210da75-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\""
Feb 9 10:10:09.636922 kubelet[1981]: I0209 10:10:09.636808 1981 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-x62td\" (UniqueName: \"kubernetes.io/projected/6d665afb-7ae4-4070-9f31-7e377210da75-kube-api-access-x62td\") on node \"localhost\" DevicePath \"\""
Feb 9 10:10:09.636922 kubelet[1981]: I0209 10:10:09.636840 1981 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/6d665afb-7ae4-4070-9f31-7e377210da75-bpf-maps\") on node \"localhost\" DevicePath \"\""
Feb 9 10:10:09.636922 kubelet[1981]: I0209 10:10:09.636849 1981 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/6d665afb-7ae4-4070-9f31-7e377210da75-cilium-run\") on node \"localhost\" DevicePath \"\""
Feb 9 10:10:09.636922 kubelet[1981]: I0209 10:10:09.636858 1981 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/6d665afb-7ae4-4070-9f31-7e377210da75-cilium-cgroup\") on node \"localhost\" DevicePath \"\""
Feb 9 10:10:09.636922 kubelet[1981]: I0209 10:10:09.636868 1981 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/6d665afb-7ae4-4070-9f31-7e377210da75-hubble-tls\") on node \"localhost\" DevicePath \"\""
Feb 9 10:10:09.637104 kubelet[1981]: I0209 10:10:09.636879 1981 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/6d665afb-7ae4-4070-9f31-7e377210da75-cni-path\") on node \"localhost\" DevicePath \"\""
Feb 9 10:10:09.637104 kubelet[1981]: I0209 10:10:09.636888 1981 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6d665afb-7ae4-4070-9f31-7e377210da75-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Feb 9 10:10:09.637104 kubelet[1981]: I0209 10:10:09.636898 1981 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6d665afb-7ae4-4070-9f31-7e377210da75-etc-cni-netd\") on node \"localhost\" DevicePath \"\""
Feb 9 10:10:09.637104 kubelet[1981]: I0209 10:10:09.636914 1981 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/6d665afb-7ae4-4070-9f31-7e377210da75-hostproc\") on node \"localhost\" DevicePath \"\""
Feb 9 10:10:09.728855 kubelet[1981]: I0209 10:10:09.728822 1981 scope.go:117] "RemoveContainer" containerID="e377ac7637f5a81d8bf76296c60c67510a4b203886a8bc58b5c9924de4a67734"
Feb 9 10:10:09.731605 env[1141]: time="2024-02-09T10:10:09.731570119Z" level=info msg="RemoveContainer for \"e377ac7637f5a81d8bf76296c60c67510a4b203886a8bc58b5c9924de4a67734\""
Feb 9 10:10:09.736056 env[1141]: time="2024-02-09T10:10:09.736023009Z" level=info msg="RemoveContainer for \"e377ac7637f5a81d8bf76296c60c67510a4b203886a8bc58b5c9924de4a67734\" returns successfully"
Feb 9 10:10:09.736852 kubelet[1981]: I0209 10:10:09.736830 1981 scope.go:117] "RemoveContainer" containerID="e377ac7637f5a81d8bf76296c60c67510a4b203886a8bc58b5c9924de4a67734"
Feb 9 10:10:09.737189 env[1141]: time="2024-02-09T10:10:09.737070211Z" level=error msg="ContainerStatus for \"e377ac7637f5a81d8bf76296c60c67510a4b203886a8bc58b5c9924de4a67734\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e377ac7637f5a81d8bf76296c60c67510a4b203886a8bc58b5c9924de4a67734\": not found"
Feb 9 10:10:09.737612 kubelet[1981]: E0209 10:10:09.737543 1981 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e377ac7637f5a81d8bf76296c60c67510a4b203886a8bc58b5c9924de4a67734\": not found" containerID="e377ac7637f5a81d8bf76296c60c67510a4b203886a8bc58b5c9924de4a67734"
Feb 9 10:10:09.737823 kubelet[1981]: I0209 10:10:09.737797 1981 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e377ac7637f5a81d8bf76296c60c67510a4b203886a8bc58b5c9924de4a67734"} err="failed to get container status \"e377ac7637f5a81d8bf76296c60c67510a4b203886a8bc58b5c9924de4a67734\": rpc error: code = NotFound desc = an error occurred when try to find container \"e377ac7637f5a81d8bf76296c60c67510a4b203886a8bc58b5c9924de4a67734\": not found"
Feb 9 10:10:09.737865 kubelet[1981]: I0209 10:10:09.737827 1981 scope.go:117] "RemoveContainer" containerID="c13a7ef8f3210a29a875f54183004dbe9958f4c8153764a3f844dfd6dca0696a"
Feb 9 10:10:09.739527 env[1141]: time="2024-02-09T10:10:09.739497257Z" level=info msg="RemoveContainer for \"c13a7ef8f3210a29a875f54183004dbe9958f4c8153764a3f844dfd6dca0696a\""
Feb 9 10:10:09.742089 env[1141]: time="2024-02-09T10:10:09.742057663Z" level=info msg="RemoveContainer for \"c13a7ef8f3210a29a875f54183004dbe9958f4c8153764a3f844dfd6dca0696a\" returns successfully"
Feb 9 10:10:09.742321 kubelet[1981]: I0209 10:10:09.742298 1981 scope.go:117] "RemoveContainer" containerID="5b28e14d8b631d8dec238fcd3cd62d19b88b0a0a490a40a64d9ab70b9198ee31"
Feb 9 10:10:09.744379 env[1141]: time="2024-02-09T10:10:09.744349868Z" level=info msg="RemoveContainer for \"5b28e14d8b631d8dec238fcd3cd62d19b88b0a0a490a40a64d9ab70b9198ee31\""
Feb 9 10:10:09.746865 env[1141]: time="2024-02-09T10:10:09.746835834Z" level=info msg="RemoveContainer for \"5b28e14d8b631d8dec238fcd3cd62d19b88b0a0a490a40a64d9ab70b9198ee31\" returns successfully"
Feb 9 10:10:09.747121 kubelet[1981]: I0209 10:10:09.747089 1981 scope.go:117] "RemoveContainer" containerID="53a859fc31a390adc4cf5ccfb20d18f4d09e54fdbb39843601c6179a01ef983b"
Feb 9 10:10:09.748992 env[1141]: time="2024-02-09T10:10:09.748964879Z" level=info msg="RemoveContainer for \"53a859fc31a390adc4cf5ccfb20d18f4d09e54fdbb39843601c6179a01ef983b\""
Feb 9 10:10:09.752401 env[1141]: time="2024-02-09T10:10:09.752368847Z" level=info msg="RemoveContainer for \"53a859fc31a390adc4cf5ccfb20d18f4d09e54fdbb39843601c6179a01ef983b\" returns successfully"
Feb 9 10:10:09.752679 kubelet[1981]: I0209 10:10:09.752654 1981 scope.go:117] "RemoveContainer" containerID="5628beeeaa552f7f8bcf6542edb44b2fd5c496ff868a03afdba3107e6ed56854"
Feb 9 10:10:09.753944 env[1141]: time="2024-02-09T10:10:09.753916451Z" level=info msg="RemoveContainer for \"5628beeeaa552f7f8bcf6542edb44b2fd5c496ff868a03afdba3107e6ed56854\""
Feb 9 10:10:09.756069 env[1141]: time="2024-02-09T10:10:09.756035016Z" level=info msg="RemoveContainer for \"5628beeeaa552f7f8bcf6542edb44b2fd5c496ff868a03afdba3107e6ed56854\" returns successfully"
Feb 9 10:10:09.756219 kubelet[1981]: I0209 10:10:09.756195 1981 scope.go:117] "RemoveContainer" containerID="f3069c82281bbfb9575898ae068c6c76571ef2868bd1408636e1904fd348fcfa"
Feb 9 10:10:09.757247 env[1141]: time="2024-02-09T10:10:09.757223419Z" level=info msg="RemoveContainer for \"f3069c82281bbfb9575898ae068c6c76571ef2868bd1408636e1904fd348fcfa\""
Feb 9 10:10:09.759582 env[1141]: time="2024-02-09T10:10:09.759543024Z" level=info msg="RemoveContainer for \"f3069c82281bbfb9575898ae068c6c76571ef2868bd1408636e1904fd348fcfa\" returns successfully"
Feb 9 10:10:09.759798 kubelet[1981]: I0209 10:10:09.759779 1981 scope.go:117] "RemoveContainer" containerID="c13a7ef8f3210a29a875f54183004dbe9958f4c8153764a3f844dfd6dca0696a"
Feb 9 10:10:09.760010 env[1141]: time="2024-02-09T10:10:09.759955265Z" level=error msg="ContainerStatus for \"c13a7ef8f3210a29a875f54183004dbe9958f4c8153764a3f844dfd6dca0696a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c13a7ef8f3210a29a875f54183004dbe9958f4c8153764a3f844dfd6dca0696a\": not found"
Feb 9 10:10:09.760118 kubelet[1981]: E0209 10:10:09.760099 1981 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c13a7ef8f3210a29a875f54183004dbe9958f4c8153764a3f844dfd6dca0696a\": not found" containerID="c13a7ef8f3210a29a875f54183004dbe9958f4c8153764a3f844dfd6dca0696a"
Feb 9 10:10:09.760164 kubelet[1981]: I0209 10:10:09.760153 1981 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c13a7ef8f3210a29a875f54183004dbe9958f4c8153764a3f844dfd6dca0696a"} err="failed to get container status \"c13a7ef8f3210a29a875f54183004dbe9958f4c8153764a3f844dfd6dca0696a\": rpc error: code = NotFound desc = an error occurred when try to find container \"c13a7ef8f3210a29a875f54183004dbe9958f4c8153764a3f844dfd6dca0696a\": not found"
Feb 9 10:10:09.760200 kubelet[1981]: I0209 10:10:09.760167 1981 scope.go:117] "RemoveContainer" containerID="5b28e14d8b631d8dec238fcd3cd62d19b88b0a0a490a40a64d9ab70b9198ee31"
Feb 9 10:10:09.760408 env[1141]: time="2024-02-09T10:10:09.760357146Z" level=error msg="ContainerStatus for \"5b28e14d8b631d8dec238fcd3cd62d19b88b0a0a490a40a64d9ab70b9198ee31\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"5b28e14d8b631d8dec238fcd3cd62d19b88b0a0a490a40a64d9ab70b9198ee31\": not found"
Feb 9 10:10:09.760650 kubelet[1981]: E0209 10:10:09.760617 1981 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"5b28e14d8b631d8dec238fcd3cd62d19b88b0a0a490a40a64d9ab70b9198ee31\": not found" containerID="5b28e14d8b631d8dec238fcd3cd62d19b88b0a0a490a40a64d9ab70b9198ee31"
Feb 9 10:10:09.760700 kubelet[1981]: I0209 10:10:09.760664 1981 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"5b28e14d8b631d8dec238fcd3cd62d19b88b0a0a490a40a64d9ab70b9198ee31"} err="failed to get container status \"5b28e14d8b631d8dec238fcd3cd62d19b88b0a0a490a40a64d9ab70b9198ee31\": rpc error: code = NotFound desc = an error occurred when try to find container \"5b28e14d8b631d8dec238fcd3cd62d19b88b0a0a490a40a64d9ab70b9198ee31\": not found"
Feb 9 10:10:09.760700 kubelet[1981]: I0209 10:10:09.760675 1981 scope.go:117] "RemoveContainer" containerID="53a859fc31a390adc4cf5ccfb20d18f4d09e54fdbb39843601c6179a01ef983b"
Feb 9 10:10:09.760886 env[1141]: time="2024-02-09T10:10:09.760838987Z" level=error msg="ContainerStatus for \"53a859fc31a390adc4cf5ccfb20d18f4d09e54fdbb39843601c6179a01ef983b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"53a859fc31a390adc4cf5ccfb20d18f4d09e54fdbb39843601c6179a01ef983b\": not found"
Feb 9 10:10:09.761002 kubelet[1981]: E0209 10:10:09.760984 1981 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"53a859fc31a390adc4cf5ccfb20d18f4d09e54fdbb39843601c6179a01ef983b\": not found" containerID="53a859fc31a390adc4cf5ccfb20d18f4d09e54fdbb39843601c6179a01ef983b"
Feb 9 10:10:09.761080 kubelet[1981]: I0209 10:10:09.761015 1981 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"53a859fc31a390adc4cf5ccfb20d18f4d09e54fdbb39843601c6179a01ef983b"} err="failed to get container status \"53a859fc31a390adc4cf5ccfb20d18f4d09e54fdbb39843601c6179a01ef983b\": rpc error: code = NotFound desc = an error occurred when try to find container \"53a859fc31a390adc4cf5ccfb20d18f4d09e54fdbb39843601c6179a01ef983b\": not found"
Feb 9 10:10:09.761080 kubelet[1981]: I0209 10:10:09.761026 1981 scope.go:117] "RemoveContainer" containerID="5628beeeaa552f7f8bcf6542edb44b2fd5c496ff868a03afdba3107e6ed56854"
Feb 9 10:10:09.761410 env[1141]: time="2024-02-09T10:10:09.761363228Z" level=error msg="ContainerStatus for \"5628beeeaa552f7f8bcf6542edb44b2fd5c496ff868a03afdba3107e6ed56854\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"5628beeeaa552f7f8bcf6542edb44b2fd5c496ff868a03afdba3107e6ed56854\": not found"
Feb 9 10:10:09.761875 kubelet[1981]: E0209 10:10:09.761857 1981 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"5628beeeaa552f7f8bcf6542edb44b2fd5c496ff868a03afdba3107e6ed56854\": not found" containerID="5628beeeaa552f7f8bcf6542edb44b2fd5c496ff868a03afdba3107e6ed56854"
Feb 9 10:10:09.761999 kubelet[1981]: I0209 10:10:09.761985 1981 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"5628beeeaa552f7f8bcf6542edb44b2fd5c496ff868a03afdba3107e6ed56854"} err="failed to get container status \"5628beeeaa552f7f8bcf6542edb44b2fd5c496ff868a03afdba3107e6ed56854\": rpc error: code = NotFound desc = an error occurred when try to find container \"5628beeeaa552f7f8bcf6542edb44b2fd5c496ff868a03afdba3107e6ed56854\": not found"
Feb 9 10:10:09.762085 kubelet[1981]: I0209 10:10:09.762074 1981 scope.go:117] "RemoveContainer" containerID="f3069c82281bbfb9575898ae068c6c76571ef2868bd1408636e1904fd348fcfa"
Feb 9 10:10:09.762359 env[1141]: time="2024-02-09T10:10:09.762306470Z" level=error msg="ContainerStatus for \"f3069c82281bbfb9575898ae068c6c76571ef2868bd1408636e1904fd348fcfa\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f3069c82281bbfb9575898ae068c6c76571ef2868bd1408636e1904fd348fcfa\": not found"
Feb 9 10:10:09.762486 kubelet[1981]: E0209 10:10:09.762469 1981 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f3069c82281bbfb9575898ae068c6c76571ef2868bd1408636e1904fd348fcfa\": not found" containerID="f3069c82281bbfb9575898ae068c6c76571ef2868bd1408636e1904fd348fcfa"
Feb 9 10:10:09.762537 kubelet[1981]: I0209 10:10:09.762498 1981 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f3069c82281bbfb9575898ae068c6c76571ef2868bd1408636e1904fd348fcfa"} err="failed to get container status \"f3069c82281bbfb9575898ae068c6c76571ef2868bd1408636e1904fd348fcfa\": rpc error: code = NotFound desc = an error occurred when try to find container \"f3069c82281bbfb9575898ae068c6c76571ef2868bd1408636e1904fd348fcfa\": not found"
Feb 9 10:10:10.260578 systemd[1]: var-lib-kubelet-pods-ed9385fe\x2db3dd\x2d44b1\x2d9be6\x2dcac614e5c5fe-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dlmqbn.mount: Deactivated successfully.
Feb 9 10:10:10.260684 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9d39c1694fcb4520398649ca5f523dccea2f4b15bcfc2f0cfe525f2730f1c00b-rootfs.mount: Deactivated successfully.
Feb 9 10:10:10.260735 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-9d39c1694fcb4520398649ca5f523dccea2f4b15bcfc2f0cfe525f2730f1c00b-shm.mount: Deactivated successfully.
Feb 9 10:10:10.260790 systemd[1]: var-lib-kubelet-pods-6d665afb\x2d7ae4\x2d4070\x2d9f31\x2d7e377210da75-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dx62td.mount: Deactivated successfully.
Feb 9 10:10:10.260852 systemd[1]: var-lib-kubelet-pods-6d665afb\x2d7ae4\x2d4070\x2d9f31\x2d7e377210da75-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Feb 9 10:10:10.260908 systemd[1]: var-lib-kubelet-pods-6d665afb\x2d7ae4\x2d4070\x2d9f31\x2d7e377210da75-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Feb 9 10:10:10.561207 kubelet[1981]: E0209 10:10:10.561111 1981 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 10:10:10.614116 kubelet[1981]: E0209 10:10:10.614088 1981 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 9 10:10:11.220007 sshd[3571]: pam_unix(sshd:session): session closed for user core
Feb 9 10:10:11.222711 systemd[1]: sshd@20-10.0.0.132:22-10.0.0.1:58592.service: Deactivated successfully.
Feb 9 10:10:11.223332 systemd[1]: session-21.scope: Deactivated successfully.
Feb 9 10:10:11.223966 systemd-logind[1130]: Session 21 logged out. Waiting for processes to exit.
Feb 9 10:10:11.225263 systemd[1]: Started sshd@21-10.0.0.132:22-10.0.0.1:58594.service.
Feb 9 10:10:11.226622 systemd-logind[1130]: Removed session 21.
Feb 9 10:10:11.268784 sshd[3736]: Accepted publickey for core from 10.0.0.1 port 58594 ssh2: RSA SHA256:g0U6KM199woo3jVvTXJmbHJGWRxGxX9UCqO141QChXg
Feb 9 10:10:11.270142 sshd[3736]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 10:10:11.273233 systemd-logind[1130]: New session 22 of user core.
Feb 9 10:10:11.274338 systemd[1]: Started session-22.scope.
Feb 9 10:10:11.563770 kubelet[1981]: I0209 10:10:11.563722 1981 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="6d665afb-7ae4-4070-9f31-7e377210da75" path="/var/lib/kubelet/pods/6d665afb-7ae4-4070-9f31-7e377210da75/volumes"
Feb 9 10:10:11.564302 kubelet[1981]: I0209 10:10:11.564286 1981 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="ed9385fe-b3dd-44b1-9be6-cac614e5c5fe" path="/var/lib/kubelet/pods/ed9385fe-b3dd-44b1-9be6-cac614e5c5fe/volumes"
Feb 9 10:10:12.202415 sshd[3736]: pam_unix(sshd:session): session closed for user core
Feb 9 10:10:12.206763 systemd[1]: Started sshd@22-10.0.0.132:22-10.0.0.1:58596.service.
Feb 9 10:10:12.208908 systemd[1]: sshd@21-10.0.0.132:22-10.0.0.1:58594.service: Deactivated successfully.
Feb 9 10:10:12.209624 systemd[1]: session-22.scope: Deactivated successfully.
Feb 9 10:10:12.211128 systemd-logind[1130]: Session 22 logged out. Waiting for processes to exit.
Feb 9 10:10:12.214226 systemd-logind[1130]: Removed session 22.
Feb 9 10:10:12.225934 kubelet[1981]: I0209 10:10:12.222889 1981 topology_manager.go:215] "Topology Admit Handler" podUID="c5f66be5-6047-472c-b8e5-d81900c2a4e0" podNamespace="kube-system" podName="cilium-4qdmj"
Feb 9 10:10:12.225934 kubelet[1981]: E0209 10:10:12.222946 1981 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="ed9385fe-b3dd-44b1-9be6-cac614e5c5fe" containerName="cilium-operator"
Feb 9 10:10:12.225934 kubelet[1981]: E0209 10:10:12.222955 1981 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="6d665afb-7ae4-4070-9f31-7e377210da75" containerName="cilium-agent"
Feb 9 10:10:12.225934 kubelet[1981]: E0209 10:10:12.222964 1981 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="6d665afb-7ae4-4070-9f31-7e377210da75" containerName="mount-cgroup"
Feb 9 10:10:12.225934 kubelet[1981]: E0209 10:10:12.222970 1981 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="6d665afb-7ae4-4070-9f31-7e377210da75" containerName="apply-sysctl-overwrites"
Feb 9 10:10:12.225934 kubelet[1981]: E0209 10:10:12.222976 1981 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="6d665afb-7ae4-4070-9f31-7e377210da75" containerName="mount-bpf-fs"
Feb 9 10:10:12.225934 kubelet[1981]: E0209 10:10:12.222983 1981 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="6d665afb-7ae4-4070-9f31-7e377210da75" containerName="clean-cilium-state"
Feb 9 10:10:12.225934 kubelet[1981]: I0209 10:10:12.223003 1981 memory_manager.go:346] "RemoveStaleState removing state" podUID="6d665afb-7ae4-4070-9f31-7e377210da75" containerName="cilium-agent"
Feb 9 10:10:12.225934 kubelet[1981]: I0209 10:10:12.223010 1981 memory_manager.go:346] "RemoveStaleState removing state" podUID="ed9385fe-b3dd-44b1-9be6-cac614e5c5fe" containerName="cilium-operator"
Feb 9 10:10:12.229282 kubelet[1981]: W0209 10:10:12.229075 1981 reflector.go:535] object-"kube-system"/"cilium-ipsec-keys": failed to list *v1.Secret: secrets "cilium-ipsec-keys" is forbidden: User "system:node:localhost" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object
Feb 9 10:10:12.229532 kubelet[1981]: E0209 10:10:12.229510 1981 reflector.go:147] object-"kube-system"/"cilium-ipsec-keys": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "cilium-ipsec-keys" is forbidden: User "system:node:localhost" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object
Feb 9 10:10:12.231848 systemd[1]: Created slice kubepods-burstable-podc5f66be5_6047_472c_b8e5_d81900c2a4e0.slice.
Feb 9 10:10:12.257938 sshd[3749]: Accepted publickey for core from 10.0.0.1 port 58596 ssh2: RSA SHA256:g0U6KM199woo3jVvTXJmbHJGWRxGxX9UCqO141QChXg
Feb 9 10:10:12.259186 sshd[3749]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 10:10:12.262263 systemd-logind[1130]: New session 23 of user core.
Feb 9 10:10:12.263102 systemd[1]: Started session-23.scope.
Feb 9 10:10:12.349927 kubelet[1981]: I0209 10:10:12.349223 1981 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c5f66be5-6047-472c-b8e5-d81900c2a4e0-cni-path\") pod \"cilium-4qdmj\" (UID: \"c5f66be5-6047-472c-b8e5-d81900c2a4e0\") " pod="kube-system/cilium-4qdmj"
Feb 9 10:10:12.349927 kubelet[1981]: I0209 10:10:12.349337 1981 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tv8dr\" (UniqueName: \"kubernetes.io/projected/c5f66be5-6047-472c-b8e5-d81900c2a4e0-kube-api-access-tv8dr\") pod \"cilium-4qdmj\" (UID: \"c5f66be5-6047-472c-b8e5-d81900c2a4e0\") " pod="kube-system/cilium-4qdmj"
Feb 9 10:10:12.349927 kubelet[1981]: I0209 10:10:12.349373 1981 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c5f66be5-6047-472c-b8e5-d81900c2a4e0-hostproc\") pod \"cilium-4qdmj\" (UID: \"c5f66be5-6047-472c-b8e5-d81900c2a4e0\") " pod="kube-system/cilium-4qdmj"
Feb 9 10:10:12.349927 kubelet[1981]: I0209 10:10:12.349434 1981 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c5f66be5-6047-472c-b8e5-d81900c2a4e0-hubble-tls\") pod \"cilium-4qdmj\" (UID: \"c5f66be5-6047-472c-b8e5-d81900c2a4e0\") " pod="kube-system/cilium-4qdmj"
Feb 9 10:10:12.349927 kubelet[1981]: I0209 10:10:12.349496 1981 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c5f66be5-6047-472c-b8e5-d81900c2a4e0-xtables-lock\") pod \"cilium-4qdmj\" (UID: \"c5f66be5-6047-472c-b8e5-d81900c2a4e0\") " pod="kube-system/cilium-4qdmj"
Feb 9 10:10:12.349927 kubelet[1981]: I0209 10:10:12.349520 1981 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/c5f66be5-6047-472c-b8e5-d81900c2a4e0-cilium-ipsec-secrets\") pod \"cilium-4qdmj\" (UID: \"c5f66be5-6047-472c-b8e5-d81900c2a4e0\") " pod="kube-system/cilium-4qdmj"
Feb 9 10:10:12.350199 kubelet[1981]: I0209 10:10:12.349577 1981 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c5f66be5-6047-472c-b8e5-d81900c2a4e0-host-proc-sys-net\") pod \"cilium-4qdmj\" (UID: \"c5f66be5-6047-472c-b8e5-d81900c2a4e0\") " pod="kube-system/cilium-4qdmj"
Feb 9 10:10:12.350199 kubelet[1981]: I0209 10:10:12.349598 1981 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c5f66be5-6047-472c-b8e5-d81900c2a4e0-bpf-maps\") pod \"cilium-4qdmj\" (UID: \"c5f66be5-6047-472c-b8e5-d81900c2a4e0\") " pod="kube-system/cilium-4qdmj"
Feb 9 10:10:12.350199 kubelet[1981]: I0209 10:10:12.349648 1981 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c5f66be5-6047-472c-b8e5-d81900c2a4e0-lib-modules\") pod \"cilium-4qdmj\" (UID: \"c5f66be5-6047-472c-b8e5-d81900c2a4e0\") " pod="kube-system/cilium-4qdmj"
Feb 9 10:10:12.350199 kubelet[1981]: I0209 10:10:12.349669 1981 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c5f66be5-6047-472c-b8e5-d81900c2a4e0-clustermesh-secrets\") pod \"cilium-4qdmj\" (UID: \"c5f66be5-6047-472c-b8e5-d81900c2a4e0\") " pod="kube-system/cilium-4qdmj"
Feb 9 10:10:12.350199 kubelet[1981]: I0209 10:10:12.349711 1981 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c5f66be5-6047-472c-b8e5-d81900c2a4e0-etc-cni-netd\") pod \"cilium-4qdmj\" (UID: \"c5f66be5-6047-472c-b8e5-d81900c2a4e0\") " pod="kube-system/cilium-4qdmj"
Feb 9 10:10:12.350199 kubelet[1981]: I0209 10:10:12.349731 1981 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c5f66be5-6047-472c-b8e5-d81900c2a4e0-cilium-config-path\") pod \"cilium-4qdmj\" (UID: \"c5f66be5-6047-472c-b8e5-d81900c2a4e0\") " pod="kube-system/cilium-4qdmj"
Feb 9 10:10:12.350329 kubelet[1981]: I0209 10:10:12.349788 1981 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c5f66be5-6047-472c-b8e5-d81900c2a4e0-cilium-run\") pod \"cilium-4qdmj\" (UID: \"c5f66be5-6047-472c-b8e5-d81900c2a4e0\") " pod="kube-system/cilium-4qdmj"
Feb 9 10:10:12.350329 kubelet[1981]: I0209 10:10:12.349841 1981 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c5f66be5-6047-472c-b8e5-d81900c2a4e0-host-proc-sys-kernel\") pod \"cilium-4qdmj\" (UID: \"c5f66be5-6047-472c-b8e5-d81900c2a4e0\") " pod="kube-system/cilium-4qdmj"
Feb 9 10:10:12.350329 kubelet[1981]: I0209 10:10:12.349864 1981 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c5f66be5-6047-472c-b8e5-d81900c2a4e0-cilium-cgroup\") pod \"cilium-4qdmj\" (UID: \"c5f66be5-6047-472c-b8e5-d81900c2a4e0\") " pod="kube-system/cilium-4qdmj"
Feb 9 10:10:12.391220 systemd[1]: Started sshd@23-10.0.0.132:22-10.0.0.1:58612.service.
Feb 9 10:10:12.392833 kubelet[1981]: E0209 10:10:12.392672 1981 pod_workers.go:1300] "Error syncing pod, skipping" err="unmounted volumes=[bpf-maps cilium-cgroup cilium-config-path cilium-ipsec-secrets cilium-run clustermesh-secrets cni-path etc-cni-netd host-proc-sys-kernel host-proc-sys-net hostproc hubble-tls kube-api-access-tv8dr lib-modules xtables-lock], unattached volumes=[], failed to process volumes=[]: context canceled" pod="kube-system/cilium-4qdmj" podUID="c5f66be5-6047-472c-b8e5-d81900c2a4e0"
Feb 9 10:10:12.392833 sshd[3749]: pam_unix(sshd:session): session closed for user core
Feb 9 10:10:12.397648 systemd[1]: sshd@22-10.0.0.132:22-10.0.0.1:58596.service: Deactivated successfully.
Feb 9 10:10:12.398385 systemd[1]: session-23.scope: Deactivated successfully.
Feb 9 10:10:12.398956 systemd-logind[1130]: Session 23 logged out. Waiting for processes to exit.
Feb 9 10:10:12.402930 systemd-logind[1130]: Removed session 23.
Feb 9 10:10:12.436616 sshd[3762]: Accepted publickey for core from 10.0.0.1 port 58612 ssh2: RSA SHA256:g0U6KM199woo3jVvTXJmbHJGWRxGxX9UCqO141QChXg
Feb 9 10:10:12.438007 sshd[3762]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 10:10:12.441207 systemd-logind[1130]: New session 24 of user core.
Feb 9 10:10:12.442024 systemd[1]: Started session-24.scope.
Feb 9 10:10:12.854538 kubelet[1981]: I0209 10:10:12.854491 1981 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c5f66be5-6047-472c-b8e5-d81900c2a4e0-host-proc-sys-net\") pod \"c5f66be5-6047-472c-b8e5-d81900c2a4e0\" (UID: \"c5f66be5-6047-472c-b8e5-d81900c2a4e0\") "
Feb 9 10:10:12.854538 kubelet[1981]: I0209 10:10:12.854541 1981 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c5f66be5-6047-472c-b8e5-d81900c2a4e0-hubble-tls\") pod \"c5f66be5-6047-472c-b8e5-d81900c2a4e0\" (UID: \"c5f66be5-6047-472c-b8e5-d81900c2a4e0\") "
Feb 9 10:10:12.854901 kubelet[1981]: I0209 10:10:12.854565 1981 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c5f66be5-6047-472c-b8e5-d81900c2a4e0-clustermesh-secrets\") pod \"c5f66be5-6047-472c-b8e5-d81900c2a4e0\" (UID: \"c5f66be5-6047-472c-b8e5-d81900c2a4e0\") "
Feb 9 10:10:12.854901 kubelet[1981]: I0209 10:10:12.854589 1981 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c5f66be5-6047-472c-b8e5-d81900c2a4e0-cilium-config-path\") pod \"c5f66be5-6047-472c-b8e5-d81900c2a4e0\" (UID: \"c5f66be5-6047-472c-b8e5-d81900c2a4e0\") "
Feb 9 10:10:12.854901 kubelet[1981]: I0209 10:10:12.854609 1981 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c5f66be5-6047-472c-b8e5-d81900c2a4e0-cilium-run\") pod \"c5f66be5-6047-472c-b8e5-d81900c2a4e0\" (UID: \"c5f66be5-6047-472c-b8e5-d81900c2a4e0\") "
Feb 9 10:10:12.854901 kubelet[1981]: I0209 10:10:12.854630 1981 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tv8dr\" (UniqueName: \"kubernetes.io/projected/c5f66be5-6047-472c-b8e5-d81900c2a4e0-kube-api-access-tv8dr\") pod \"c5f66be5-6047-472c-b8e5-d81900c2a4e0\" (UID: \"c5f66be5-6047-472c-b8e5-d81900c2a4e0\") "
Feb 9 10:10:12.854901 kubelet[1981]: I0209 10:10:12.854647 1981 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c5f66be5-6047-472c-b8e5-d81900c2a4e0-hostproc\") pod \"c5f66be5-6047-472c-b8e5-d81900c2a4e0\" (UID: \"c5f66be5-6047-472c-b8e5-d81900c2a4e0\") "
Feb 9 10:10:12.854901 kubelet[1981]: I0209 10:10:12.854667 1981 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c5f66be5-6047-472c-b8e5-d81900c2a4e0-etc-cni-netd\") pod \"c5f66be5-6047-472c-b8e5-d81900c2a4e0\" (UID: \"c5f66be5-6047-472c-b8e5-d81900c2a4e0\") "
Feb 9 10:10:12.855040 kubelet[1981]: I0209 10:10:12.854684 1981 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c5f66be5-6047-472c-b8e5-d81900c2a4e0-cni-path\") pod \"c5f66be5-6047-472c-b8e5-d81900c2a4e0\" (UID: \"c5f66be5-6047-472c-b8e5-d81900c2a4e0\") "
Feb 9 10:10:12.855040 kubelet[1981]: I0209 10:10:12.854700 1981 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c5f66be5-6047-472c-b8e5-d81900c2a4e0-xtables-lock\") pod \"c5f66be5-6047-472c-b8e5-d81900c2a4e0\" (UID: \"c5f66be5-6047-472c-b8e5-d81900c2a4e0\") "
Feb 9 10:10:12.855040 kubelet[1981]: I0209 10:10:12.854718 1981 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c5f66be5-6047-472c-b8e5-d81900c2a4e0-host-proc-sys-kernel\") pod \"c5f66be5-6047-472c-b8e5-d81900c2a4e0\" (UID: \"c5f66be5-6047-472c-b8e5-d81900c2a4e0\") "
Feb 9 10:10:12.855040 kubelet[1981]: I0209 10:10:12.854737 1981 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c5f66be5-6047-472c-b8e5-d81900c2a4e0-bpf-maps\") pod \"c5f66be5-6047-472c-b8e5-d81900c2a4e0\" (UID: \"c5f66be5-6047-472c-b8e5-d81900c2a4e0\") "
Feb 9 10:10:12.855040 kubelet[1981]: I0209 10:10:12.854755 1981 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c5f66be5-6047-472c-b8e5-d81900c2a4e0-cilium-cgroup\") pod \"c5f66be5-6047-472c-b8e5-d81900c2a4e0\" (UID: \"c5f66be5-6047-472c-b8e5-d81900c2a4e0\") "
Feb 9 10:10:12.855040 kubelet[1981]: I0209 10:10:12.854773 1981 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c5f66be5-6047-472c-b8e5-d81900c2a4e0-lib-modules\") pod \"c5f66be5-6047-472c-b8e5-d81900c2a4e0\" (UID: \"c5f66be5-6047-472c-b8e5-d81900c2a4e0\") "
Feb 9 10:10:12.855170 kubelet[1981]: I0209 10:10:12.854870 1981 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c5f66be5-6047-472c-b8e5-d81900c2a4e0-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "c5f66be5-6047-472c-b8e5-d81900c2a4e0" (UID: "c5f66be5-6047-472c-b8e5-d81900c2a4e0"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 9 10:10:12.855170 kubelet[1981]: I0209 10:10:12.854894 1981 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c5f66be5-6047-472c-b8e5-d81900c2a4e0-hostproc" (OuterVolumeSpecName: "hostproc") pod "c5f66be5-6047-472c-b8e5-d81900c2a4e0" (UID: "c5f66be5-6047-472c-b8e5-d81900c2a4e0"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 9 10:10:12.855170 kubelet[1981]: I0209 10:10:12.854915 1981 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c5f66be5-6047-472c-b8e5-d81900c2a4e0-cni-path" (OuterVolumeSpecName: "cni-path") pod "c5f66be5-6047-472c-b8e5-d81900c2a4e0" (UID: "c5f66be5-6047-472c-b8e5-d81900c2a4e0"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 9 10:10:12.855170 kubelet[1981]: I0209 10:10:12.854902 1981 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c5f66be5-6047-472c-b8e5-d81900c2a4e0-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "c5f66be5-6047-472c-b8e5-d81900c2a4e0" (UID: "c5f66be5-6047-472c-b8e5-d81900c2a4e0"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 9 10:10:12.855170 kubelet[1981]: I0209 10:10:12.854932 1981 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c5f66be5-6047-472c-b8e5-d81900c2a4e0-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "c5f66be5-6047-472c-b8e5-d81900c2a4e0" (UID: "c5f66be5-6047-472c-b8e5-d81900c2a4e0"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 9 10:10:12.855276 kubelet[1981]: I0209 10:10:12.854943 1981 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c5f66be5-6047-472c-b8e5-d81900c2a4e0-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "c5f66be5-6047-472c-b8e5-d81900c2a4e0" (UID: "c5f66be5-6047-472c-b8e5-d81900c2a4e0"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 9 10:10:12.855276 kubelet[1981]: I0209 10:10:12.854949 1981 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c5f66be5-6047-472c-b8e5-d81900c2a4e0-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "c5f66be5-6047-472c-b8e5-d81900c2a4e0" (UID: "c5f66be5-6047-472c-b8e5-d81900c2a4e0"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 9 10:10:12.855276 kubelet[1981]: I0209 10:10:12.854965 1981 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c5f66be5-6047-472c-b8e5-d81900c2a4e0-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "c5f66be5-6047-472c-b8e5-d81900c2a4e0" (UID: "c5f66be5-6047-472c-b8e5-d81900c2a4e0"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 9 10:10:12.855276 kubelet[1981]: I0209 10:10:12.854980 1981 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c5f66be5-6047-472c-b8e5-d81900c2a4e0-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "c5f66be5-6047-472c-b8e5-d81900c2a4e0" (UID: "c5f66be5-6047-472c-b8e5-d81900c2a4e0"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 9 10:10:12.855276 kubelet[1981]: I0209 10:10:12.855260 1981 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c5f66be5-6047-472c-b8e5-d81900c2a4e0-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "c5f66be5-6047-472c-b8e5-d81900c2a4e0" (UID: "c5f66be5-6047-472c-b8e5-d81900c2a4e0"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 9 10:10:12.856821 kubelet[1981]: I0209 10:10:12.856764 1981 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c5f66be5-6047-472c-b8e5-d81900c2a4e0-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "c5f66be5-6047-472c-b8e5-d81900c2a4e0" (UID: "c5f66be5-6047-472c-b8e5-d81900c2a4e0"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 9 10:10:12.857942 kubelet[1981]: I0209 10:10:12.857911 1981 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c5f66be5-6047-472c-b8e5-d81900c2a4e0-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "c5f66be5-6047-472c-b8e5-d81900c2a4e0" (UID: "c5f66be5-6047-472c-b8e5-d81900c2a4e0"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 9 10:10:12.858555 systemd[1]: var-lib-kubelet-pods-c5f66be5\x2d6047\x2d472c\x2db8e5\x2dd81900c2a4e0-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dtv8dr.mount: Deactivated successfully.
Feb 9 10:10:12.858656 systemd[1]: var-lib-kubelet-pods-c5f66be5\x2d6047\x2d472c\x2db8e5\x2dd81900c2a4e0-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Feb 9 10:10:12.858711 systemd[1]: var-lib-kubelet-pods-c5f66be5\x2d6047\x2d472c\x2db8e5\x2dd81900c2a4e0-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Feb 9 10:10:12.858935 kubelet[1981]: I0209 10:10:12.858914 1981 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c5f66be5-6047-472c-b8e5-d81900c2a4e0-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "c5f66be5-6047-472c-b8e5-d81900c2a4e0" (UID: "c5f66be5-6047-472c-b8e5-d81900c2a4e0"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 9 10:10:12.859085 kubelet[1981]: I0209 10:10:12.859067 1981 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c5f66be5-6047-472c-b8e5-d81900c2a4e0-kube-api-access-tv8dr" (OuterVolumeSpecName: "kube-api-access-tv8dr") pod "c5f66be5-6047-472c-b8e5-d81900c2a4e0" (UID: "c5f66be5-6047-472c-b8e5-d81900c2a4e0"). InnerVolumeSpecName "kube-api-access-tv8dr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 9 10:10:12.955742 kubelet[1981]: I0209 10:10:12.955712 1981 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-tv8dr\" (UniqueName: \"kubernetes.io/projected/c5f66be5-6047-472c-b8e5-d81900c2a4e0-kube-api-access-tv8dr\") on node \"localhost\" DevicePath \"\""
Feb 9 10:10:12.955965 kubelet[1981]: I0209 10:10:12.955951 1981 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c5f66be5-6047-472c-b8e5-d81900c2a4e0-hostproc\") on node \"localhost\" DevicePath \"\""
Feb 9 10:10:12.956032 kubelet[1981]: I0209 10:10:12.956022 1981 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c5f66be5-6047-472c-b8e5-d81900c2a4e0-etc-cni-netd\") on node \"localhost\" DevicePath \"\""
Feb 9 10:10:12.956089 kubelet[1981]: I0209 10:10:12.956080 1981 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c5f66be5-6047-472c-b8e5-d81900c2a4e0-cni-path\") on node \"localhost\" DevicePath \"\""
Feb 9 10:10:12.956156 kubelet[1981]: I0209 10:10:12.956146 1981 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c5f66be5-6047-472c-b8e5-d81900c2a4e0-xtables-lock\") on node \"localhost\" DevicePath \"\""
Feb 9 10:10:12.956217 kubelet[1981]: I0209 10:10:12.956209 1981 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c5f66be5-6047-472c-b8e5-d81900c2a4e0-bpf-maps\") on node \"localhost\" DevicePath \"\""
Feb 9 10:10:12.956277 kubelet[1981]: I0209 10:10:12.956268 1981 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c5f66be5-6047-472c-b8e5-d81900c2a4e0-cilium-cgroup\") on node \"localhost\" DevicePath \"\""
Feb 9 10:10:12.956332 kubelet[1981]: I0209 10:10:12.956323 1981 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c5f66be5-6047-472c-b8e5-d81900c2a4e0-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\""
Feb 9 10:10:12.956386 kubelet[1981]: I0209 10:10:12.956378 1981 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c5f66be5-6047-472c-b8e5-d81900c2a4e0-lib-modules\") on node \"localhost\" DevicePath \"\""
Feb 9 10:10:12.956452 kubelet[1981]: I0209 10:10:12.956432 1981 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c5f66be5-6047-472c-b8e5-d81900c2a4e0-host-proc-sys-net\") on node \"localhost\" DevicePath \"\""
Feb 9 10:10:12.956510 kubelet[1981]: I0209 10:10:12.956501 1981 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c5f66be5-6047-472c-b8e5-d81900c2a4e0-hubble-tls\") on node \"localhost\" DevicePath \"\""
Feb 9 10:10:12.956562 kubelet[1981]: I0209 10:10:12.956554 1981 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c5f66be5-6047-472c-b8e5-d81900c2a4e0-clustermesh-secrets\") on node \"localhost\" DevicePath \"\""
Feb 9 10:10:12.956619 kubelet[1981]: I0209 10:10:12.956611 1981 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c5f66be5-6047-472c-b8e5-d81900c2a4e0-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Feb 9 10:10:12.956780 kubelet[1981]: I0209 10:10:12.956766 1981 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c5f66be5-6047-472c-b8e5-d81900c2a4e0-cilium-run\") on node \"localhost\" DevicePath \"\""
Feb 9 10:10:13.257767 kubelet[1981]: I0209 10:10:13.257728 1981 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/c5f66be5-6047-472c-b8e5-d81900c2a4e0-cilium-ipsec-secrets\") pod \"c5f66be5-6047-472c-b8e5-d81900c2a4e0\" (UID: \"c5f66be5-6047-472c-b8e5-d81900c2a4e0\") "
Feb 9 10:10:13.260373 kubelet[1981]: I0209 10:10:13.260332 1981 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c5f66be5-6047-472c-b8e5-d81900c2a4e0-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "c5f66be5-6047-472c-b8e5-d81900c2a4e0" (UID: "c5f66be5-6047-472c-b8e5-d81900c2a4e0"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 9 10:10:13.358691 kubelet[1981]: I0209 10:10:13.358646 1981 reconciler_common.go:300] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/c5f66be5-6047-472c-b8e5-d81900c2a4e0-cilium-ipsec-secrets\") on node \"localhost\" DevicePath \"\""
Feb 9 10:10:13.566932 systemd[1]: Removed slice kubepods-burstable-podc5f66be5_6047_472c_b8e5_d81900c2a4e0.slice.
Feb 9 10:10:13.776378 kubelet[1981]: I0209 10:10:13.776341 1981 topology_manager.go:215] "Topology Admit Handler" podUID="345b270a-a286-4873-8bf6-4b953750774b" podNamespace="kube-system" podName="cilium-472fk"
Feb 9 10:10:13.784041 systemd[1]: Created slice kubepods-burstable-pod345b270a_a286_4873_8bf6_4b953750774b.slice.
Feb 9 10:10:13.862640 kubelet[1981]: I0209 10:10:13.862538 1981 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/345b270a-a286-4873-8bf6-4b953750774b-xtables-lock\") pod \"cilium-472fk\" (UID: \"345b270a-a286-4873-8bf6-4b953750774b\") " pod="kube-system/cilium-472fk"
Feb 9 10:10:13.862640 kubelet[1981]: I0209 10:10:13.862629 1981 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/345b270a-a286-4873-8bf6-4b953750774b-cilium-cgroup\") pod \"cilium-472fk\" (UID: \"345b270a-a286-4873-8bf6-4b953750774b\") " pod="kube-system/cilium-472fk"
Feb 9 10:10:13.863010 kubelet[1981]: I0209 10:10:13.862669 1981 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/345b270a-a286-4873-8bf6-4b953750774b-lib-modules\") pod \"cilium-472fk\" (UID: \"345b270a-a286-4873-8bf6-4b953750774b\") " pod="kube-system/cilium-472fk"
Feb 9 10:10:13.863010 kubelet[1981]: I0209 10:10:13.862694 1981 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/345b270a-a286-4873-8bf6-4b953750774b-hostproc\") pod \"cilium-472fk\" (UID: \"345b270a-a286-4873-8bf6-4b953750774b\") " pod="kube-system/cilium-472fk"
Feb 9 10:10:13.863010 kubelet[1981]: I0209 10:10:13.862713 1981 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/345b270a-a286-4873-8bf6-4b953750774b-cni-path\") pod \"cilium-472fk\" (UID: \"345b270a-a286-4873-8bf6-4b953750774b\") " pod="kube-system/cilium-472fk"
Feb 9 10:10:13.863010 kubelet[1981]: I0209 10:10:13.862735 1981 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/345b270a-a286-4873-8bf6-4b953750774b-cilium-config-path\") pod \"cilium-472fk\" (UID: \"345b270a-a286-4873-8bf6-4b953750774b\") " pod="kube-system/cilium-472fk"
Feb 9 10:10:13.863010 kubelet[1981]: I0209 10:10:13.862756 1981 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j7skg\" (UniqueName: \"kubernetes.io/projected/345b270a-a286-4873-8bf6-4b953750774b-kube-api-access-j7skg\") pod \"cilium-472fk\" (UID: \"345b270a-a286-4873-8bf6-4b953750774b\") " pod="kube-system/cilium-472fk"
Feb 9 10:10:13.863010 kubelet[1981]: I0209 10:10:13.862776 1981 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/345b270a-a286-4873-8bf6-4b953750774b-etc-cni-netd\") pod \"cilium-472fk\" (UID: \"345b270a-a286-4873-8bf6-4b953750774b\") " pod="kube-system/cilium-472fk"
Feb 9 10:10:13.863147 kubelet[1981]: I0209 10:10:13.862796 1981 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/345b270a-a286-4873-8bf6-4b953750774b-host-proc-sys-kernel\") pod \"cilium-472fk\" (UID: \"345b270a-a286-4873-8bf6-4b953750774b\") " pod="kube-system/cilium-472fk"
Feb 9 10:10:13.863147 kubelet[1981]: I0209 10:10:13.862844 1981 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/345b270a-a286-4873-8bf6-4b953750774b-clustermesh-secrets\") pod \"cilium-472fk\" (UID: \"345b270a-a286-4873-8bf6-4b953750774b\") " pod="kube-system/cilium-472fk"
Feb 9 10:10:13.863147 kubelet[1981]: I0209 10:10:13.862866 1981 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/345b270a-a286-4873-8bf6-4b953750774b-host-proc-sys-net\") pod \"cilium-472fk\" (UID: \"345b270a-a286-4873-8bf6-4b953750774b\") " pod="kube-system/cilium-472fk"
Feb 9 10:10:13.863147 kubelet[1981]: I0209 10:10:13.862919 1981 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/345b270a-a286-4873-8bf6-4b953750774b-bpf-maps\") pod \"cilium-472fk\" (UID: \"345b270a-a286-4873-8bf6-4b953750774b\") " pod="kube-system/cilium-472fk"
Feb 9 10:10:13.863147 kubelet[1981]: I0209 10:10:13.862955 1981 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/345b270a-a286-4873-8bf6-4b953750774b-cilium-run\") pod \"cilium-472fk\" (UID: \"345b270a-a286-4873-8bf6-4b953750774b\") " pod="kube-system/cilium-472fk"
Feb 9 10:10:13.863147 kubelet[1981]: I0209 10:10:13.863019 1981 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/345b270a-a286-4873-8bf6-4b953750774b-cilium-ipsec-secrets\") pod \"cilium-472fk\" (UID: \"345b270a-a286-4873-8bf6-4b953750774b\") " pod="kube-system/cilium-472fk"
Feb 9 10:10:13.863279 kubelet[1981]: I0209 10:10:13.863132 1981 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/345b270a-a286-4873-8bf6-4b953750774b-hubble-tls\") pod \"cilium-472fk\" (UID: \"345b270a-a286-4873-8bf6-4b953750774b\") " pod="kube-system/cilium-472fk"
Feb 9 10:10:14.085966 kubelet[1981]: E0209 10:10:14.085921 1981 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 10:10:14.086468 env[1141]: time="2024-02-09T10:10:14.086415805Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-472fk,Uid:345b270a-a286-4873-8bf6-4b953750774b,Namespace:kube-system,Attempt:0,}"
Feb 9 10:10:14.097725 env[1141]: time="2024-02-09T10:10:14.097658162Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 9 10:10:14.097725 env[1141]: time="2024-02-09T10:10:14.097701523Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 9 10:10:14.097725 env[1141]: time="2024-02-09T10:10:14.097713083Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 9 10:10:14.098016 env[1141]: time="2024-02-09T10:10:14.097981243Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/3ca5db3cf3dd90a5e96a8381c31a88e6deb38897bdc35fd4830c55434d270521 pid=3792 runtime=io.containerd.runc.v2
Feb 9 10:10:14.108090 systemd[1]: Started cri-containerd-3ca5db3cf3dd90a5e96a8381c31a88e6deb38897bdc35fd4830c55434d270521.scope.
Feb 9 10:10:14.139675 env[1141]: time="2024-02-09T10:10:14.139560500Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-472fk,Uid:345b270a-a286-4873-8bf6-4b953750774b,Namespace:kube-system,Attempt:0,} returns sandbox id \"3ca5db3cf3dd90a5e96a8381c31a88e6deb38897bdc35fd4830c55434d270521\""
Feb 9 10:10:14.141025 kubelet[1981]: E0209 10:10:14.140859 1981 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 10:10:14.147535 env[1141]: time="2024-02-09T10:10:14.147494166Z" level=info msg="CreateContainer within sandbox \"3ca5db3cf3dd90a5e96a8381c31a88e6deb38897bdc35fd4830c55434d270521\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Feb 9 10:10:14.156505 env[1141]: time="2024-02-09T10:10:14.156462075Z" level=info msg="CreateContainer within sandbox \"3ca5db3cf3dd90a5e96a8381c31a88e6deb38897bdc35fd4830c55434d270521\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"ec9089bc927a52e3f322fd5e15b0f5453062ec03180b81e136a389ab16570ff7\""
Feb 9 10:10:14.157140 env[1141]: time="2024-02-09T10:10:14.157107797Z" level=info msg="StartContainer for \"ec9089bc927a52e3f322fd5e15b0f5453062ec03180b81e136a389ab16570ff7\""
Feb 9 10:10:14.171166 systemd[1]: Started cri-containerd-ec9089bc927a52e3f322fd5e15b0f5453062ec03180b81e136a389ab16570ff7.scope.
Feb 9 10:10:14.208275 env[1141]: time="2024-02-09T10:10:14.208233085Z" level=info msg="StartContainer for \"ec9089bc927a52e3f322fd5e15b0f5453062ec03180b81e136a389ab16570ff7\" returns successfully"
Feb 9 10:10:14.216051 systemd[1]: cri-containerd-ec9089bc927a52e3f322fd5e15b0f5453062ec03180b81e136a389ab16570ff7.scope: Deactivated successfully.
Feb 9 10:10:14.240745 env[1141]: time="2024-02-09T10:10:14.240699712Z" level=info msg="shim disconnected" id=ec9089bc927a52e3f322fd5e15b0f5453062ec03180b81e136a389ab16570ff7
Feb 9 10:10:14.240745 env[1141]: time="2024-02-09T10:10:14.240747312Z" level=warning msg="cleaning up after shim disconnected" id=ec9089bc927a52e3f322fd5e15b0f5453062ec03180b81e136a389ab16570ff7 namespace=k8s.io
Feb 9 10:10:14.240970 env[1141]: time="2024-02-09T10:10:14.240758712Z" level=info msg="cleaning up dead shim"
Feb 9 10:10:14.247960 env[1141]: time="2024-02-09T10:10:14.247909575Z" level=warning msg="cleanup warnings time=\"2024-02-09T10:10:14Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3876 runtime=io.containerd.runc.v2\n"
Feb 9 10:10:14.750376 kubelet[1981]: E0209 10:10:14.750272 1981 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 10:10:14.755542 env[1141]: time="2024-02-09T10:10:14.755433121Z" level=info msg="CreateContainer within sandbox \"3ca5db3cf3dd90a5e96a8381c31a88e6deb38897bdc35fd4830c55434d270521\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Feb 9 10:10:14.763306 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1266464837.mount: Deactivated successfully.
Feb 9 10:10:14.764984 env[1141]: time="2024-02-09T10:10:14.764942712Z" level=info msg="CreateContainer within sandbox \"3ca5db3cf3dd90a5e96a8381c31a88e6deb38897bdc35fd4830c55434d270521\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"090bc6867d758f2b27cdd2b8a43b710b361df7492f3cd040183b3afa69768feb\""
Feb 9 10:10:14.765403 env[1141]: time="2024-02-09T10:10:14.765359754Z" level=info msg="StartContainer for \"090bc6867d758f2b27cdd2b8a43b710b361df7492f3cd040183b3afa69768feb\""
Feb 9 10:10:14.782591 systemd[1]: Started cri-containerd-090bc6867d758f2b27cdd2b8a43b710b361df7492f3cd040183b3afa69768feb.scope.
Feb 9 10:10:14.810375 env[1141]: time="2024-02-09T10:10:14.810320661Z" level=info msg="StartContainer for \"090bc6867d758f2b27cdd2b8a43b710b361df7492f3cd040183b3afa69768feb\" returns successfully"
Feb 9 10:10:14.818010 systemd[1]: cri-containerd-090bc6867d758f2b27cdd2b8a43b710b361df7492f3cd040183b3afa69768feb.scope: Deactivated successfully.
Feb 9 10:10:14.836866 env[1141]: time="2024-02-09T10:10:14.836818748Z" level=info msg="shim disconnected" id=090bc6867d758f2b27cdd2b8a43b710b361df7492f3cd040183b3afa69768feb
Feb 9 10:10:14.836866 env[1141]: time="2024-02-09T10:10:14.836861748Z" level=warning msg="cleaning up after shim disconnected" id=090bc6867d758f2b27cdd2b8a43b710b361df7492f3cd040183b3afa69768feb namespace=k8s.io
Feb 9 10:10:14.837057 env[1141]: time="2024-02-09T10:10:14.836871748Z" level=info msg="cleaning up dead shim"
Feb 9 10:10:14.842949 env[1141]: time="2024-02-09T10:10:14.842917768Z" level=warning msg="cleanup warnings time=\"2024-02-09T10:10:14Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3938 runtime=io.containerd.runc.v2\n"
Feb 9 10:10:15.454742 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-090bc6867d758f2b27cdd2b8a43b710b361df7492f3cd040183b3afa69768feb-rootfs.mount: Deactivated successfully.
Feb 9 10:10:15.563872 kubelet[1981]: I0209 10:10:15.563839 1981 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="c5f66be5-6047-472c-b8e5-d81900c2a4e0" path="/var/lib/kubelet/pods/c5f66be5-6047-472c-b8e5-d81900c2a4e0/volumes"
Feb 9 10:10:15.615276 kubelet[1981]: E0209 10:10:15.615252 1981 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 9 10:10:15.753839 kubelet[1981]: E0209 10:10:15.752997 1981 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 10:10:15.757982 env[1141]: time="2024-02-09T10:10:15.756798457Z" level=info msg="CreateContainer within sandbox \"3ca5db3cf3dd90a5e96a8381c31a88e6deb38897bdc35fd4830c55434d270521\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Feb 9 10:10:15.774174 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2319640579.mount: Deactivated successfully.
Feb 9 10:10:15.776840 env[1141]: time="2024-02-09T10:10:15.776786406Z" level=info msg="CreateContainer within sandbox \"3ca5db3cf3dd90a5e96a8381c31a88e6deb38897bdc35fd4830c55434d270521\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"badd9951a1037b75ec0c0bc07cc15fd4803a1685a118deecb6e9eec1c8c3e6d1\""
Feb 9 10:10:15.777332 env[1141]: time="2024-02-09T10:10:15.777302128Z" level=info msg="StartContainer for \"badd9951a1037b75ec0c0bc07cc15fd4803a1685a118deecb6e9eec1c8c3e6d1\""
Feb 9 10:10:15.793146 systemd[1]: Started cri-containerd-badd9951a1037b75ec0c0bc07cc15fd4803a1685a118deecb6e9eec1c8c3e6d1.scope.
Feb 9 10:10:15.834457 systemd[1]: cri-containerd-badd9951a1037b75ec0c0bc07cc15fd4803a1685a118deecb6e9eec1c8c3e6d1.scope: Deactivated successfully.
Feb 9 10:10:15.838168 env[1141]: time="2024-02-09T10:10:15.837931017Z" level=info msg="StartContainer for \"badd9951a1037b75ec0c0bc07cc15fd4803a1685a118deecb6e9eec1c8c3e6d1\" returns successfully"
Feb 9 10:10:15.839974 env[1141]: time="2024-02-09T10:10:15.839893184Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod345b270a_a286_4873_8bf6_4b953750774b.slice/cri-containerd-badd9951a1037b75ec0c0bc07cc15fd4803a1685a118deecb6e9eec1c8c3e6d1.scope/memory.events\": no such file or directory"
Feb 9 10:10:15.861294 env[1141]: time="2024-02-09T10:10:15.861253418Z" level=info msg="shim disconnected" id=badd9951a1037b75ec0c0bc07cc15fd4803a1685a118deecb6e9eec1c8c3e6d1
Feb 9 10:10:15.861532 env[1141]: time="2024-02-09T10:10:15.861511698Z" level=warning msg="cleaning up after shim disconnected" id=badd9951a1037b75ec0c0bc07cc15fd4803a1685a118deecb6e9eec1c8c3e6d1 namespace=k8s.io
Feb 9 10:10:15.861597 env[1141]: time="2024-02-09T10:10:15.861583019Z" level=info msg="cleaning up dead shim"
Feb 9 10:10:15.870122 env[1141]: time="2024-02-09T10:10:15.870084088Z" level=warning msg="cleanup warnings time=\"2024-02-09T10:10:15Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3995 runtime=io.containerd.runc.v2\n"
Feb 9 10:10:16.454788 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-badd9951a1037b75ec0c0bc07cc15fd4803a1685a118deecb6e9eec1c8c3e6d1-rootfs.mount: Deactivated successfully.
Feb 9 10:10:16.756327 kubelet[1981]: E0209 10:10:16.756294 1981 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 10:10:16.758838 env[1141]: time="2024-02-09T10:10:16.758396482Z" level=info msg="CreateContainer within sandbox \"3ca5db3cf3dd90a5e96a8381c31a88e6deb38897bdc35fd4830c55434d270521\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Feb 9 10:10:16.772301 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2890615747.mount: Deactivated successfully.
Feb 9 10:10:16.774058 env[1141]: time="2024-02-09T10:10:16.774013498Z" level=info msg="CreateContainer within sandbox \"3ca5db3cf3dd90a5e96a8381c31a88e6deb38897bdc35fd4830c55434d270521\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"ee8ac863599d52af97a95740cab5ea2489aad4fdddcc57826858ac22fec2cd7d\""
Feb 9 10:10:16.774541 env[1141]: time="2024-02-09T10:10:16.774465660Z" level=info msg="StartContainer for \"ee8ac863599d52af97a95740cab5ea2489aad4fdddcc57826858ac22fec2cd7d\""
Feb 9 10:10:16.791184 systemd[1]: Started cri-containerd-ee8ac863599d52af97a95740cab5ea2489aad4fdddcc57826858ac22fec2cd7d.scope.
Feb 9 10:10:16.824198 env[1141]: time="2024-02-09T10:10:16.824157920Z" level=info msg="StartContainer for \"ee8ac863599d52af97a95740cab5ea2489aad4fdddcc57826858ac22fec2cd7d\" returns successfully"
Feb 9 10:10:16.826952 systemd[1]: cri-containerd-ee8ac863599d52af97a95740cab5ea2489aad4fdddcc57826858ac22fec2cd7d.scope: Deactivated successfully.
Feb 9 10:10:16.851062 env[1141]: time="2024-02-09T10:10:16.851007937Z" level=info msg="shim disconnected" id=ee8ac863599d52af97a95740cab5ea2489aad4fdddcc57826858ac22fec2cd7d
Feb 9 10:10:16.851062 env[1141]: time="2024-02-09T10:10:16.851055697Z" level=warning msg="cleaning up after shim disconnected" id=ee8ac863599d52af97a95740cab5ea2489aad4fdddcc57826858ac22fec2cd7d namespace=k8s.io
Feb 9 10:10:16.851062 env[1141]: time="2024-02-09T10:10:16.851067097Z" level=info msg="cleaning up dead shim"
Feb 9 10:10:16.858208 env[1141]: time="2024-02-09T10:10:16.858171123Z" level=warning msg="cleanup warnings time=\"2024-02-09T10:10:16Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4050 runtime=io.containerd.runc.v2\n"
Feb 9 10:10:17.454886 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ee8ac863599d52af97a95740cab5ea2489aad4fdddcc57826858ac22fec2cd7d-rootfs.mount: Deactivated successfully.
Feb 9 10:10:17.640338 kubelet[1981]: I0209 10:10:17.640291 1981 setters.go:552] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-02-09T10:10:17Z","lastTransitionTime":"2024-02-09T10:10:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Feb 9 10:10:17.760657 kubelet[1981]: E0209 10:10:17.760627 1981 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 10:10:17.763978 env[1141]: time="2024-02-09T10:10:17.763927764Z" level=info msg="CreateContainer within sandbox \"3ca5db3cf3dd90a5e96a8381c31a88e6deb38897bdc35fd4830c55434d270521\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Feb 9 10:10:17.780220 env[1141]: time="2024-02-09T10:10:17.780169786Z" level=info msg="CreateContainer within sandbox \"3ca5db3cf3dd90a5e96a8381c31a88e6deb38897bdc35fd4830c55434d270521\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"ec0307192fa638d7438939730d1320c680a3765831422bbc156e4512270e0dbe\""
Feb 9 10:10:17.780694 env[1141]: time="2024-02-09T10:10:17.780664708Z" level=info msg="StartContainer for \"ec0307192fa638d7438939730d1320c680a3765831422bbc156e4512270e0dbe\""
Feb 9 10:10:17.794968 systemd[1]: Started cri-containerd-ec0307192fa638d7438939730d1320c680a3765831422bbc156e4512270e0dbe.scope.
Feb 9 10:10:17.832246 env[1141]: time="2024-02-09T10:10:17.832196503Z" level=info msg="StartContainer for \"ec0307192fa638d7438939730d1320c680a3765831422bbc156e4512270e0dbe\" returns successfully"
Feb 9 10:10:18.074835 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106(gcm-aes-ce)))
Feb 9 10:10:18.765041 kubelet[1981]: E0209 10:10:18.765011 1981 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 10:10:20.086978 kubelet[1981]: E0209 10:10:20.086939 1981 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 10:10:20.703309 systemd-networkd[1052]: lxc_health: Link UP
Feb 9 10:10:20.711843 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Feb 9 10:10:20.712182 systemd-networkd[1052]: lxc_health: Gained carrier
Feb 9 10:10:22.087681 kubelet[1981]: E0209 10:10:22.087641 1981 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 10:10:22.099948 systemd-networkd[1052]: lxc_health: Gained IPv6LL
Feb 9 10:10:22.105964 kubelet[1981]: I0209 10:10:22.105913 1981 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-472fk" podStartSLOduration=9.10587713 podCreationTimestamp="2024-02-09 10:10:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 10:10:18.78062469 +0000 UTC m=+83.311254710" watchObservedRunningTime="2024-02-09 10:10:22.10587713 +0000 UTC m=+86.636507150"
Feb 9 10:10:22.772757 kubelet[1981]: E0209 10:10:22.772702 1981 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 10:10:23.562708 kubelet[1981]: E0209 10:10:23.562654 1981 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 10:10:23.773491 kubelet[1981]: E0209 10:10:23.773445 1981 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 10:10:24.990925 systemd[1]: run-containerd-runc-k8s.io-ec0307192fa638d7438939730d1320c680a3765831422bbc156e4512270e0dbe-runc.c23tbS.mount: Deactivated successfully.
Feb 9 10:10:25.561154 kubelet[1981]: E0209 10:10:25.561122 1981 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 10:10:27.099846 systemd[1]: run-containerd-runc-k8s.io-ec0307192fa638d7438939730d1320c680a3765831422bbc156e4512270e0dbe-runc.pK2cJY.mount: Deactivated successfully.
Feb 9 10:10:27.156803 sshd[3762]: pam_unix(sshd:session): session closed for user core
Feb 9 10:10:27.159957 systemd[1]: sshd@23-10.0.0.132:22-10.0.0.1:58612.service: Deactivated successfully.
Feb 9 10:10:27.160660 systemd[1]: session-24.scope: Deactivated successfully.
Feb 9 10:10:27.161182 systemd-logind[1130]: Session 24 logged out. Waiting for processes to exit.
Feb 9 10:10:27.161781 systemd-logind[1130]: Removed session 24.